Should I remove the temporary file sent from a form in PHP?
I'm playing with various Ajax uploaders. When analyzing their server-side code, I see something like this:
@unlink($_FILES['file']['tmp_name']);
It is either silenced (as above), so it does nothing (in my case), or unsilenced, in which case it throws a warning that access to the temporary folder is prohibited (in my case) and breaks execution of the script.
What am I missing? I was always told that we should not touch temporary files transmitted via a PHP form, because doing so is unnecessary (and sometimes prohibited, as in my case). PHP does all the cleanup when the script ends -- i.e. it removes all uploaded temporary files.
What is the reason for code like the above? Is it for the case when the script breaks -- PHP halts with some critical error and thus isn't able to remove the temporary files? Or is there another reason?
Edit: It is quite a pity that I found this kind of mistake even in the Plupload example code.
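For context, a minimal sketch of the conventional handler pattern (the field name "file" and the "uploads/" directory are hypothetical): you either move the temporary file to a permanent location with move_uploaded_file(), or leave it alone and let PHP's end-of-request cleanup remove it -- a manual unlink() is not needed in either case.

```php
<?php
// Hypothetical upload handler: form field "file", target dir "uploads/".
if (isset($_FILES['file']) && $_FILES['file']['error'] === UPLOAD_ERR_OK) {
    $tmp  = $_FILES['file']['tmp_name'];
    $dest = 'uploads/' . basename($_FILES['file']['name']);

    // is_uploaded_file() guards against forged tmp_name values.
    if (is_uploaded_file($tmp) && move_uploaded_file($tmp, $dest)) {
        echo "Stored as $dest";
    }
    // No unlink() here: if the file was not moved, PHP deletes the
    // temporary file automatically when the request ends.
}
```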
Best answer
Here's one approach (I assume there is a typo and what you want is x3 * (mynet(x2) - mynet(x1))?):

```python
import tensorflow as tf
import numpy as np


class MLP:
    def __init__(self, x1, x2, sizes, activations):
        x_sizes = [tf.shape(x1)[0], tf.shape(x2)[0]]
        # Run both batches through the same (shared) layers in one pass.
        last_out = tf.concat([x1, x2], axis=0)
        self.layers = []
        for l, size in enumerate(sizes[1:]):
            self.layers.append(last_out)
            last_out = tf.layers.dense(
                last_out, size, activation=activations[l],
                kernel_initializer=tf.glorot_uniform_initializer())
        self.layers.append(last_out)
        # Split the joint output back into the x1 and x2 evaluations.
        self.x1_eval, self.x2_eval = tf.split(last_out, x_sizes, axis=0)


def main():
    session = tf.Session()

    dim = 3
    nn_sizes = [dim, 15, 1]
    nn_activations = [tf.nn.tanh, tf.nn.tanh, tf.identity]

    x1 = tf.placeholder(dtype=tf.float32, shape=[None, dim], name='x1')
    x2 = tf.placeholder(dtype=tf.float32, shape=[None, dim], name='x2')
    x3 = tf.placeholder(dtype=tf.float32, shape=[None, 1], name='x3')

    # Build the network inside a named scope so its variables can be
    # collected; fetch them *after* construction, otherwise the
    # collection is empty.
    with tf.variable_scope('mynet'):
        mynet = MLP(x1, x2, nn_sizes, nn_activations)
    w = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='mynet')

    myfun = tf.reduce_sum(tf.multiply(x3, mynet.x2_eval - mynet.x1_eval))
    optimizer = tf.contrib.opt.ScipyOptimizerInterface(myfun, var_list=w)

    n = 1000
    x1_samples = np.random.rand(n, dim)
    x2_samples = np.random.rand(n, dim)
    x3_samples = np.random.rand(n, 1)

    session.run(tf.global_variables_initializer())
    feed = {x1: x1_samples, x2: x2_samples, x3: x3_samples}
    print(session.run(myfun, feed))
    optimizer.minimize(session, feed)
    print(session.run(myfun, feed))


if __name__ == '__main__':
    main()
```
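The key trick in that answer is that one shared network can score both batches in a single forward pass: concatenate x1 and x2 along the batch axis, run the network once, then split the output. A small NumPy sketch (with a hypothetical one-layer stand-in for mynet) shows why this is equivalent to two separate evaluations -- dense layers act row-wise on the batch dimension:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 1))   # shared weights (hypothetical values)
f = lambda x: np.tanh(x @ W)      # one-layer stand-in for mynet

x1 = rng.standard_normal((5, 3))
x2 = rng.standard_normal((5, 3))

joint = f(np.concatenate([x1, x2], axis=0))   # one forward pass
y1, y2 = joint[:5], joint[5:]                 # split back apart

# Same results as evaluating f on x1 and x2 separately.
same = np.allclose(y1, f(x1)) and np.allclose(y2, f(x2))
print(same)
```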