TensorFlow Session.run(): 'Tensor' object is not callable
I have an RNN model trained for the PTB example with TensorFlow's ptb_word_lm. Below is the code I'm using to print a few samples to test the trained model. I'm getting the error

TypeError: 'Tensor' object is not callable

when running this code, on the line

probs, state = sess.run([mtest.output_probs(), mtest._final_state], feed_dict=feed_dict)

What exactly causes this error?

Here is the code:
import numpy as np
import os
import tensorflow as tf
from ptb_word_lm import *
from tensorflow.models.rnn.ptb import reader
from tensorflow.python.platform import gfile

data_path = "/home/usr/simple-examples/data/"
raw_data = reader.ptb_raw_data(data_path)
train_data, valid_data, test_data, vocabulary = raw_data
test_path = os.path.join(data_path, "ptb.test.txt")
word_to_id = reader._build_vocab(test_path)

eval_config = get_config()
eval_config.batch_size = 1
eval_config.num_steps = 1

sess = tf.Session()
initializer = tf.random_uniform_initializer(-eval_config.init_scale,
                                            eval_config.init_scale)
test_input = PTBInput(config=eval_config, data=test_data, name="TestInput")
with tf.variable_scope("model", reuse=None, initializer=initializer):
    mtest = PTBModel(is_training=False, config=eval_config, input_=test_input)

sess.run(tf.initialize_all_variables())
saver = tf.train.import_meta_graph('/home/usr/models/medium/model.ckpt-50979.meta')
ckpt = tf.train.get_checkpoint_state('/home/usr/models/medium/')
if ckpt and gfile.Exists(ckpt.model_checkpoint_path):
    msg = 'Reading model parameters from %s' % ckpt.model_checkpoint_path
    print(msg)
    saver.restore(sess, ckpt.model_checkpoint_path)

def pick_from_weight(weight, pows=1.0):
    weight = weight**pows
    t = np.cumsum(weight)
    s = np.sum(weight)
    return int(np.searchsorted(t, np.random.rand(1) * s))

while True:
    number_of_sentences = 10
    sentence_cnt = 0
    text = '\n'
    end_of_sentence_char = word_to_id['<eos>']
    input_char = np.array([[end_of_sentence_char]])
    state = sess.run(mtest.initial_state)
    for attr in mtest.__dict__:
        print attr
    print 'all attributes above'
    while sentence_cnt < number_of_sentences:
        feed_dict = {mtest._input: input_char, mtest.initial_state: state}
        probs, state = sess.run([mtest.output_probs(), mtest._final_state],
                                feed_dict=feed_dict)
        print 'after state'
        sampled_char = pick_from_weight(probs[0])
        print sampled_char
        if sampled_char == end_of_sentence_char:
            text += '.\n'
            sentence_cnt += 1
        else:
            text += ' ' + id_to_word[sampled_char]
        input_char = np.array([[sampled_char]])
    print(text)
    raw_input('press any key to continue ...')
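For context, "'Tensor' object is not callable" is what Python raises when an object that is not a function gets invoked with parentheses. Assuming output_probs is defined as a plain Tensor attribute on PTBModel rather than a method (an assumption, since the class definition is not shown here), writing mtest.output_probs() triggers exactly this error, and passing mtest.output_probs without parentheses to sess.run would avoid it. The mechanism can be reproduced without TensorFlow:

```python
# Minimal sketch: calling a data attribute as if it were a method
# raises the same kind of TypeError seen in the question.
class Model:
    def __init__(self):
        # A data attribute (here a list, standing in for a tf.Tensor),
        # not a method.
        self.output_probs = [0.1, 0.9]

m = Model()
try:
    m.output_probs()               # invoking the attribute -> TypeError
except TypeError as e:
    print(e)                       # e.g. 'list' object is not callable

print(m.output_probs)              # accessing it without () works fine
```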
Source: https://stackoverflow.com/questions/42002083
Best answer
For the statistics question: of course, it can happen; either your data has little noise, or you hit the scenario Clock Slave mentioned in the comments.

For the import of the classifier, you could pickle it (save it as a binary with the pickle module), and then just load it whenever you need it and use the clf.predict() method on the new data:

import pickle

# Do the classification and name the fitted object clf
with open('clf.pickle', 'wb') as file:
    pickle.dump(clf, file, pickle.HIGHEST_PROTOCOL)

And then later you can load it:

import pickle

with open('clf.pickle', 'rb') as file:
    clf = pickle.load(file)

# Now predict on the new dataframe df
pred = clf.predict(df.values)
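As a self-contained illustration of the full save/load round trip, here is a sketch using a hypothetical stand-in predictor so it runs without scikit-learn installed; with a real fitted clf the pickle calls are identical:

```python
import pickle

class ThresholdClassifier:
    """Stand-in for a fitted estimator exposing a predict() method."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, values):
        # Label 1 for values at or above the threshold, else 0.
        return [1 if v >= self.threshold else 0 for v in values]

clf = ThresholdClassifier(threshold=0.5)

# Serialize to bytes; a file opened with 'wb' works the same way.
blob = pickle.dumps(clf, pickle.HIGHEST_PROTOCOL)

# Later (possibly in another process): restore and predict on new data.
restored = pickle.loads(blob)
print(restored.predict([0.2, 0.7, 0.5]))  # [0, 1, 1]
```

Note that unpickling executes arbitrary code from the file, so only load pickles you created yourself or otherwise trust.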