Django - How to add data to a ManyToMany field model?
I have the following models, view and template:
models.py:
```python
class AssetMetadata(models.Model):
    material_id = models.CharField(max_length=256, blank=True)
    series_title = models.CharField(max_length=256, blank=True)
    season_title = models.CharField(max_length=256, blank=True)
    season_number = models.IntegerField(default=0)
    episode_title = models.CharField(max_length=256, blank=True)
    episode_number = models.IntegerField(default=0)
    synopsis = models.TextField(max_length=1024, blank=True)
    ratings = models.CharField(max_length=256, blank=True)

    def __str__(self):
        return self.material_id


class Batch(models.Model):
    # AssetMetadata is defined first (it could also be referenced as the
    # string 'AssetMetadata') so the ManyToManyField can resolve it.
    material_id = models.ManyToManyField(AssetMetadata)
    user = models.ForeignKey(User)

    def __str__(self):
        return 'Batch_' + str(self.pk) + '_' + self.user.username
```
views.py:
```python
def assets_in_repo(request):
    asset_list = AssetMetadata.objects.order_by('id').all()
    page = request.GET.get('page', 1)
    paginator = Paginator(asset_list, 50)
    try:
        assets = paginator.page(page)
    except PageNotAnInteger:
        assets = paginator.page(1)
    except EmptyPage:
        assets = paginator.page(paginator.num_pages)

    if request.method == 'POST':
        batch_list = request.POST.getlist('batch')
        print(batch_list)

    return render(request, 'assets_db/assets.html', {'assets': assets})
```
Snippet from the template:
```html
<form method="post">{% csrf_token %}
  <input type="submit" value="Create Batch" align="right">
  <table class="table table-striped" id="myTable">
    <tr>
      <th>ID</th>
      <th>Material ID</th>
      <th>Series Title</th>
      <th>Season Title</th>
      <th>Season Number</th>
      <th>Episode Title</th>
      <th>Episode Number</th>
      <th>Create Batch</th>
    </tr>
    {% for i in assets %}
    <tr>
      <td>{{ i.pk }}</td>
      <td><a href="/repo/{{ i.id }}">{{ i.material_id }}</a></td>
      <td>{{ i.series_title }}</td>
      <td>{{ i.season_title }}</td>
      <td>{{ i.season_number }}</td>
      <td>{{ i.episode_title }}</td>
      <td>{{ i.episode_number }}</td>
      <td><input type="checkbox" name="batch" value="{{ i.pk }}"></td>
    </tr>
    {% endfor %}
  </table>
</form>
```
I am trying to get the data provided by the checkboxes and save it in the `Batch` model. The user selects assets to create a batch; the form returns the `pk` for each selected `AssetMetadata` asset, and the selections are stored in a list created via `batch_list = request.POST.getlist('batch')`. I want to use the data stored in this list to create a new entry in `Batch`, which then links to the asset `pk`s in `AssetMetadata`. I have been able to do this successfully in the Django admin page, but ideally I would do it in the view.

I have read https://docs.djangoproject.com/en/1.10/ref/models/relations/ and searched Stack Overflow, but I am stumped as to how to do this.
Original question: https://stackoverflow.com/questions/44409524
Accepted answer
Actually it has a `trees` attribute:

```scala
import org.apache.spark.ml.attribute.NominalAttribute
import org.apache.spark.ml.classification.{
  RandomForestClassificationModel, RandomForestClassifier, DecisionTreeClassificationModel
}

val meta = NominalAttribute
  .defaultAttr
  .withName("label")
  .withValues("0.0", "1.0")
  .toMetadata

val data = sqlContext.read.format("libsvm")
  .load("data/mllib/sample_libsvm_data.txt")
  .withColumn("label", $"label".as("label", meta))

val rf: RandomForestClassifier = new RandomForestClassifier()
  .setLabelCol("label")
  .setFeaturesCol("features")

val trees: Array[DecisionTreeClassificationModel] = rf.fit(data).trees.collect {
  case t: DecisionTreeClassificationModel => t
}
```
As you can see, the only problem is getting the types right so that we can actually use them:
```scala
trees.head.transform(data).show(3)
// +-----+--------------------+-------------+-----------+----------+
// |label|            features|rawPrediction|probability|prediction|
// +-----+--------------------+-------------+-----------+----------+
// |  0.0|(692,[127,128,129...|   [33.0,0.0]|  [1.0,0.0]|       0.0|
// |  1.0|(692,[158,159,160...|   [0.0,59.0]|  [0.0,1.0]|       1.0|
// |  1.0|(692,[124,125,126...|   [0.0,59.0]|  [0.0,1.0]|       1.0|
// +-----+--------------------+-------------+-----------+----------+
// only showing top 3 rows
```
Note:
If you work with pipelines you can extract individual trees as well:
```scala
import org.apache.spark.ml.Pipeline

val model = new Pipeline().setStages(Array(rf)).fit(data)

// There is only one stage and we know its type,
// but let's be thorough
val rfModelOption = model.stages.headOption match {
  case Some(m: RandomForestClassificationModel) => Some(m)
  case _ => None
}

val trees = rfModelOption.map {
  _.trees // ... as before
}.getOrElse(Array())
```