MySQL Unknown column in 'on clause' :(
My query looks like:
SELECT а.*, m.username, m.picture, m.picture_active FROM questions_answer AS а INNER JOIN members AS m ON а.poster_id=m.member_id INNER JOIN questions AS q ON q.question_id=a.question_id ORDER BY a.postdate DESC
I'm getting the error:
Unknown column 'a.question_id' in 'on clause'
I don't know what is wrong. Please help me with this.
Table questions is:

CREATE TABLE IF NOT EXISTS `questions` (
  `question_id` int(9) unsigned NOT NULL AUTO_INCREMENT,
  `member_id` int(9) unsigned NOT NULL DEFAULT '0',
  `question` text NOT NULL,
  `postdate` int(10) unsigned NOT NULL DEFAULT '0',
  `active` tinyint(1) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`question_id`),
  KEY `member_id` (`member_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
and questions_answer is:

CREATE TABLE IF NOT EXISTS `questions_answer` (
  `answer_id` bigint(12) unsigned NOT NULL AUTO_INCREMENT,
  `question_id` int(9) unsigned NOT NULL,
  `poster_id` int(9) unsigned NOT NULL,
  `body` text NOT NULL,
  `postdate` int(9) unsigned NOT NULL,
  PRIMARY KEY (`answer_id`),
  KEY `question_id` (`question_id`),
  KEY `poster_id` (`poster_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
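The members table itself isn't posted in the question. A hypothetical minimal definition, consistent with the columns the query references (member_id, username, picture, picture_active) but with guessed column types, might look like:

```sql
-- Hypothetical sketch only: the real `members` DDL was not posted.
-- Column types are assumptions inferred from the query's SELECT/ON clauses.
CREATE TABLE IF NOT EXISTS `members` (
  `member_id` int(9) unsigned NOT NULL AUTO_INCREMENT,
  `username` varchar(64) NOT NULL,
  `picture` varchar(255) NOT NULL DEFAULT '',
  `picture_active` tinyint(1) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`member_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```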
Source: https://stackoverflow.com/questions/18814468
Updated: 2022-11-19 11:11
Accepted answer
This is a known bug which should be fixed with Cassandra-6309
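Separately from the quoted answer, the query as posted appears to mix two visually identical but distinct characters: a Cyrillic 'а' in the alias declaration (`AS а`) and first ON clause, and a Latin 'a' in `a.question_id` and the ORDER BY. MySQL would then see `a` as an undeclared alias, producing exactly this "Unknown column" error. This is an observation about the posted text, not the accepted fix; a version using one plain ASCII alias throughout would be:

```sql
-- Same query with a single, consistent ASCII alias `a` everywhere
-- (the posted query seems to mix Cyrillic 'а' and Latin 'a',
-- which MySQL treats as two different identifiers):
SELECT a.*, m.username, m.picture, m.picture_active
FROM questions_answer AS a
INNER JOIN members AS m ON a.poster_id = m.member_id
INNER JOIN questions AS q ON q.question_id = a.question_id
ORDER BY a.postdate DESC;
```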