Phoenix build fails: cannot find symbol (Phoenix 4.8.2-HBase-1.2, CDH 5.9.1)
I'm installing Apache Phoenix 4.8.2-HBase-1.2 over my Cloudera Hadoop and HBase installation (i.e. both are installed through CDH 5.9.1).
I followed these instructions to build Phoenix over it: Using Phoenix with Cloudera HBase (installed from repo).
Now when I run
sudo mvn install -DskipTests
I get this error in the phoenix-core compilation:
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Phoenix .................................... SUCCESS [5.569s]
[INFO] Phoenix Core ...................................... FAILURE [2:30.148s]
[INFO] Phoenix - Flume ................................... SKIPPED
[INFO] Phoenix - Pig ..................................... SKIPPED
[INFO] Phoenix Query Server Client ....................... SKIPPED
[INFO] Phoenix Query Server .............................. SKIPPED
[INFO] Phoenix - Pherf ................................... SKIPPED
[INFO] Phoenix - Spark ................................... SKIPPED
[INFO] Phoenix - Hive .................................... SKIPPED
[INFO] Phoenix Client .................................... SKIPPED
[INFO] Phoenix Server .................................... SKIPPED
[INFO] Phoenix Assembly .................................. SKIPPED
[INFO] Phoenix - Tracing Web Application ................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2:37.862s
[INFO] Finished at: Fri Jan 20 13:02:44 IST 2017
[INFO] Final Memory: 68M/714M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.0:compile (default-compile) on project phoenix-core: Compilation failure: Compilation failure:
[ERROR] /opt/apache-phoenix-4.8.2-HBase-1.2-src-copy/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixMRJobUtil.java:[222,29] cannot find symbol
[ERROR]   symbol:   variable QUEUE_NAME
[ERROR]   location: interface org.apache.hadoop.mapreduce.MRJobConfig
[ERROR] /opt/apache-phoenix-4.8.2-HBase-1.2-src-copy/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixMRJobUtil.java:[226,32] cannot find symbol
[ERROR]   symbol:   variable MAP_MEMORY_MB
[ERROR]   location: interface org.apache.hadoop.mapreduce.MRJobConfig
[ERROR] /opt/apache-phoenix-4.8.2-HBase-1.2-src-copy/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixMRJobUtil.java:[227,29] cannot find symbol
[ERROR]   symbol:   variable MAP_JAVA_OPTS
[ERROR]   location: interface org.apache.hadoop.mapreduce.MRJobConfig
[ERROR] /opt/apache-phoenix-4.8.2-HBase-1.2-src-copy/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixMRJobUtil.java:[229,54] cannot find symbol
[ERROR]   symbol:   variable QUEUE_NAME
[ERROR]   location: interface org.apache.hadoop.mapreduce.MRJobConfig
[ERROR] /opt/apache-phoenix-4.8.2-HBase-1.2-src-copy/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixMRJobUtil.java:[230,39] cannot find symbol
[ERROR]   symbol:   variable MAP_MEMORY_MB
[ERROR]   location: interface org.apache.hadoop.mapreduce.MRJobConfig
[ERROR] /opt/apache-phoenix-4.8.2-HBase-1.2-src-copy/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixMRJobUtil.java:[231,39] cannot find symbol
[ERROR]   symbol:   variable MAP_JAVA_OPTS
[ERROR]   location: interface org.apache.hadoop.mapreduce.MRJobConfig
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :phoenix-core
It's not able to find
QUEUE_NAME
MAP_MEMORY_MB
and MAP_JAVA_OPTS
in the org.apache.hadoop.mapreduce.MRJobConfig interface. Doesn't Cloudera Hadoop 2.6 contain these variables? Also, only the
public static void updateCapacityQueueInfo(Configuration conf)
function in PhoenixMRJobUtil.java
uses these variables and causes the compilation failure. This is the function that provides info for the Capacity scheduler. Is it safe to comment out this function and its calls in the Phoenix code, since I'm installing this for local testing only and not for a production environment? Otherwise, what else could be the resolution? Aren't Hadoop 2.6 and Phoenix 4.8.2 compatible?
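As an alternative to commenting the function out entirely, one possible workaround (a sketch, not an official fix) is to replace the missing MRJobConfig constants with their literal property keys. The key strings below are the values those constants carry in Apache Hadoop 2.x, but treat them as an assumption and verify them against your cluster's version; java.util.Properties stands in for Hadoop's Configuration here so the sketch compiles without Hadoop on the classpath, and the values set are placeholders, not Phoenix's actual defaults.

```java
import java.util.Properties;

// Sketch of a possible workaround, NOT the official Phoenix fix: if the
// CDH-bundled MRJobConfig predates these constants, the code can fall back
// to the literal property keys. The strings below are the values the
// constants hold in Apache Hadoop 2.x (an assumption -- verify against
// your cluster). java.util.Properties stands in for Hadoop's Configuration
// so the sketch compiles without Hadoop on the classpath.
public class QueueInfoWorkaround {
    static final String QUEUE_NAME = "mapreduce.job.queuename";
    static final String MAP_MEMORY_MB = "mapreduce.map.memory.mb";
    static final String MAP_JAVA_OPTS = "mapreduce.map.java.opts";

    // Mirrors the shape of a pared-down updateCapacityQueueInfo
    // (queue name plus mapper resources); the values are placeholders.
    static void updateCapacityQueueInfo(Properties conf) {
        conf.setProperty(QUEUE_NAME, "default");
        conf.setProperty(MAP_MEMORY_MB, "1024");
        conf.setProperty(MAP_JAVA_OPTS, "-Xmx768m");
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        updateCapacityQueueInfo(conf);
        System.out.println(conf.getProperty(QUEUE_NAME)); // prints "default"
    }
}
```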
Source: https://stackoverflow.com/questions/41758521
Accepted answer
Let's say that
byte low = 50, high = 100
. The expression
low + high
will first promote both operands to int, then add them, resulting in the value 150 (int).
In version 1, you then cast 150 (int) to byte, which is the value -106 (byte). Overflow. Just as for +, the / operator promotes both sides to int, so it becomes -106 (int), which is -53 (int) when divided by 2. Finally you cast to byte again, ending up with -53 (byte).
In version 2, you divide 150 (int) by 2, and since both sides are already int values, no promotion is done, ending up with 75 (int). Casting that to byte gives you 75 (byte). No overflow.
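The arithmetic above can be checked directly. This small demo (the variable names are mine, not from the original question) reproduces both cast orderings:

```java
// Reproduces the two cast orderings discussed above. "Version 1" casts the
// int sum to byte before dividing; "version 2" divides as int first.
public class BytePromotionDemo {
    public static void main(String[] args) {
        byte low = 50, high = 100;

        int sum = low + high;                       // both promoted to int: 150
        byte wrapped = (byte) (low + high);         // 150 wraps to -106
        byte v1 = (byte) ((byte) (low + high) / 2); // version 1: -106 / 2 = -53
        byte v2 = (byte) ((low + high) / 2);        // version 2: 150 / 2 = 75, fits

        System.out.println(sum + " " + wrapped + " " + v1 + " " + v2); // 150 -106 -53 75
    }
}
```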