I am trying to send logs from solr directly to kafka using log4j. While the logs will be printed to stdout, no data arrives in kafka. I am able to push data to kafka with the command line producer.
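For reference, the command-line check mentioned above looks roughly like this (a sketch using the console scripts bundled with 0.8-era Kafka; the broker address and the herpderp topic are taken from my config below):

```shell
# Produce a test message to the same broker/topic the appender targets...
echo "test message from console producer" | \
  bin/kafka-console-producer.sh --broker-list localhost:9092 --topic herpderp

# ...and confirm it arrives (0.8-style consumer reads via ZooKeeper).
bin/kafka-console-consumer.sh --zookeeper localhost:2181 \
  --topic herpderp --from-beginning
```

Messages sent this way show up in the consumer, so the broker and topic themselves seem fine.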
The warnings and errors I am getting:
WARN  - 2015-01-19 12:09:25.545; org.apache.solr.cloud.Overseer$ClusterStateUpdater; Solr cannot talk to ZK, exiting Overseer main queue loop
INFO  - 2015-01-19 12:09:25.552; org.apache.solr.cloud.Overseer$ClusterStateUpdater; Overseer Loop exiting : 10.254.120.50:8900_solr
WARN  - 2015-01-19 12:09:25.554; org.apache.solr.common.cloud.ZkStateReader$2; ZooKeeper watch triggered, but Solr cannot talk to ZK
ERROR - 2015-01-19 12:09:25.560; org.apache.solr.cloud.Overseer$ClusterStateUpdater; could not read the data
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /overseer_elect/leader
My log4j.properties file:
solr.log=/home/solradmin/solr/latest/logs/
log4j.rootLogger=INFO, file, KAFKA
log4j.logger.KAFKA=INFO, file
log4j.logger.solr=INFO, KAFKA

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n

log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
log4j.appender.KAFKA.layout.ConversionPattern=%-5p: %c - %m%n
log4j.appender.KAFKA.BrokerList=localhost:9092
log4j.appender.KAFKA.Topic=herpderp

log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.MaxFileSize=100MB
log4j.appender.file.MaxBackupIndex=9
log4j.appender.file.File=${solr.log}/solr.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%-5p - %d{yyyy-MM-dd HH:mm:ss.SSS}; %C; %m\n

log4j.logger.org.apache.solr=DEBUG
log4j.logger.org.apache.zookeeper=WARN
log4j.logger.org.apache.hadoop=WARN
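One possibly related pitfall I have read about, though I have not verified it is the cause here: with the root logger routed through the KAFKA appender, the Kafka producer's own log lines (which live under the `kafka` logger namespace) get fed back into the appender, which can loop or wedge it. A commonly suggested guard is to pin the `kafka` loggers to the file appender only, for example:

```properties
# Keep the Kafka client's own logging out of the KAFKA appender to avoid
# a feedback loop (the producer logs while sending, which logs again, ...).
log4j.logger.kafka=WARN, file
log4j.additivity.kafka=false
```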
The log4j documentation does not list kafka as a supported appender. Yet the kafka documentation shows that log4j is easy to configure.
Does log4j require some sort of plugin to support kafka?
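My current understanding, which I have not confirmed, is that `KafkaLog4jAppender` ships inside the Kafka core jar rather than in log4j itself, so that jar and its dependencies (e.g. the Scala runtime) would have to sit on Solr's classpath next to the log4j jars. Something along these lines, with paths and versions purely illustrative for a Kafka 0.8.x install and my Solr layout:

```shell
# Copy the Kafka core jar and Scala runtime into the directory Solr loads
# its logging jars from (example/lib/ext in Solr 4.x-era layouts).
cp $KAFKA_HOME/libs/kafka_2.10-0.8.2.2.jar \
   $KAFKA_HOME/libs/scala-library-2.10.4.jar \
   /home/solradmin/solr/latest/example/lib/ext/
```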
I tried different configurations using the following sources: http://kafka.apache.org/07/quickstart.html and "KafkLog4JAppender not pushing application logs to kafka topic".
Source: https://stackoverflow.com/questions/31572380
Accepted answer
I managed to run it by including the needed libraries. I could have changed the pom and built the final jar with the dependencies bundled, but I preferred not to modify the project.
After building it with
mvn clean install -DskipTests=true -Dmaven.javadoc.skip=true
I ran it with the Java classpath set as follows:
java -cp tez-dist/target/tez-0.7.0/lib/*:tez-dist/target/tez-0.7.0/* org.apache.tez.examples.OrderedWordCount in.txt out