
How to Overwrite log4j Settings For A Specific Class


With a Hadoop HBase cluster, I would like to override log4j so that the log output of one specific class, org.apache.hadoop.hbase.tool.Canary, goes to the console.

Currently, the HBase application's log4j.properties looks like this:

hbase.root.logger=INFO,RFA,RFAE
hbase.log.dir=.
hbase.log.file=hbase.log

# Define the root logger to the system property "hbase.root.logger".
log4j.rootLogger=${hbase.root.logger}

# Logging Threshold
log4j.threshold=ALL

# Rolling File Appender properties
hbase.log.maxfilesize=128MB
hbase.log.maxbackupindex=10
hbase.log.layout=org.apache.log4j.PatternLayout
hbase.log.pattern=%d{ISO8601} %p %c: %m%n

#
# Daily Rolling File Appender
# Hacked to be the Rolling File Appender
# Rolling File Appender
log4j.appender.DRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}

log4j.appender.DRFA.MaxFileSize=${hbase.log.maxfilesize}
log4j.appender.DRFA.MaxBackupIndex=${hbase.log.maxbackupindex}

log4j.appender.DRFA.layout=${hbase.log.layout}
log4j.appender.DRFA.layout.ConversionPattern=${hbase.log.pattern}
log4j.appender.DRFA.Append=true

# Rolling File Appender
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}

log4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}

log4j.appender.RFA.layout=${hbase.log.layout}
log4j.appender.RFA.layout.ConversionPattern=${hbase.log.pattern}
log4j.appender.RFA.Append=true

#
# console
# Add "console" to rootlogger above if you want to use this
#
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n

#
# Error log appender, each log event will include hostname
#
hbase.error.log.file=hbase_error.log
log4j.appender.RFAE=org.apache.log4j.RollingFileAppender
log4j.appender.RFAE.File=${hbase.log.dir}/${hbase.error.log.file}
log4j.appender.RFAE.MaxFileSize=${hbase.log.maxfilesize}
log4j.appender.RFAE.MaxBackupIndex=${hbase.log.maxbackupindex}

log4j.appender.RFAE.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAE.layout.ConversionPattern=%d{ISO8601} data-analytics1-data-namenode-dev-001 %p %c: %m%n

log4j.appender.RFAE.Threshold=ERROR
log4j.appender.RFAE.Append=true

# Custom Logging levels
log4j.logger.org.apache.hadoop.hbase.regionserver.compactions.CompactionProgress=DEBUG
log4j.logger.org.apache.zookeeper=WARN
#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG
log4j.logger.org.apache.hadoop.hbase=INFO
# Make these two classes INFO-level. Make them DEBUG to see more zk debug.
log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=WARN
log4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher=WARN
# Snapshot Debugging
log4j.logger.org.apache.hadoop.hbase.regionserver.snapshot=DEBUG
#log4j.logger.org.apache.hadoop.dfs=DEBUG
# Set this class to log INFO only, otherwise it's OTT

# Uncomment this line to enable tracing on _every_ RPC call (this can be a lot of output)
#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace=DEBUG

# Uncomment the below if you want to remove logging of client region caching
# and scan of .META. messages
# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=INFO
# log4j.logger.org.apache.hadoop.hbase.client.MetaScanner=INFO

Please advise. Thanks!
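For context, the standard log4j 1.x way to do this (a sketch, not taken from the thread) is to give the Canary class its own logger that writes to the `console` appender already defined in the file, and to disable additivity so its messages are not also sent to the root appenders:

```properties
# Route only the Canary tool's output to the console appender defined above.
log4j.logger.org.apache.hadoop.hbase.tool.Canary=INFO,console
# Without this, Canary messages would also propagate to RFA/RFAE via the root logger.
log4j.additivity.org.apache.hadoop.hbase.tool.Canary=false
```

Note that the console appender in this file targets System.err, so the Canary output would appear on stderr rather than stdout.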


Source: https://stackoverflow.com/questions/38560379
Updated: 2023-10-28 08:10

Accepted Answer


It turns out that you can use the "instanceName" property within the JDBC connection string, as documented in the "Named and Multiple SQL Server Instances" section on Microsoft TechNet. What worked in my case was the following string, for virtual server vvv and database instance name iii:

"jdbc:sqlserver://vvv;instanceName=iii"
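As a quick illustration (a sketch, not from the original answer; the class and helper names are hypothetical), the property slots into the URL like this:

```java
public class InstanceNameDemo {
    /**
     * Build a SQL Server JDBC URL that addresses a named instance through the
     * "instanceName" connection property rather than a port number.
     */
    static String buildUrl(String host, String instance) {
        return "jdbc:sqlserver://" + host + ";instanceName=" + instance;
    }

    public static void main(String[] args) {
        // "vvv" and "iii" are the placeholder server/instance names from the answer.
        String url = buildUrl("vvv", "iii");
        System.out.println(url); // prints jdbc:sqlserver://vvv;instanceName=iii
        // A real connection would then be opened with, e.g.:
        // java.sql.DriverManager.getConnection(url, user, password);
    }
}
```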
