Logback Best Practices and Usage Guide

1. Overview

1) Use separate logging configurations for dev (development), test (testing), pre-release (staging/pre-production) and production environments, selected automatically from system properties or environment variables.

2) Records that must be written to log files in bulk should be written asynchronously by a dedicated thread.

2. Appender categories

There are five basic file appenders:

FILE_EXCEPTION (exception log, covering ERROR and WARN)

FILE_APP (application log, covering loggers under the application's own package plus other loggers above DEBUG level, i.e. INFO and higher)

FILE_INFO (general information log)

FILE_DEBUG (debug log)

FILE_TRACE (trace log)

plus console output:

SYSOUT (console output, which may include all of the above; it appears as STDOUT in the reference configuration below)

Extended appenders include asynchronous appenders and appenders for special business logs.

Example (illustrating the difference between FILE_EXCEPTION, FILE_APP, FILE_INFO, and so on):

Suppose the application's main package is cn.zollty.demo,

ROOT_LEVEL is TRACE and the application log level is DEBUG,

and the following log statements are executed:

Logger loggerA = LoggerFactory.getLogger(cn.zollty.demo.Tests.class);
loggerA.trace("--------");
loggerA.debug("--------");
loggerA.info("--------");
loggerA.warn("--------");
loggerA.error("--------");

Logger loggerB = LoggerFactory.getLogger(org.apache.kafka.Consumer.class);
loggerB.trace("--------");
loggerB.debug("--------");
loggerB.info("--------");
loggerB.warn("--------");
loggerB.error("--------");

Then the exception log (FILE_EXCEPTION) receives:

loggerA.warn("--------");

loggerA.error("--------");

loggerB.warn("--------");

loggerB.error("--------");

The console (SYSOUT) receives:

loggerA.debug("--------");

loggerA.info("--------");

loggerA.warn("--------");

loggerA.error("--------");

loggerB.trace("--------");

loggerB.debug("--------");

loggerB.info("--------");

loggerB.warn("--------");

loggerB.error("--------");

The application log (FILE_APP) receives:

loggerA.debug("--------");

loggerA.info("--------");

loggerA.warn("--------");

loggerA.error("--------");

loggerB.info("--------");

loggerB.warn("--------");

loggerB.error("--------");

3. Log archiving and retention

1) Archive and compress log files periodically.

2) The longer logs are retained the better, but the maximum total size kept on the local machine (current log files plus archived/compressed files) is capped as follows:

  • Exception logs (ERROR and WARN): at most 10 GB (production and pre-production must retain at least 30 days)

  • App logs: at most 10 GB (production and pre-production must retain at least 7 days)

  • Trace logs: at most 1 GB (for development and testing only; generally not used in production)

  • Debug and info logs: at most 5 GB (rarely needed; the app log is usually sufficient)

  • Other standalone logs: as the business requires, but in principle at most 10 GB

Logs that exceed the local retention limit are either deleted or migrated elsewhere, depending on business requirements.

4. Default log settings per environment

Development environment

1) Default log levels: TRACE for the app package, DEBUG for the ROOT logger.

2) System.out console output is enabled.

3) error.log is enabled for errors and warnings, and app.log for application logs (logs under the app package plus other logs at INFO level and above).

Test environment

1) Default log levels: DEBUG for the app package, DEBUG for the ROOT logger.

2) System.out console output is disabled.

3) error.log is enabled for errors and warnings, and app.log for application logs (logs under the app package plus other logs at INFO level and above).

Production environment

1) Default log levels: DEBUG for the app package, INFO for the ROOT logger.

2) System.out console output is disabled.

3) error.log is enabled for errors and warnings, and app.log for application logs (logs under the app package plus other logs at INFO level and above).

5. Selecting the log configuration automatically by environment

Key point 1: define and read variables with Logback's property mechanism.

For example, the following property definitions:

<!-- Deployment environment type: dev, test, product -->
<property name="DEPLOY_ENV" value="${deploy.env:-dev}" />

<!-- Log path. This is a relative path: a web project run in Eclipse writes to ./logs/ under the current directory, while a deployment under Tomcat on Linux writes to tomcat/logs/ -->
<property name="LOG_HOME" value="${catalina.base:-.}/logs" />

<!-- Maximum log file size; files larger than this are rolled and compressed -->
<property name="LOG_MAX_SIZE" value="100MB" />

<!-- Log output patterns -->
<property name="LOG_COMMON_PATTERN" value="%d{HH:mm:ss.SSS} [%thread] [%level] %logger - %msg%n" />
<property name="LOG_DEV_PATTERN" value="%d{HH:mm:ss.SSS} [%thread] %-5level %logger{48}:%line - %msg%n" />

<!-- Root log level -->
<property name="ROOT_LEVEL" value="${log.root.level:-DEBUG}" />

<!-- APP log level -->
<property name="APP_LEVEL" value="${log.app.level:-TRACE}" />

<!-- APP package prefix: cn.zollty.demo -->
<property name="APP_PACKAGE" value="cn.zollty.demo" />

Here ${deploy.env:-dev} means: if deploy.env is not defined (e.g. as a JVM system property), the default value dev is used. For example, starting the JVM with -Ddeploy.env=product switches the configuration to the production profile.

A small trick: you can define a class like the one below and have it set variable values before Logback initializes the rest of the configuration:

<statusListener class="cn.zollty.commons.logbackext.InitConfigOnConsoleStatusListener" />

This class extends ch.qos.logback.core.status.OnConsoleStatusListener.
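
The article does not show the listener's source. Below is a minimal sketch of what such a class might look like, assuming it does nothing more than seed default system properties in start() before the remaining configuration elements are parsed; the property names and defaults are illustrative:

package cn.zollty.commons.logbackext;

import ch.qos.logback.core.status.OnConsoleStatusListener;

// Hypothetical sketch: registered as a Logback status listener so that it runs
// early, and used here to seed default system properties before the rest of
// the configuration is processed.
public class InitConfigOnConsoleStatusListener extends OnConsoleStatusListener {

    @Override
    public void start() {
        super.start();
        // Assumption: only set defaults when the properties were not supplied
        // externally (e.g. via -Ddeploy.env=product on the command line).
        setIfAbsent("deploy.env", "dev");
        setIfAbsent("log.root.level", "DEBUG");
        setIfAbsent("log.app.level", "TRACE");
    }

    private static void setIfAbsent(String key, String defaultValue) {
        if (System.getProperty(key) == null) {
            System.setProperty(key, defaultValue);
        }
    }
}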

Key point 2: use Logback's if-then conditional syntax (conditional processing requires the Janino library on the classpath).

<root level="${ROOT_LEVEL}">
  <!-- Required: exception log -->
  <appender-ref ref="FILE_EXCEPTION" />
  <!-- Required: app log -->
  <appender-ref ref="FILE_APP" />
  <if condition='p("DEPLOY_ENV").contains("dev")'>
    <then>
      <appender-ref ref="STDOUT" />
    </then>
  </if>
</root>

Reference configuration:

logback.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="60 seconds" debug="false">

  <statusListener class="cn.zollty.commons.logbackext.InitConfigOnConsoleStatusListener" />

  <!-- Deployment environment type: dev, test, product -->
  <property name="DEPLOY_ENV" value="${deploy.env:-dev}" />
  <!-- Log path (relative): ./logs/ when run in Eclipse, tomcat/logs/ when deployed under Tomcat on Linux -->
  <property name="LOG_HOME" value="${catalina.base:-.}/logs" />

  <!-- Maximum log file size; larger files are rolled and compressed -->
  <property name="LOG_MAX_SIZE" value="100MB" />
  <!-- Log output patterns -->
  <property name="LOG_COMMON_PATTERN" value="%d{HH:mm:ss.SSS} [%thread] [%level] %logger - %msg%n" />
  <property name="LOG_DEV_PATTERN" value="%d{HH:mm:ss.SSS} [%thread] %-5level %logger{48}:%line - %msg%n" />
  <!-- Root log level -->
  <property name="ROOT_LEVEL" value="${log.root.level:-DEBUG}" />
  <!-- APP log level -->
  <property name="APP_LEVEL" value="${log.app.level:-TRACE}" />
  <!-- APP package prefix: cn.zollty.demo -->
  <property name="APP_PACKAGE" value="cn.zollty.demo" />

  <include resource="includedConfig.xml" />

  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>${LOG_DEV_PATTERN}</pattern>
    </encoder>
  </appender>

  <appender name="FILTER-DATA" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_HOME}/filter.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_HOME}/filter/filter-%d{yyyy-MM-dd}-%i.log.zip</fileNamePattern>
      <maxHistory>90</maxHistory>
      <TimeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
        <MaxFileSize>100MB</MaxFileSize>
      </TimeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>
    <encoder>
      <pattern>${LOG_COMMON_PATTERN}</pattern>
    </encoder>
  </appender>

  <appender name="ASYNC1" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="FILTER-DATA" />
  </appender>

  <include resource="special_log_level.xml" />

  <logger name="${APP_PACKAGE}" level="${APP_LEVEL}" />

  <logger name="FILTER-LOGGER" level="${APP_LEVEL}" additivity="false">
    <appender-ref ref="ASYNC1" />
  </logger>

  <root level="${ROOT_LEVEL}">
    <!-- Required: exception log -->
    <appender-ref ref="FILE_EXCEPTION" />
    <!-- Required: app log -->
    <appender-ref ref="FILE_APP" />

    <!-- Optional: show all debug or trace info -->
    <!-- <appender-ref ref="FILE_DEBUG"/> -->
    <!-- <appender-ref ref="FILE_TRACE"/> -->

    <if condition='p("DEPLOY_ENV").contains("dev")'>
      <then>
        <appender-ref ref="STDOUT" />
      </then>
    </if>
  </root>

</configuration>
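
For reference, application code reaches the asynchronous FILTER-DATA appender simply by using the logger named FILTER-LOGGER defined above. A minimal SLF4J usage sketch (the class and method names are illustrative); because additivity is false, these records go only to filter.log via ASYNC1:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class FilterDataReporter {

    // Named logger bound to the ASYNC1 -> FILTER-DATA appender chain in logback.xml;
    // additivity="false" keeps these records out of the root appenders.
    private static final Logger FILTER_LOG = LoggerFactory.getLogger("FILTER-LOGGER");

    public void report(String record) {
        // Written asynchronously: the calling thread only enqueues the event.
        FILTER_LOG.info(record);
    }
}

If the write volume is high, AsyncAppender's queueSize and discardingThreshold settings are worth tuning: the default queue holds 256 events, and TRACE/DEBUG/INFO events may be discarded once the queue is nearly full.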

includedConfig.xml

<?xml version="1.0" encoding="UTF-8"?>
<included>

  <!-- WARN and ERROR -->
  <appender name="FILE_EXCEPTION" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>WARN</level>
    </filter>
    <file>${LOG_HOME}/error.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- rollover daily -->
      <fileNamePattern>${LOG_HOME}/error/error-%d{yyyy-MM-dd}-%i.log.zip</fileNamePattern>
      <maxHistory>600</maxHistory>
      <TimeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
        <!-- or whenever the file size reaches 100MB -->
        <MaxFileSize>${LOG_MAX_SIZE}</MaxFileSize>
      </TimeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>
    <encoder>
      <pattern>${LOG_COMMON_PATTERN}</pattern>
    </encoder>
  </appender>

  <!-- INFO or greater -->
  <appender name="FILE_INFO" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>INFO</level>
    </filter>
    <file>${LOG_HOME}/info.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_HOME}/info/info-%d{yyyy-MM-dd}-%i.log.zip</fileNamePattern>
      <maxHistory>50</maxHistory>
      <TimeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
        <MaxFileSize>${LOG_MAX_SIZE}</MaxFileSize>
      </TimeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>
    <encoder>
      <pattern>${LOG_COMMON_PATTERN}</pattern>
    </encoder>
  </appender>

  <!-- DEBUG or greater -->
  <appender name="FILE_DEBUG" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>DEBUG</level>
    </filter>
    <file>${LOG_HOME}/debug.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_HOME}/debug/debug-%d{yyyy-MM-dd}-%i.log.zip</fileNamePattern>
      <maxHistory>50</maxHistory>
      <TimeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
        <MaxFileSize>${LOG_MAX_SIZE}</MaxFileSize>
      </TimeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>
    <encoder>
      <pattern>${LOG_COMMON_PATTERN}</pattern>
    </encoder>
  </appender>

  <!-- TRACE and ALL -->
  <appender name="FILE_TRACE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_HOME}/trace.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_HOME}/trace/trace-%d{yyyy-MM-dd}-%i.log.zip</fileNamePattern>
      <maxHistory>10</maxHistory>
      <TimeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
        <MaxFileSize>${LOG_MAX_SIZE}</MaxFileSize>
      </TimeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>
    <encoder>
      <pattern>${LOG_COMMON_PATTERN}</pattern>
    </encoder>
  </appender>

  <!-- (INFO or greater) or logger name prefix = ${APP_PACKAGE} -->
  <appender name="FILE_APP" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <filter class="cn.zollty.demo.common.PackageOrThresholdFilter">
      <level>INFO</level>
      <prefix>${APP_PACKAGE}</prefix>
    </filter>
    <file>${LOG_HOME}/app.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_HOME}/app/app-%d{yyyy-MM-dd}-%i.log.zip</fileNamePattern>
      <maxHistory>600</maxHistory>
      <TimeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
        <MaxFileSize>${LOG_MAX_SIZE}</MaxFileSize>
      </TimeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>
    <encoder>
      <pattern>${LOG_COMMON_PATTERN}</pattern>
    </encoder>
  </appender>

</included>
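
The FILE_APP appender above relies on a custom filter, cn.zollty.demo.common.PackageOrThresholdFilter, whose source is not shown in the article. A plausible minimal sketch, assuming it accepts any event whose logger name starts with the configured prefix and otherwise behaves like a ThresholdFilter:

package cn.zollty.demo.common;

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;

// Hypothetical sketch: "logger is under the app package" OR "level >= threshold".
public class PackageOrThresholdFilter extends Filter<ILoggingEvent> {

    private Level level = Level.INFO;
    private String prefix;

    @Override
    public FilterReply decide(ILoggingEvent event) {
        if (!isStarted()) {
            return FilterReply.NEUTRAL;
        }
        // Events from the application package always pass.
        if (prefix != null && event.getLoggerName().startsWith(prefix)) {
            return FilterReply.ACCEPT;
        }
        // Otherwise apply the threshold, like ThresholdFilter does.
        return event.getLevel().isGreaterOrEqual(level) ? FilterReply.NEUTRAL : FilterReply.DENY;
    }

    public void setLevel(String level) {
        this.level = Level.toLevel(level, Level.INFO);
    }

    public void setPrefix(String prefix) {
        this.prefix = prefix;
    }
}

Logback's Joran configurator injects the <level> and <prefix> elements through the matching setters, so no further wiring is needed.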

special_log_level.xml

<?xml version="1.0" encoding="UTF-8"?>
<included>

  <logger name="org.apache.zookeeper.ClientCnxn" level="ERROR" />
  <logger name="org.apache.kafka.clients.consumer.internals.ConsumerCoordinator" level="INFO" />
  <logger name="kafka.producer.BrokerPartitionInfo" level="INFO" />
  <logger name="kafka.producer.async.ProducerSendThread" level="INFO" />
  <logger name="kafka.producer.async.DefaultEventHandler" level="INFO" />
  <logger name="org.apache.kafka.common.metrics.Metrics" level="INFO" />
  <logger name="org.apache.kafka.clients.Metadata" level="INFO" />

  <logger name="org.apache.kafka.clients.consumer.internals.AbstractCoordinator" level="INFO" />
  <logger name="org.apache.kafka.clients.consumer.internals.Fetcher" level="INFO" />
  <logger name="org.apache.kafka.clients.NetworkClient" level="INFO" />

</included>

The variables can also be defined in a properties file. Locally it is kept at

src/resources/conf/logback_val.properties

and on the server at

${catalina.base}/conf/logback_val.properties

The configuration then looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="60 seconds" debug="false">

  <if condition='isDefined("catalina.base")'>
    <then>
      <property file="${catalina.base}/conf/logback_val.properties" />
    </then>
    <else>
      <property resource="./conf/logback_val.properties" />
    </else>
  </if>

  ...

logback_val.properties:

# deploy environment: dev, test, pre, prod
deploy.env=dev

# the root log level
log.root.level=DEBUG

# the app log level
log.app.level=TRACE

# whether to ship logs to LogStash via Kafka: "yes" or "no"
log.use.logstash=yes

# LogStash Kafka brokers
log.logstash.kafka=172.16.1.164:9092,172.16.1.165:9092,172.16.1.166:9092

In the same way, a different properties file can be loaded for the dev, test, and prod environments:

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="60 seconds" debug="false">

  <property resource="./conf/logback-${deploy.env}.properties" />

  ...

References:

http://logback.qos.ch/manual/configuration.html#conditional

Original article: https://blog.csdn.net/zollty/article/details/86143193