Background: after development and testing of a new feature were complete, we started the production rollout. While the third machine was being deployed, monitoring showed the CPU on one machine suddenly spiking to 300% and the number of active Dubbo threads jumping past 1000. We had to stop the release and immediately roll back the affected machine, which then returned to normal. We kept watching the other two machines that had already been deployed; in the end none of them survived, and everything had to be rolled back.
Here is my troubleshooting process.
Monitoring Log Analysis
I started with the application logs around the time of the failure and found many unusually slow calls, with the filterMission method standing out at 30s+ per invocation. For context: filterMission is the highest-QPS interface of this service (roughly 200 million calls per day), which also makes it a prime suspect. So I re-reviewed the filterMission implementation: it has no complex computation, no bulk data processing, and no lock contention, and this release did not touch filterMission at all. I ruled it out as the cause.
The logs also showed many request timeouts, but that only proves the system was overloaded; it does not pinpoint the root cause.
Code Review
With nothing to go on from the application logs, the only option was another code review. I checked for synchronized code paths (mainly to rule out deadlock), looked for code with heavy computation, and diffed this release against the previous version. Nothing stood out. (As it turned out, the review was not careful enough.)
Thread Dump Analysis
At this point the logs were not telling us anything more and blindly reading code was getting nowhere, so the only option left was to redeploy one machine and, once the problem reappeared, have ops dump the application's thread stacks for jstack analysis. Before digging into the dump, let's brush up on the basics.
Thread States
The states in the diagram are:
New: the state of a thread whose Thread object has been created but not yet started;
Runnable: ready-to-run. After thread.start() the thread is Runnable, but it still has to be granted a CPU before it actually runs;
Running: currently executing. When Thread.yield() is called or the time slice is used up, the scheduler picks again. Note: after Thread.yield() the thread gives up the CPU, but the scheduler may immediately hand it a new time slice.
Waiting: calling thread.join(), object.wait(), or LockSupport.park() puts a thread into this state, meaning it is waiting for some resource or condition to wake it up. thread.join() and object.wait() are woken by the object's notify()/notifyAll() or by an interrupt; LockSupport.park() is woken by LockSupport.unpark(). Waking moves the thread back to Runnable, where it competes for the CPU again. (A runnable demo follows this list.)
thread.join(): waits for the thread `thread` to terminate; the caller only proceeds once `thread` has finished. From join()'s implementation we know that after the caller invokes thread.join(), object.wait() is called (putting the caller into the waiting state) only while thread.isAlive() returns true; if thread.start() was never called, or `thread` has already finished, wait() is never invoked. In other words, join() only has an effect on a thread that has been started. If the caller is woken from its wait, it re-checks thread.isAlive(); while that is still true it calls wait() again, looping until `thread` terminates and isAlive() returns false.
object.wait(): puts the calling thread into the waiting state. A thread may only call wait() on an object while holding that object's lock; on entering the wait it releases that lock, though it keeps any locks it holds on other objects. A thread that wants to wake it via notify()/notifyAll() must likewise hold that object's lock.
LockSupport.park(): suspends the current thread and takes it out of scheduling until LockSupport.unpark() makes it schedulable again.
Timed_Waiting: calling Thread.sleep(long), LockSupport.parkNanos(long), thread.join(long), obj.wait(long), and similar methods puts a thread into this state; the only difference from Waiting is that the wait has a time limit.
Thread.sleep(): pauses the current thread for the given number of milliseconds; the thread does not release any locks or other resources it holds.
Blocked: the thread is waiting to acquire a monitor lock, i.e. it tried to enter a synchronized block or method and did not get the lock. (Conceptual diagrams often also draw threads waiting on I/O as "blocked", but note that in an actual thread dump a thread blocked on I/O reports RUNNABLE.) Also note that java.util.concurrent Lock objects never put a thread into this state; they park it, so it shows up as WAITING.
Dead: the thread has finished running, or terminated due to an uncaught exception.
Some of the states above exist only for explanatory purposes and are not real JVM states (e.g. Running/Sleeping). The thread states Java actually defines are:
NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED
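To make this concrete, here is a minimal, self-contained sketch (class and thread names are mine, not from the incident) that reproduces several of these states and prints them via Thread.getState():

import java.util.concurrent.locks.LockSupport;

public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread parked = new Thread(LockSupport::park, "parked");
        Thread sleeper = new Thread(() -> {
            try { Thread.sleep(60_000); } catch (InterruptedException ignored) { }
        }, "sleeper");

        System.out.println(parked.getState());    // NEW: created, start() not yet called

        parked.start();
        sleeper.start();
        Thread.sleep(100);                        // give both threads time to block

        System.out.println(parked.getState());    // WAITING (parking), as in the dump below
        System.out.println(sleeper.getState());   // TIMED_WAITING (sleeping): a wait with a deadline

        LockSupport.unpark(parked);               // a park()-ed thread needs unpark() to wake
        parked.join();                            // main itself WAITs here until 'parked' terminates
        System.out.println(parked.getState());    // TERMINATED

        sleeper.interrupt();                      // wake the sleeper so the JVM can exit promptly
    }
}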
Analyzing the jstack Log
"DubboServerHandler-172.24.16.78:33180-thread-1220" #1700 daemon prio&#61;5 os_prio&#61;0 tid&#61;0x00007f3394988800 nid&#61;0x4aae waiting on condition [0x00007f32d75c0000] java.lang.Thread.State: WAITING (parking)at sun.misc.Unsafe.park(Native Method)- parking to wait for <0x00000000866090c0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)at java.lang.Thread.run(Thread.java:745)
From this entry, thread "DubboServerHandler-172.24.16.78:33180-thread-1220" is in the WAITING state. It is trying to take a task from its pool's work queue, but the queue is empty, so it ends up parked in LockSupport.park(), waiting for a signal that the queue has become non-empty.
Think about it: when does a pool create a new thread to handle a request? From the JDK's ThreadPoolExecutor implementation: when a new task arrives and the pool has fewer threads than corePoolSize, the pool simply creates a new thread to run it, as the sketch below demonstrates.
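A small runnable demonstration of that rule (pool parameters are mine): even when earlier workers are already idle, a pool below corePoolSize keeps creating new threads.

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CorePoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // Same shape as the Dubbo request pool discussed below: core == max, direct hand-off queue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS, new SynchronousQueue<Runnable>());

        for (int i = 0; i < 4; i++) {
            pool.execute(() -> System.out.println(Thread.currentThread().getName()));
            Thread.sleep(50);   // each earlier worker is idle again by now...
        }
        // ...yet four threads exist: below corePoolSize, execute() always starts a new
        // worker instead of handing the task to an idle one.
        System.out.println("threads created: " + pool.getPoolSize());
        pool.shutdown();
    }
}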
According to the jstack log, 195 Dubbo threads were WAITING because they were taking tasks from a ScheduledThreadPoolExecutor's queue. In theory these Dubbo threads only handle client requests and have nothing to do with scheduled tasks, so why would they be fetching tasks from a scheduling pool? I'll leave the question open here, as I don't know the answer; an explanation would be welcome. (One guess, offered without certainty: a worker blocks in take() on the queue of the pool that owns it, and a thread's name comes from the ThreadFactory it was created with, so these could be threads belonging to a ScheduledThreadPoolExecutor that was constructed with a Dubbo-style named thread factory.)
"DubboServerHandler-172.24.16.78:33180-thread-1489" #1636 daemon prio&#61;5 os_prio&#61;0 tid&#61;0x00007f33b0122800 nid&#61;0x48ec waiting on condition [0x00007f32db600000] java.lang.Thread.State: WAITING (parking)at sun.misc.Unsafe.park(Native Method)- parking to wait for <0x0000000089d717a8> (a java.util.concurrent.SynchronousQueue$TransferStack)at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)at java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924)at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)at java.lang.Thread.run(Thread.java:745)
These Dubbo threads (309 of them) were parked while taking a task from their ThreadPoolExecutor: after handling a request they return to the pool and wait for the next one. The request-handling pool here is Dubbo's "fixed" thread pool (the SynchronousQueue in the stack matches its queues=0 configuration; see the sketch below), so by default every new request causes a new thread to be created until the pool hits its configured limit. Only after the maximum thread count is reached are incoming requests handed directly to the threads blocked in getTask(), which is how those threads get reused.
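For reference, this is roughly how Dubbo 2.x builds that pool. The sketch paraphrases com.alibaba.dubbo.common.threadpool.support.fixed.FixedThreadPool from memory, so treat it as an approximation rather than the verbatim source; 'threads' and 'queues' come from the provider configuration.

import java.util.concurrent.*;

public class FixedPoolSketch {
    static Executor fixedPool(int threads, int queues) {
        BlockingQueue<Runnable> queue =
                queues == 0 ? new SynchronousQueue<Runnable>()           // our case: direct hand-off, no queueing
              : queues < 0  ? new LinkedBlockingQueue<Runnable>()        // unbounded queue
                            : new LinkedBlockingQueue<Runnable>(queues); // bounded queue
        // core == max == threads: every request below the limit gets a fresh thread.
        return new ThreadPoolExecutor(threads, threads, 0L, TimeUnit.MILLISECONDS, queue);
    }
}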
"DubboServerHandler-172.24.16.78:33180-thread-236" #1727 daemon prio&#61;5 os_prio&#61;0 tid&#61;0x00007f336403b000 nid&#61;0x4b8b waiting for monitor entry [0x00007f32d58a4000] java.lang.Thread.State: BLOCKED (on object monitor)at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.checkRollover(RollingFileManager.java:149)- waiting to lock <0x0000000085057998> (a org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager)at org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender.append(RollingRandomAccessFileAppender.java:88)at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:155)at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:128)at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:119)at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:390)at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:375)at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:359)at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:349)at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:63)at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:146)at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:1993)at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1852)at org.apache.logging.slf4j.Log4jLogger.info(Log4jLogger.java:179)at com.alibaba.dubbo.common.logger.slf4j.Slf4jLogger.info(Slf4jLogger.java:42)at com.alibaba.dubbo.common.logger.support.FailsafeLogger.info(FailsafeLogger.java:93)at com.dianwoba.universe.dubbo.filter.ResponseFilter$1.run(ResponseFilter.java:116)at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)at java.util.concurrent.FutureTask.run(FutureTask.java:266)at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)at java.lang.Thread.run(Thread.java:745)
"waiting for monitor entry": the thread is sitting in the monitor's EntryList, waiting to acquire the monitor lock.
Background: the synchronization semantics of synchronized in Java are implemented through the object header and a monitor. Each monitor can be owned by at most one thread at a time; any other thread that wants it must wait. A monitor has two queues: the WaitSet and the EntryList. If the owning thread calls Object.wait(), it releases the monitor and joins the WaitSet; its state is WAITING, and jstack shows it as "in Object.wait()". When another thread calls notify()/notifyAll() on the object, the woken thread competes for the monitor again. If a thread tries to enter a synchronized block or method and fails to acquire the monitor, it joins the EntryList; its state is BLOCKED, and jstack shows "waiting for monitor entry".
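A minimal sketch (names are mine) that produces exactly these two queues and states:

public class MonitorStateDemo {
    public static void main(String[] args) throws InterruptedException {
        final Object monitor = new Object();

        // Owns the monitor, then wait()s: releases it and joins the WaitSet -> "in Object.wait()".
        Thread waiter = new Thread(() -> {
            synchronized (monitor) {
                try { monitor.wait(); } catch (InterruptedException ignored) { }
            }
        }, "waiter");
        // Re-acquires the released monitor and holds it for a long time -> the owner.
        Thread holder = new Thread(() -> {
            synchronized (monitor) {
                try { Thread.sleep(60_000); } catch (InterruptedException ignored) { }
            }
        }, "holder");
        // Tries to enter the synchronized block and fails -> EntryList, "waiting for monitor entry".
        Thread contender = new Thread(() -> {
            synchronized (monitor) { }
        }, "contender");

        // Daemon threads, so the JVM can exit as soon as main is done.
        waiter.setDaemon(true); holder.setDaemon(true); contender.setDaemon(true);

        waiter.start();
        Thread.sleep(100);   // let 'waiter' release the monitor inside wait()
        holder.start();
        Thread.sleep(100);
        contender.start();
        Thread.sleep(100);

        System.out.println("waiter:    " + waiter.getState());    // WAITING (WaitSet)
        System.out.println("contender: " + contender.getState()); // BLOCKED (EntryList)
    }
}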
According to the jstack log, 377 Dubbo threads were waiting to lock the resource "0x0000000085057998"; from the stack, they were all competing for the monitor of a RollingRandomAccessFileManager. Let's find the thread that owned that monitor:
"DubboServerHandler-172.24.16.78:33180-thread-429" #553 daemon prio&#61;5 os_prio&#61;0 tid&#61;0x00007f339c09f800 nid&#61;0x4467 waiting for monitor entry [0x00007f331f53a000] java.lang.Thread.State: BLOCKED (on object monitor)at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.checkRollover(RollingFileManager.java:149)- locked <0x0000000085057998> (a org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager)at org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender.append(RollingRandomAccessFileAppender.java:88)at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:155)at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:128)at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:119)at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:390)at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:375)at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:359)at org.apache.logging.log4j.core.config.LoggerConfig.logParent(LoggerConfig.java:381)at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:376)at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:359)at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:349)at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:63)at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:146)at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:1993)at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1852)at org.apache.logging.slf4j.Log4jLogger.warn(Log4jLogger.java:239)at com.alibaba.dubbo.common.logger.slf4j.Slf4jLogger.warn(Slf4jLogger.java:54)at com.alibaba.dubbo.common.logger.support.FailsafeLogger.warn(FailsafeLogger.java:107)at com.alibaba.dubbo.rpc.filter.TimeoutFilter.invoke(TimeoutFilter.java:48)at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)at com.alibaba.dubbo.rpc.protocol.dubbo.filter.TraceFilter.invoke(TraceFilter.java:78)at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)at com.alibaba.dubbo.rpc.filter.ContextFilter.invoke(ContextFilter.java:60)at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)at com.alibaba.dubbo.rpc.filter.GenericFilter.invoke(GenericFilter.java:112)at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)at com.alibaba.dubbo.rpc.filter.ClassLoaderFilter.invoke(ClassLoaderFilter.java:38)at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)at com.alibaba.dubbo.rpc.filter.EchoFilter.invoke(EchoFilter.java:38)at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)at com.alibaba.dubbo.rpc.protocol.dubbo.DubboProtocol$1.reply(DubboProtocol.java:108)at com.alibaba.dubbo.remoting.exchange.support.header.HeaderExchangeHandler.handleRequest(HeaderExchangeHandler.java:84)at com.alibaba.dubbo.remoting.exchange.support.header.HeaderExchangeHandler.received(HeaderExchangeHandler.java:170)at 
com.alibaba.dubbo.remoting.transport.DecodeHandler.received(DecodeHandler.java:52)at com.alibaba.dubbo.remoting.transport.dispatcher.ChannelEventRunnable.run(ChannelEventRunnable.java:82)at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)at java.lang.Thread.run(Thread.java:745)
This thread's stack is baffling at first sight: it has locked "0x0000000085057998", yet it is still BLOCKED. Smells like a deadlock, doesn't it? But it is not one. A jstack dump is only a snapshot, and a thread can momentarily appear to both hold and wait for the same monitor when the dump catches it right at the point of (re)acquiring it, so the state itself is not alarming.
What this part of the log does tell us is that most threads were blocked while writing monitoring logs, which is why so many requests timed out. The most likely explanation: CPU usage was pegged, the lock-holding thread made very slow progress, and more and more threads piled up behind the lock, dragging overall throughput down.
Still no root cause. Going back over the resource metrics, I noticed that while the problem was occurring, memory usage climbed continuously and never came back down, so I had ops pull the GC logs: the JVM was doing full GC back to back, and each collection reclaimed almost nothing. That is the signature of a memory leak. The leak was finally traced to a paging query interface whose SQL was missing its LIMIT/OFFSET clause. Careless indeed.
That query pulled the entire table, all 3 million rows, into memory in one go and then looped over every one of those records. No wonder memory blew up. Because of this, memory was never released, full GC could not reclaim anything, the JVM kept doing full GC continuously, and the CPU was exhausted, leaving nothing for application requests. A sketch of the fix follows.
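A hypothetical sketch of the repair (table, column, and method names are invented; the real interface and SQL are not shown in this post): process the table in fixed-size pages instead of loading everything at once.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PagedQuery {
    static final int PAGE_SIZE = 1000;

    static void processAll(Connection conn) throws SQLException {
        // The missing LIMIT/OFFSET is exactly what turned this into a full-table load.
        String sql = "SELECT id, payload FROM mission ORDER BY id LIMIT ? OFFSET ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int offset = 0; ; offset += PAGE_SIZE) {
                ps.setInt(1, PAGE_SIZE);
                ps.setInt(2, offset);
                int rows = 0;
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        rows++;
                        handle(rs.getLong("id"), rs.getString("payload")); // one row at a time
                    }
                }
                if (rows < PAGE_SIZE) break; // last page reached
            }
        }
    }

    static void handle(long id, String payload) { /* business logic */ }
}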
To sum up, the culprit was a memory leak: the JVM did full GC continuously, CPU usage spiked, the number of active Dubbo threads ballooned, lock contention became severe, and requests timed out. The analysis also exposed another questionable setting: the Dubbo thread pool was configured far too large, with a maximum of 1500, meaning up to 1500 threads could end up handling user requests. Under high concurrency the active thread count grows, the CPU spends its time context-switching, and throughput actually drops. Tuning this down (via the threads attribute of the Dubbo protocol configuration) is a clear optimization point.
I'm writing this down partly to record a pit I fell into and partly to sharpen my own fault analysis and handling skills. When a problem hits, don't panic: analyze it and close in on the cause step by step. For production issues in particular, where you cannot attach a debugger, make sure the application is well monitored, and always bring logs into the analysis: business logs for business problems, memory dumps and thread dumps for performance problems, and so on.
Corrections to anything wrong in this post are welcome!