The PrintFLSStatistics parameter is well worth knowing about, because CMS GC suffers from fragmentation: as fragmentation grows worse, GC performance degrades until a Full GC occurs, and a Full GC's stop-the-world pause usually exceeds several seconds, which is fatal for an OLTP system. With this parameter, the GC log reports memory statistics and the degree of fragmentation for memory allocated via free lists.
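Before digging into the HotSpot source, here is a minimal way to see the flag in action. The class below is my own made-up demo (FragmentationDemo is not from the kafka setup used later in this article): it allocates and frees byte arrays of varying sizes so that, launched on a JDK 8 CMS JVM with the flags shown in the comment, the GC log fills with the free-list reports analyzed below. Whether fragmentation actually builds up depends on heap sizing and promotion behavior, so treat it as a sketch:

// FragmentationDemo.java -- a made-up demo class, not from this article's setup.
// Suggested launch (JDK 8; CMS and PrintFLSStatistics do not exist in modern JDKs):
//   java -Xmx256m -Xms256m -Xmn64m -XX:+UseConcMarkSweepGC \
//        -XX:+PrintGCDetails -XX:PrintFLSStatistics=2 FragmentationDemo
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class FragmentationDemo {
    public static void main(String[] args) throws Exception {
        Random rnd = new Random(42);
        List<byte[]> retained = new ArrayList<>();
        for (int i = 0; ; i++) {
            // Allocate chunks of widely varying sizes so that freed blocks
            // in the old generation come in many different sizes.
            retained.add(new byte[1024 + rnd.nextInt(64 * 1024)]);
            if (retained.size() > 512) {
                // Free a random subset, punching variable-sized holes.
                for (int j = 0; j < 256; j++) {
                    retained.remove(rnd.nextInt(retained.size()));
                }
            }
            if (i % 10_000 == 0) {
                Thread.sleep(50); // give CMS background cycles time to run
            }
        }
    }
}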
The class comment on CompactibleFreeListSpace shows that CMS allocates memory from free lists:
-- Excerpted from compactibleFreeListSpace.hpp:
// Concrete subclass of CompactibleSpace that implements
// a free list space, such as used in the concurrent mark sweep generation.
class CompactibleFreeListSpace: public CompactibleSpace {
friend class VMStructs;
friend class ConcurrentMarkSweepGeneration;
... ...
}
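To build some intuition for what "a free list space" means before reading further, consider a toy sketch. This is my own illustration, not HotSpot's actual data structures; the real CompactibleFreeListSpace uses size-indexed lists plus a binary tree dictionary, as the statistics below will show:

import java.util.ArrayList;
import java.util.List;

// Toy illustration of free-list allocation: the space keeps a list of free
// chunks and serves requests from it instead of bumping a pointer. This is
// a conceptual sketch only, not HotSpot's CompactibleFreeListSpace.
class ToyFreeListSpace {
    static final class Chunk {
        long offset;
        long size;
        Chunk(long offset, long size) { this.offset = offset; this.size = size; }
    }

    private final List<Chunk> freeList = new ArrayList<>();

    ToyFreeListSpace(long capacity) {
        freeList.add(new Chunk(0, capacity)); // initially one big chunk: no fragmentation
    }

    // Best-fit: pick the smallest free chunk that satisfies the request.
    // After many allocations and frees of varying sizes, small leftover
    // chunks pile up that cannot serve big requests -- that is fragmentation.
    Long allocate(long size) {
        Chunk best = null;
        for (Chunk c : freeList) {
            if (c.size >= size && (best == null || c.size < best.size)) best = c;
        }
        if (best == null) return null; // no chunk big enough: "allocation failure"
        long addr = best.offset;
        best.offset += size;
        best.size -= size;
        if (best.size == 0) freeList.remove(best);
        return addr;
    }

    void free(long offset, long size) {
        freeList.add(new Chunk(offset, size)); // (a real implementation would coalesce neighbors)
    }
}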
The PrintFLSStatistics parameter
Next, look at gc_prologue and gc_epilogue in compactibleFreeListSpace.cpp (a prologue is an opening and an epilogue a closing, so the two methods can be read as the before-GC and after-GC hooks). Their implementations show that when the JVM parameter PrintFLSStatistics is non-zero (a negative value works too), reportFreeListStatistics() is called before and after every GC to print the free-list statistics:
void CompactibleFreeListSpace::gc_prologue() {
  if (PrintFLSStatistics != 0) {
    gclog_or_tty->print("Before GC:\n");
    reportFreeListStatistics();
  }
}

void CompactibleFreeListSpace::gc_epilogue() {
  // Print Space's stats
  if (PrintFLSStatistics != 0) {
    gclog_or_tty->print("After GC:\n");
    reportFreeListStatistics();
  }
}
Now look at the implementation of reportFreeListStatistics itself, in two parts:
- part one, _dictionary->report_statistics(), prints the BinaryTreeDictionary statistics;
- part two, printed only when PrintFLSStatistics > 1, additionally logs the IndexedFreeLists statistics:
void CompactibleFreeListSpace::reportFreeListStatistics() const {
  // Print the BinaryTreeDictionary statistics
  _dictionary->report_statistics();
  if (PrintFLSStatistics > 1) {
    // When PrintFLSStatistics > 1, also print the IndexedFreeLists statistics
    reportIndexedFreeListStatistics();
    // total_size (computed just above in the full source, elided here) is the
    // dictionary's free space plus the indexed free lists' free space; e.g.
    // 7421969 + 32849 = 7454818 in the kafka log further below
    gclog_or_tty->print(" free=" SIZE_FORMAT " frag=%1.4f\n", total_size, flsFrag());
  }
}
- Part one, the BinaryTreeDictionary statistics:
The free-list summary appears in the GC log as shown below. Av. Block Size is simply Total Free Space / Number of Blocks, and Tree Height is the height of the dictionary's binary tree of free chunks:
Total Free Space: 25165824
Max Chunk Size: 25165824
Number of Blocks: 1
Av. Block Size: 25165824
Tree Height: 1
- Part two, the IndexedFreeLists statistics:
If PrintFLSStatistics is greater than 1, e.g. -XX:PrintFLSStatistics=2, the IndexedFreeLists statistics are printed as well, followed by a log line produced by the statement below, which exposes the fragmentation ratio directly. The larger frag is, the worse the fragmentation; right after JVM initialization frag is 0.0000, i.e. no fragmentation at all:
gclog_or_tty->print(" free=" SIZE_FORMAT " frag=%1.4f\n", total_size, flsFrag());
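As for what frag actually measures: my reading of flsFrag() in the same JDK 8 file is that it computes 1 - sum(size_i^2) / (sum(size_i))^2 over all free chunks, so one big chunk yields 0.0 while many small chunks push the value toward 1.0. A sketch of that computation:

// Sketch of the fragmentation metric as flsFrag() computes it (my reading of
// the JDK 8 source): frag = 1 - sum(size_i^2) / (sum(size_i))^2 over all free
// chunks. One big chunk => 0.0; N equal chunks => 1 - 1/N, approaching 1.0.
class FlsFrag {
    static double frag(long[] freeChunkSizes) {
        double sumSquares = 0.0, total = 0.0;
        for (long s : freeChunkSizes) {
            sumSquares += (double) s * s;
            total += s;
        }
        if (total == 0) return 0.0; // empty free list: no fragmentation by definition
        return 1.0 - sumSquares / (total * total);
    }

    public static void main(String[] args) {
        System.out.println(frag(new long[] {25165824}));           // 0.0  -- one big chunk
        System.out.println(frag(new long[] {100, 100, 100, 100})); // 0.75 -- 4 equal chunks
    }
}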
Actual GC logs
To obtain GC logs in which fragmentation grows steadily worse, I took kafka 2.11-1.1.1 and tweaked its GC parameters so that CMS GC fires continuously:
- Adjust kafka-server-start.sh:
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx256m -Xms256m -Xmn64m
-server -XX:+UseConcMarkSweepGC
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=50
-XX:+PrintGCDetails -XX:PrintFLSStatistics=2"
fi
- Adjust kafka-run-class.sh:
# JVM performance options
if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
#KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC"
echo ""
fi
The kafka-run-class.sh change disables the default G1 options, because the CMS collector configured above cannot coexist with G1: the JVM refuses to start when both collectors are specified.
Next, simply start a kafka broker and use kafka's bundled benchmark script to send 10 million messages (128 bytes each) to it:
bin/kafka-run-class.sh org.apache.kafka.tools.ProducerPerformance --topic topic-afei-test --num-records 10000000 --record-size 128 --throughput -1 --producer-props acks=1 bootstrap.servers=127.0.1.168:9092 buffer.memory=67108864 batch.size=8196
- jstat output
jstat shows that Full GC is rampant, with the FGC count climbing several times per 2-second interval (an in-process alternative to jstat follows the table):
[yyapp@redis-01 ~]$ jstat -gcutil 28982 2s
S0 S1 E O M CCS YGC YGCT FGC FGCT GCT
0.00 3.43 26.39 70.34 98.74 96.89 49 0.468 115 2.208 2.676
0.00 3.61 80.16 70.32 98.77 96.89 51 0.478 120 2.308 2.786
0.00 2.88 88.76 70.33 98.77 96.89 53 0.491 122 2.339 2.829
2.80 0.00 19.37 70.34 98.77 96.89 56 0.509 125 2.377 2.886
4.05 0.00 40.13 70.34 98.77 96.89 58 0.526 127 2.396 2.923
3.64 0.00 41.64 70.33 98.77 96.89 60 0.535 131 2.493 3.028
4.59 0.00 86.03 70.33 98.77 96.89 62 0.548 135 2.577 3.125
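If attaching jstat is inconvenient, the same counters can be polled in-process through the standard java.lang.management API. A small sketch follows; GcWatcher is my own hypothetical name, and under CMS the collector beans are normally named ParNew and ConcurrentMarkSweep:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// In-process alternative to jstat: poll the GC MXBeans every 2s and print
// collection counts/times. Under CMS the beans are typically named
// "ParNew" (young gen) and "ConcurrentMarkSweep" (old gen).
public class GcWatcher {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%-20s count=%d time=%dms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            Thread.sleep(2000);
        }
    }
}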
- GC log
Now look at the GC log itself; kafka enables GC logging by default (at logs/kafkaServer-gc.log.0.current):
2018-08-20T14:12:14.366+0800: 144.545: [CMS-concurrent-sweep: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2018-08-20T14:12:14.372+0800: 144.552: [CMS-concurrent-reset-start]
2018-08-20T14:12:14.379+0800: 144.559: [CMS-concurrent-reset: 0.007/0.007 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
2018-08-20T14:12:14.683+0800: 144.863: [GC (Allocation Failure) Before GC:
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 7421969
Max Chunk Size: 7397322
Number of Blocks: 47
Av. Block Size: 157914
Tree Height: 11
Statistics for IndexedFreeLists:
--------------------------------
Total Free Space: 32849
Max Chunk Size: 254
Number of Blocks: 2065
Av. Block Size: 15
free=7454818 frag=0.0154
2018-08-20T14:12:14.683+0800: 144.863: [ParNew: 52960K->791K(59008K), 0.0287046 secs] 191199K->139086K(255616K)After GC:
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 7421969
Max Chunk Size: 7397322
Number of Blocks: 47
Av. Block Size: 157914
Tree Height: 11
Statistics for IndexedFreeLists:
--------------------------------
Total Free Space: 25698
Max Chunk Size: 254
Number of Blocks: 452
Av. Block Size: 56
free=7447667 frag=0.0135
, 0.0289033 secs] [Times: user=0.03 sys=0.00, real=0.03 secs]
2018-08-20T14:12:14.714+0800: 144.894: [GC (CMS Initial Mark) [1 CMS-initial-mark: 138295K(196608K)] 139776K(255616K), 0.0024734 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2018-08-20T14:12:14.717+0800: 144.896: [CMS-concurrent-mark-start]
2018-08-20T14:12:14.739+0800: 144.918: [CMS-concurrent-mark: 0.022/0.022 secs] [Times: user=0.02 sys=0.00, real=0.02 secs]
The log shows that Tree Height keeps growing (it starts out at 1 and has reached 11 here). The larger this value, the more severe the fragmentation. This parameter therefore lets us pin down CMS concurrent mode failures that are caused by fragmentation.
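To track that trend without eyeballing the log, the frag= and Tree Height lines that PrintFLSStatistics=2 emits can be scraped with a couple of regular expressions. A sketch; FlsLogScanner is a hypothetical helper, defaulting to the log path mentioned above:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extract the "frag=" and "Tree Height:" values that PrintFLSStatistics=2
// writes into the GC log, so the fragmentation trend can be charted or alerted on.
public class FlsLogScanner {
    private static final Pattern FRAG = Pattern.compile("free=(\\d+) frag=([0-9.]+)");
    private static final Pattern TREE = Pattern.compile("Tree Height:\\s*(\\d+)");

    public static void main(String[] args) throws IOException {
        String path = args.length > 0 ? args[0] : "logs/kafkaServer-gc.log.0.current";
        for (String line : Files.readAllLines(Paths.get(path))) {
            Matcher m = FRAG.matcher(line);
            if (m.find()) {
                System.out.printf("free=%s frag=%s%n", m.group(1), m.group(2));
            }
            m = TREE.matcher(line);
            if (m.find()) {
                System.out.printf("tree height=%s%n", m.group(1));
            }
        }
    }
}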