Kafka Topics Auto-Recreated After Deletion

Background

We were recently freeing up resources while scaling out to a new cluster. A small number of topics on the old cluster were still in use, and the total topic count was large, so the plan was to keep a handful of machines for the remaining topics and delete the unused ones. During deletion we noticed that each deleted topic was recreated shortly afterward, though with only a single partition, which was puzzling. We suspected a bug in the old cluster, which runs the community Kafka 0.8.2.1 kernel — an old release with many known bugs that the community no longer maintains.

Analysis

Log Analysis

[2020-07-07 18:22:44,683] INFO [Partition state machine on Controller 271]: Invoking state change to NonExistentPartition for partitions [org.horizon_divide_data,75],[org.horizon_divide_data,78],[org.horizon_divide_data,69],[org.horizon_divide_data,30],[org.horizon_divide_data,26],[org.horizon_divide_data,44],[org.horizon_divide_data,70],[org.horizon_divide_data,62],[org.horizon_divide_data,21],[org.horizon_divide_data,77],[org.horizon_divide_data,0],[org.horizon_divide_data,8],[org.horizon_divide_data,91],[org.horizon_divide_data,65],[org.horizon_divide_data,13],[org.horizon_divide_data,20],[org.horizon_divide_data,61],[org.horizon_divide_data,34],[org.horizon_divide_data,84],[org.horizon_divide_data,27],[org.horizon_divide_data,63],[org.horizon_divide_data,54],[org.horizon_divide_data,94],[org.horizon_divide_data,49],[org.horizon_divide_data,52],[org.horizon_divide_data,37],[org.horizon_divide_data,86],[org.horizon_divide_data,93],[org.horizon_divide_data,7],[org.horizon_divide_data,74],[org.horizon_divide_data,22],[org.horizon_divide_data,58],[org.horizon_divide_data,81],[org.horizon_divide_data,5],[org.horizon_divide_data,32],[org.horizon_divide_data,45],[org.horizon_divide_data,76],[org.horizon_divide_data,87],[org.horizon_divide_data,57],[org.horizon_divide_data,40],[org.horizon_divide_data,53],[org.horizon_divide_data,41],[org.horizon_divide_data,43],[org.horizon_divide_data,47],[org.horizon_divide_data,23],[org.horizon_divide_data,59],[org.horizon_divide_data,1],[org.horizon_divide_data,73],[org.horizon_divide_data,67],[org.horizon_divide_data,39],[org.horizon_divide_data,25],[org.horizon_divide_data,35],[org.horizon_divide_data,88],[org.horizon_divide_data,48],[org.horizon_divide_data,64],[org.horizon_divide_data,51],[org.horizon_divide_data,85],[org.horizon_divide_data,11],[org.horizon_divide_data,16],[org.horizon_divide_data,29],[org.horizon_divide_data,28],[org.horizon_divide_data,90],[org.horizon_divide_data,24],[org.horizon_divide_data,38],[org.horizon_divide_data,15],[org.horizon_divide_data,60],[org.horizon_divide_data,9],[org.horizon_divide_data,17],[org.horizon_divide_data,6],[org.horizon_divide_data,80],[org.horizon_divide_data,2],[org.horizon_divide_data,14],[org.horizon_divide_data,56],[org.horizon_divide_data,42],[org.horizon_divide_data,95],[org.horizon_divide_data,79],[org.horizon_divide_data,4],[org.horizon_divide_data,19],[org.horizon_divide_data,3],[org.horizon_divide_data,89],[org.horizon_divide_data,33],[org.horizon_divide_data,66],[org.horizon_divide_data,36],[org.horizon_divide_data,55],[org.horizon_divide_data,46],[org.horizon_divide_data,92],[org.horizon_divide_data,31],[org.horizon_divide_data,83],[org.horizon_divide_data,71],[org.horizon_divide_data,50],[org.horizon_divide_data,82],[org.horizon_divide_data,18],[org.horizon_divide_data,12],[org.horizon_divide_data,68],[org.horizon_divide_data,72],[org.horizon_divide_data,10] (kafka.controller.PartitionStateMachine)

[2020-07-07 18:22:44,999] INFO [delete-topics-thread-271], Deletion of topic org.horizon_divide_data successfully completed (kafka.controller.TopicDeletionManager$DeleteTopicsThread)

[2020-07-07 18:22:44,999] DEBUG [Topic Deletion Manager 271], Waiting for signal to start or continue topic deletion (kafka.controller.TopicDeletionManager)

[2020-07-07 18:22:45,008] DEBUG [TopicChangeListener on Controller 271]: Topic change listener fired for path /brokers/topics with children org.com.sankuai.bb.op.psgs,org.test_ysj_error_log...... (kafka.controller.PartitionStateMachine$TopicChangeListener)

[2020-07-07 18:22:45,172] INFO [TopicChangeListener on Controller 271]: New topics: [Set()], deleted topics: [Set()], new partition replica assignment [Map()] (kafka.controller.PartitionStateMachine$TopicChangeListener)

[2020-07-07 18:22:45,231] DEBUG [DeleteTopicsListener on 271]: Delete topics listener fired for topics org.sec_hids_bash_data to be deleted (kafka.controller.PartitionStateMachine$DeleteTopicsListener)

[2020-07-07 18:22:45,231] INFO [DeleteTopicsListener on 271]: Starting topic deletion for topics org.sec_hids_bash_data (kafka.controller.PartitionStateMachine$DeleteTopicsListener)

[2020-07-07 18:22:45,251] INFO [delete-topics-thread-271], Handling deletion for topics org.sec_hids_bash_data (kafka.controller.TopicDeletionManager$DeleteTopicsThread)

[2020-07-07 18:22:45,270] DEBUG [Replica state machine on controller 271]: Are all replicas for topic org.sec_hids_bash_data deleted Map([Topic=org.sec_hids_bash_data,Partition=0,Replica=334] -> OnlineReplica, [Topic=org.sec_hids_bash_data,Partition=0,Replica=860] -> OnlineReplica) (kafka.controller.ReplicaStateMachine)

[2020-07-07 18:22:45,333] INFO [delete-topics-thread-271], Deletion of topic org.sec_hids_bash_data (re)started (kafka.controller.TopicDeletionManager$DeleteTopicsThread)

[2020-07-07 18:22:45,333] INFO [Topic Deletion Manager 271], Topic deletion callback for org.sec_hids_bash_data (kafka.controller.TopicDeletionManager)

[2020-07-07 18:22:45,500] INFO [Topic Deletion Manager 271], Partition deletion callback for [org.sec_hids_bash_data,0] (kafka.controller.TopicDeletionManager)

[2020-07-07 18:22:54,178] INFO [Replica state machine on controller 271]: Invoking state change to OfflineReplica for replicas [Topic=org.sec_hids_bash_data,Partition=0,Replica=860],[Topic=org.sec_hids_bash_data,Partition=0,Replica=334] (kafka.controller.ReplicaStateMachine)

[2020-07-07 18:22:54,178] DEBUG [Controller 271]: Removing replica 860 from ISR 334,860 for partition [org.sec_hids_bash_data,0]. (kafka.controller.KafkaController)

[2020-07-07 18:22:54,179] WARN [Controller 271]: Cannot remove replica 860 from ISR of [org.sec_hids_bash_data,0] - leaderAndIsr is empty. (kafka.controller.KafkaController)

[2020-07-07 18:22:54,179] DEBUG [Controller 271]: Removing replica 334 from ISR 334,860 for partition [org.sec_hids_bash_data,0]. (kafka.controller.KafkaController)

[2020-07-07 18:22:54,180] WARN [Controller 271]: Cannot remove replica 334 from ISR of [org.sec_hids_bash_data,0] - leaderAndIsr is empty. (kafka.controller.KafkaController)

[2020-07-07 18:22:54,180] DEBUG The stop replica request (delete = true) sent to broker 334 is  (kafka.controller.ControllerBrokerRequestBatch)

[2020-07-07 18:22:54,180] DEBUG The stop replica request (delete = false) sent to broker 334 is [Topic=org.sec_hids_bash_data,Partition=0,Replica=334] (kafka.controller.ControllerBrokerRequestBatch)

[2020-07-07 18:22:54,180] DEBUG The stop replica request (delete = true) sent to broker 860 is  (kafka.controller.ControllerBrokerRequestBatch)

[2020-07-07 18:22:54,180] DEBUG The stop replica request (delete = false) sent to broker 860 is [Topic=org.sec_hids_bash_data,Partition=0,Replica=860] (kafka.controller.ControllerBrokerRequestBatch)

[2020-07-07 18:22:54,180] DEBUG [Topic Deletion Manager 271], Deletion started for replicas [Topic=org.sec_hids_bash_data,Partition=0,Replica=860],[Topic=org.sec_hids_bash_data,Partition=0,Replica=334] (kafka.controller.TopicDeletionManager)

[2020-07-07 18:22:54,180] INFO [Replica state machine on controller 271]: Invoking state change to ReplicaDeletionStarted for replicas [Topic=org.sec_hids_bash_data,Partition=0,Replica=860],[Topic=org.sec_hids_bash_data,Partition=0,Replica=334] (kafka.controller.ReplicaStateMachine)

[2020-07-07 18:22:54,180] DEBUG [Topic Deletion Manager 271], Waiting for signal to start or continue topic deletion (kafka.controller.TopicDeletionManager)

[2020-07-07 18:22:54,189] DEBUG [TopicChangeListener on Controller 271]: Topic change listener fired for path /brokers/topics with children org.com.sankuai.bb.op.psgs,org.test_ysj_error_log... (kafka.controller.PartitionStateMachine$TopicChangeListener)

[2020-07-07 18:22:54,355] INFO [TopicChangeListener on Controller 271]: New topics: [Set(org.horizon_divide_data)], deleted topics: [Set()], new partition replica assignment [Map([org.horizon_divide_data,0] -> List(749, 994))] (kafka.controller.PartitionStateMachine$TopicChangeListener)

[2020-07-07 18:22:54,356] INFO [Controller 271]: New topic creation callback for [org.horizon_divide_data,0] (kafka.controller.KafkaController)

[2020-07-07 18:22:54,356] INFO [Controller 271]: New partition creation callback for [org.horizon_divide_data,0] (kafka.controller.KafkaController)

[2020-07-07 18:22:54,356] INFO [Partition state machine on Controller 271]: Invoking state change to NewPartition for partitions [org.horizon_divide_data,0] (kafka.controller.PartitionStateMachine)

[2020-07-07 18:22:54,357] INFO [Replica state machine on controller 271]: Invoking state change to NewReplica for replicas [Topic=org.horizon_divide_data,Partition=0,Replica=749],[Topic=org.horizon_divide_data,Partition=0,Replica=994] (kafka.controller.ReplicaStateMachine)

[2020-07-07 18:22:54,358] INFO [Partition state machine on Controller 271]: Invoking state change to OnlinePartition for partitions [org.horizon_divide_data,0] (kafka.controller.PartitionStateMachine)

[2020-07-07 18:22:54,358] DEBUG [Partition state machine on Controller 271]: Live assigned replicas for partition [org.horizon_divide_data,0] are: [List(749, 994)] (kafka.controller.PartitionStateMachine)

[2020-07-07 18:22:54,358] DEBUG [Partition state machine on Controller 271]: Initializing leader and isr for partition [org.horizon_divide_data,0] to (Leader:749,ISR:749,994,LeaderEpoch:0,ControllerEpoch:48) (kafka.controller.PartitionStateMachine)

[2020-07-07 18:22:54,396] INFO [Replica state machine on controller 271]: Invoking state change to OnlineReplica for replicas [Topic=org.horizon_divide_data,Partition=0,Replica=749],[Topic=org.horizon_divide_data,Partition=0,Replica=994] (kafka.controller.ReplicaStateMachine)

The log is long, and unrelated topics have been trimmed. The problem shows up as two consecutive updates to the /brokers/topics zk node after deletion: the first update does not contain the deleted topic, but the second one does.

Source Code Review

Topic Deletion

The log contains "Deletion of topic xxxx successfully completed", and this line sits between the two updates to the zk node.

TopicDeletionManager#completeDeleteTopic

The completeDeleteTopic logic deletes the topic's /brokers/topics, /config/topics, and /admin/delete_topics zk nodes, and removes the topic from the controller's in-memory state. This matches the log line: [2020-07-07 18:22:45,172] INFO [TopicChangeListener on Controller 271]: New topics: [Set()], deleted topics: [Set()], new partition replica assignment [Map()] (kafka.controller.PartitionStateMachine$TopicChangeListener)
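The effect of completeDeleteTopic can be sketched as follows. This is a minimal simulation, not the actual Scala: FakeZk and the set-based controller cache are invented stand-ins, and only the znode paths match the real ones.

```python
# Minimal sketch of what completeDeleteTopic does, with an in-memory
# set standing in for ZooKeeper. Everything except the znode paths is
# invented for illustration.

class FakeZk:
    def __init__(self):
        self.nodes = set()

    def create(self, path):
        self.nodes.add(path)

    def delete(self, path):
        self.nodes.discard(path)

def complete_delete_topic(zk, controller_topics, topic):
    # 1. Remove the three znodes that describe the topic.
    zk.delete(f"/brokers/topics/{topic}")
    zk.delete(f"/config/topics/{topic}")
    zk.delete(f"/admin/delete_topics/{topic}")
    # 2. Drop the topic from the controller's in-memory cache.
    controller_topics.discard(topic)

zk = FakeZk()
topics = {"org.horizon_divide_data"}
for p in ("/brokers/topics/org.horizon_divide_data",
          "/config/topics/org.horizon_divide_data",
          "/admin/delete_topics/org.horizon_divide_data"):
    zk.create(p)

complete_delete_topic(zk, topics, "org.horizon_divide_data")
print(len(zk.nodes), len(topics))  # → 0 0
```

The important point for the rest of the analysis is step 2: once this runs, the controller no longer remembers that the topic ever existed.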

Topic Creation

[2020-07-07 18:22:54,355] INFO [TopicChangeListener on Controller 271]: New topics: [Set(org.horizon_divide_data)], deleted topics: [Set()], new partition replica assignment [Map([org.horizon_divide_data,0] -> List(749, 994))] (kafka.controller.PartitionStateMachine$TopicChangeListener)

[2020-07-07 18:22:54,356] INFO [Controller 271]: New topic creation callback for [org.horizon_divide_data,0] (kafka.controller.KafkaController)

[2020-07-07 18:22:54,356] INFO [Controller 271]: New partition creation callback for [org.horizon_divide_data,0] (kafka.controller.KafkaController)

The code corresponding to the "New topic creation callback for" log line:

PartitionStateMachine.scala

The second firing of TopicChangeListener is shown in the log above. At this point zk has gained the org.horizon_divide_data topic again, while, as explained earlier, the Controller has already removed it from memory. Therefore the newTopics set in the code must contain the deleted topic org.horizon_divide_data.
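The diff the listener performs can be sketched as follows; the function and variable names here are mine, not the real Scala fields:

```python
def on_topic_change(zk_children, controller_topics):
    """Mirror of TopicChangeListener's diff: anything present in ZK but
    absent from the controller cache is 'new'; anything cached but gone
    from ZK is 'deleted'."""
    new_topics = set(zk_children) - controller_topics
    deleted_topics = controller_topics - set(zk_children)
    return new_topics, deleted_topics

# After completeDeleteTopic the cache no longer contains the topic...
cache = {"org.test_ysj_error_log"}
# ...so when /brokers/topics/org.horizon_divide_data reappears,
# the listener reports it as a brand-new topic.
new, deleted = on_topic_change(
    ["org.test_ysj_error_log", "org.horizon_divide_data"], cache)
print(new)      # → {'org.horizon_divide_data'}
print(deleted)  # → set()
```

This is why the controller happily runs the new-topic creation callback for a topic it just finished deleting: from its point of view, the znode reappearing is indistinguishable from a genuinely new topic.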

Questions

Why did the second zk node update include this topic, and what triggered that update? Two hypotheses:

1. The zk update is recursive — could a recursive operation fire the listener twice?
2. The cluster has auto.create.topics.enable set (enabled for log collection, to avoid backing up the collector side), which might be involved.

1. Checking the zk recursive delete logic

ZkClient

In the code the watch parameter is passed as false, and the recursive delete removes nodes from the leaves up to the parent; the logic looks fine. What does the watch parameter do? Per the documentation, true means node changes are watched, false means they are not. So the hypothesis that zk's recursive delete fires the listener twice can be ruled out. (PS: even if recursive deletion were the cause — since it deletes from child to parent — the first log entry should have contained the deleted topic and the second should not, which is the opposite of what we observed.)
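The leaf-to-parent order can be demonstrated with a small sketch; the tree layout and node names below are illustrative, not the actual znode hierarchy:

```python
def delete_recursive(tree, path, deleted_order):
    """Leaf-first recursive delete, as ZkClient's deleteRecursive does:
    all children are removed before their parent."""
    for child in tree.get(path, []):
        delete_recursive(tree, f"{path}/{child}", deleted_order)
    deleted_order.append(path)

# Toy znode tree: topic -> partitions -> partition 0.
tree = {"/brokers/topics/t": ["partitions"],
        "/brokers/topics/t/partitions": ["0"]}
order = []
delete_recursive(tree, "/brokers/topics/t", order)
print(order[-1])  # the parent node is removed last
# → /brokers/topics/t
```

Since the topic's own znode goes last, any listener on /brokers/topics would observe the topic already gone, which matches the ruling-out argument above.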

ZkConnection

2. Checking the code around auto.create.topics.enable. The logic is: if a TopicMetadata request arrives for a topic that does not exist in the broker's metadata, the broker calls createTopic, and as we know, creating a topic writes the /brokers/topics/ zk node. The broker logs confirm auto-creation happened — server.log.2:[2020-07-07 17:48:24,353] INFO [KafkaApi-253] Auto creation of topic org.com.dianping.seer.cube2 with 1 partitions and replication factor 2 is successful! (kafka.server.KafkaApis) — and this topic was also one of the batch we deleted. With 600-plus brokers in the cluster, I did not dig further: a client fetching metadata first connects via the bootstrap servers, then picks a broker by its own heuristic (roughly, the broker with the fewest connections) and reads that broker's in-memory metadata. Our Kafka logs are not aggregated into ES, Hive, or the like, so there was no easy way to search across brokers. Additionally, our internal log-collection clients were in fact still connected to the old platform's brokers — they just were no longer writing data.
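The auto-create path can be sketched roughly as follows. All names and the helper are illustrative; the default of 1 partition is an assumption on my part, though it matches the "1 partitions" in the auto-creation log line above:

```python
# Hedged sketch of the auto.create.topics.enable path in KafkaApis:
# a metadata request for an unknown topic triggers topic creation with
# the broker defaults. Function and variable names are illustrative.

def handle_topic_metadata(topic, broker_topics, auto_create,
                          default_partitions=1):
    if topic not in broker_topics:
        if auto_create:
            # In the real broker this writes /brokers/topics/<topic>,
            # which is exactly what re-fires TopicChangeListener.
            broker_topics[topic] = default_partitions
            return f"auto-created {topic} with {default_partitions} partitions"
        return "UnknownTopicOrPartition"
    return "ok"

topics = {}
print(handle_topic_metadata("org.horizon_divide_data", topics, True))
# → auto-created org.horizon_divide_data with 1 partitions
```

This also explains why the recreated topics came back with a single partition regardless of their original partition count: auto-creation uses the broker defaults, not the deleted topic's old configuration.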

KafkaApis

Conclusion

If brokers have auto.create.topics.enable set, any TopicMetadata request for a missing topic will auto-create it. The logic also shows that if only some brokers in the cluster have this config enabled, a request merely needs to land on one of those brokers for the topic to be recreated.
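A toy illustration of this conclusion, with invented names and configs: whether the deleted topic comes back depends entirely on which broker the metadata request happens to land on.

```python
# Mixed-config cluster sketch: only a broker with
# auto.create.topics.enable=true recreates the topic. All names here
# are made up for illustration.

def metadata_request(broker_auto_create, topic, cluster_topics):
    if topic not in cluster_topics and broker_auto_create:
        cluster_topics.add(topic)  # broker writes the znode back
        return "created"
    return "unknown" if topic not in cluster_topics else "ok"

cluster = set()
print(metadata_request(False, "t", cluster))  # → unknown
print(metadata_request(True, "t", cluster))   # → created
```

The practical takeaway: before batch-deleting topics, make sure no client can still issue metadata requests for them, or disable auto.create.topics.enable on every broker, not just some.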

One more note: Kafka's logging is not great — many places log at a low level, so it is hard to tell what the service is doing; the controller logs are relatively richer. A tool worth recommending here is Alibaba's Arthas (https://alibaba.github.io/arthas/), which can change log levels at runtime and is simple to use. Roughly: first find the classloader hash with something like sc -d kafka.controller.ControllerContext (any class from the codebase works; I'll use 321ea24 as the example hash). Then run logger -c 321ea24 to list the logger configs inside that classloader, and find the logger name you want to change — for the controller it is kafka.controller. Finally run logger -c 321ea24 -n kafka.controller -l info to adjust the level.
