I. Notes
1. There are two ports involved in config/server.properties.
Publishing messages uses the broker list, port 9092 by default.
To change the publish port, add a port=<port> entry to config/server.properties, which overrides the default 9092.
Receiving messages (via ZooKeeper) uses ZooKeeper's port, 2181 by default. To change it:
(1) edit zookeeper.connect in config/server.properties and set the desired IP and port;
(2) edit clientPort in zookeeper.properties to match.
Changes (1) and (2) must use the same port, and that port must differ from the broker's publish port; a config sketch follows below.
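As an illustration, here is roughly how the two files should line up (the address is the example one used throughout this article; adjust the values for your environment):
# config/server.properties
port=9092                              # publish (broker) port
zookeeper.connect=123.57.218.39:2181   # must match clientPort below

# config/zookeeper.properties
clientPort=2181                        # must match zookeeper.connect above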
2. localhost and 9092 are the default IP and port for producers and consumers.
So if you want to connect remotely, you need to change the default IP: in config/server.properties
add advertised.host.name=123.57.218.39 (only needed for remote connections), as in the snippet below.
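For example (same placeholder address as above):
# config/server.properties
advertised.host.name=123.57.218.39   # address advertised to remote clients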
3. The enable.auto.commit parameter
Setting enable.auto.commit to false on the kafka consumer means offsets are not committed automatically.
The offset tracks how far the consumer has read; when the parameter is false you must commit it yourself, whereas the default (true) commits automatically once a message has been delivered.
I have not yet worked out how to commit manually in this example, but a hedged sketch of one approach follows this note.
This setting guards against the bug where a message is delivered successfully, the program then hits an error, and that message's data is lost.
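Purely as an illustrative sketch (not from this example): spring-kafka supports manual commits through AcknowledgingMessageListener together with manual ack mode. Assuming the consumer configuration shown later in this article (enable.auto.commit=false), you would also add <property name="ackMode" value="MANUAL"/> to the containerProperties bean, then register a listener along these lines:
package com.livzon.ydw.kafka;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.AcknowledgingMessageListener;
import org.springframework.kafka.support.Acknowledgment;
/**
 * Illustrative sketch only: commits the offset by hand after the record has
 * been processed, so a crash mid-processing cannot silently lose the message.
 */
public class ManualCommitConsumer implements AcknowledgingMessageListener<String, Object> {
	@Override
	public void onMessage(ConsumerRecord<String, Object> record, Acknowledgment ack) {
		// ... run the business handling on the record first ...
		ack.acknowledge(); // commit the offset only after processing succeeds
	}
}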
II. Exceptions
1. org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
See note 2 above: the producer defaults to localhost, so remote connections time out; configuring advertised.host.name fixes it.
2. Failed to send messages after 3 tries
This is a port problem. Keep the publish port and the receive port clearly separated and do not mix them up; see the notes above for details.
III. Spring + Kafka example
Maven pom.xml dependencies:
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<version>1.1.1.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
<version>2.1.0.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-webmvc</artifactId>
<version>4.3.0.RELEASE</version>
</dependency>
Producer configuration:
1. If no topic name is specified when sending, the default topic name (defaultTopic below) is used and the corresponding data directory is created under that name.
2. producerListener reports whether a kafka send succeeded and carries the send feedback.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context.xsd">
<!-- producer parameters -->
<bean id="producerProperties" class="java.util.HashMap">
<constructor-arg>
<map>
<entry key="bootstrap.servers" value="123.57.218.39:3456" /><!-- 发布者的IP和端口 -->
<entry key="group.id" value="0" />
<entry key="retries" value="1" />
<entry key="batch.size" value="16384" />
<entry key="linger.ms" value="1" />
<entry key="buffer.memory" value="33554432" />
<entry key="key.serializer"
value="org.apache.kafka.common.serialization.StringSerializer" />
<entry key="value.serializer"
value="org.apache.kafka.common.serialization.StringSerializer" />
</map>
</constructor-arg>
</bean>
<!-- producerFactory bean used to create the kafkaTemplate -->
<bean id="producerFactory"
class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
<constructor-arg>
<ref bean="producerProperties" />
</constructor-arg>
</bean>
<!-- kafkaTemplate bean; to use it, just inject this bean and call the template's send methods -->
<bean id="KafkaTemplate" class="org.springframework.kafka.core.KafkaTemplate">
<constructor-arg ref="producerFactory" />
<constructor-arg name="autoFlush" value="true" />
<property name="defaultTopic" value="defaultTopic" />
<property name="producerListener" ref="producerListener"/>
</bean>
<bean id="producerListener" class="com.livzon.ydw.kafka.KafkaProducerListener" />
</beans>
Consumer configuration:
1. A kafka listener handles consumption: when a message arrives, its onMessage method is called automatically to consume it and run any follow-up business logic.
2. To consume several topics, create one consumer container per topic and point them all at the same listener class, so that class handles the follow-up business logic in one place.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context.xsd">
<!-- consumer parameters -->
<bean id="consumerProperties" class="java.util.HashMap">
<constructor-arg>
<map>
<entry key="bootstrap.servers" value="123.57.218.39:3456"/><!-- 消费者的IP和端口 -->
<entry key="group.id" value="0"/>
<entry key="enable.auto.commit" value="false"/>
<entry key="auto.commit.interval.ms" value="1000"/>
<entry key="session.timeout.ms" value="15000"/>
<entry key="key.deserializer" value="org.apache.kafka.common.serialization.StringDeserializer"/>
<entry key="value.deserializer" value="org.apache.kafka.common.serialization.StringDeserializer"/>
</map>
</constructor-arg>
</bean>
<!-- consumerFactory bean -->
<bean id="consumerFactory" class="org.springframework.kafka.core.DefaultKafkaConsumerFactory">
<constructor-arg>
<ref bean="consumerProperties"/>
</constructor-arg>
</bean>
<!-- the class that actually consumes the messages -->
<bean id="messageListernerConsumerService" class="com.livzon.ydw.kafka.KafkaConsumerServer"/>
<!-- consumer container properties; for multiple topics configure one container per topic, as in the commented-out block below -->
<bean id="containerProperties_trade" class="org.springframework.kafka.listener.config.ContainerProperties">
<constructor-arg value="wyyMsgCallBack"/><!-- 这里写的是发布者的TOPIC -->
<property name="messageListener" ref="messageListernerConsumerService"/>
</bean>
<!-- <bean id="containerProperties_other" class="org.springframework.kafka.listener.config.ContainerProperties">
<constructor-arg value="other_test_topic"/>
<property name="messageListener" ref="messageListernerConsumerService"/>
</bean> -->
<!-- messageListenerContainer bean; to use it, just inject this bean -->
<bean id="messageListenerContainer_trade" class="org.springframework.kafka.listener.KafkaMessageListenerContainer"
init-method="doStart">
<constructor-arg ref="consumerFactory"/>
<constructor-arg ref="containerProperties_trade"/>
</bean>
<!-- <bean id="messageListenerContainer_other" class="org.springframework.kafka.listener.KafkaMessageListenerContainer"
init-method="doStart">
<constructor-arg ref="consumerFactory"/>
<constructor-arg ref="containerProperties_other"/>
</bean> -->
</beans>
Loading the xml files (from the main Spring context):
<import resource="classpath:kafkaConsumer.xml" />
<import resource="classpath:kafkaProducer.xml" />
Implementation code:
KafkaMesConstant.java // constants class
package com.livzon.ydw.kafka;
/**
* kafkaMessageConstant
* @author yangdw
*
*/
public class KafkaMesConstant {
public static final String SUCCESS_CODE = "00000";
public static final String SUCCESS_MES = "success";
// kafka result codes
public static final String KAFKA_SEND_ERROR_CODE = "30001";
public static final String KAFKA_NO_RESULT_CODE = "30002";
public static final String KAFKA_NO_OFFSET_CODE = "30003";
// kafka result messages
public static final String KAFKA_SEND_ERROR_MES = "Message send timed out; contact technical staff";
public static final String KAFKA_NO_RESULT_MES = "No send result returned; contact technical staff";
public static final String KAFKA_NO_OFFSET_MES = "No offset found in the returned data; contact technical staff";
}
KafkaProducerListener.java // producer listener (tracks producer send outcomes) - logs them
package com.livzon.ydw.kafka;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.support.ProducerListener;
/**
 * kafkaProducer listener, enabled in the producer configuration file
 * @author yangdw
 *
 */
@SuppressWarnings("rawtypes")
public class KafkaProducerListener implements ProducerListener{
protected final Logger LOG = LoggerFactory.getLogger("KafkaProducerListener");
/**
 * Invoked after a message is sent successfully
 */
@Override
public void onSuccess(String topic, Integer partition, Object key,
Object value, RecordMetadata recordMetadata) {
LOG.info("==========kafka发送数据成功(日志开始)==========");
LOG.info("----------topic:"+topic);
LOG.info("----------partition:"+partition);
LOG.info("----------key:"+key);
LOG.info("----------value:"+value);
LOG.info("----------RecordMetadata:"+recordMetadata);
LOG.info("~~~~~~~~~~kafka发送数据成功(日志结束)~~~~~~~~~~");
}
/**
 * Invoked when a message send fails
 */
@Override
public void onError(String topic, Integer partition, Object key,
Object value, Exception exception) {
LOG.info("==========kafka发送数据错误(日志开始)==========");
LOG.info("----------topic:"+topic);
LOG.info("----------partition:"+partition);
LOG.info("----------key:"+key);
LOG.info("----------value:"+value);
LOG.info("----------Exception:"+exception);
LOG.info("~~~~~~~~~~kafka发送数据错误(日志结束)~~~~~~~~~~");
exception.printStackTrace();
}
/**
 * The return value indicates whether this listener wants success callbacks;
 * returning true means onSuccess above is invoked on every successful send
 */
@Override
public boolean isInterestedInSuccess() {
LOG.info("///kafkaProducer监听器启动///");
return true;
}
}
KafkaProducerServer.java // producer implementation
package com.livzon.ydw.kafka;
import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.ExecutionException;
import javax.annotation.Resource;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Service;
import org.springframework.util.concurrent.ListenableFuture;
import com.google.gson.Gson;
/**
 * kafkaProducer template; use this template to send messages
 *
 * @author yangdw
 *
 */
@Service
public class KafkaProducerServer {
@Resource
private KafkaTemplate<String, Object> kafkaTemplate;
/**
 * kafka send-message template method
 *
 * @param topic
 *            the topic
 * @param value
 *            the message value
 * @param ifPartition
 *            whether to target a specific partition: "0" = yes, "1" = no
 * @param partitionNum
 *            number of partitions; must be greater than 0 when ifPartition is "0"
 * @param role
 *            caller role: bbc, app, erp...
 */
public Map<String, Object> sndMesForTemplate(String topic, Object value, String ifPartition, Integer partitionNum,
String role) {
String key = role + "-" + value.hashCode();
Gson gson = new Gson();
String valueString = gson.toJson(value);
if (ifPartition.equals("0")) {
// 表示使用分区
int partitionIndex = getPartitionIndex(key, partitionNum);
ListenableFuture<SendResult<String, Object>> result = kafkaTemplate.send(topic, partitionIndex, key,
valueString);
Map<String, Object> res = checkProRecord(result);
return res;
} else {
ListenableFuture<SendResult<String, Object>> result = kafkaTemplate.send(topic, key, valueString);
Map<String, Object> res = checkProRecord(result);
return res;
}
}
/**
 * Derive a partition index from the key
 *
 * @param key
 * @param partitionNum
 * @return
 */
private int getPartitionIndex(String key, int partitionNum) {
if (key == null) {
Random random = new Random();
return random.nextInt(partitionNum);
} else {
int result = Math.abs(key.hashCode()) % partitionNum;
return result;
}
}
/**
 * Check the send result record
 *
 * @param res
 * @return
 */
@SuppressWarnings("rawtypes")
private Map<String, Object> checkProRecord(ListenableFuture<SendResult<String, Object>> res) {
Map<String, Object> m = new HashMap<String, Object>();
if (res != null) {
try {
SendResult r = res.get(); // block for the send result
// check the offset in recordMetadata; the producerRecord itself is not checked
long offsetIndex = r.getRecordMetadata().offset();
if (offsetIndex >= 0) {
m.put("code", KafkaMesConstant.SUCCESS_CODE);
m.put("message", KafkaMesConstant.SUCCESS_MES);
return m;
} else {
m.put("code", KafkaMesConstant.KAFKA_NO_OFFSET_CODE);
m.put("message", KafkaMesConstant.KAFKA_NO_OFFSET_MES);
return m;
}
} catch (InterruptedException e) {
e.printStackTrace();
m.put("code", KafkaMesConstant.KAFKA_SEND_ERROR_CODE);
m.put("message", KafkaMesConstant.KAFKA_SEND_ERROR_MES);
return m;
} catch (ExecutionException e) {
e.printStackTrace();
m.put("code", KafkaMesConstant.KAFKA_SEND_ERROR_CODE);
m.put("message", KafkaMesConstant.KAFKA_SEND_ERROR_MES);
return m;
}
} else {
m.put("code", KafkaMesConstant.KAFKA_NO_RESULT_CODE);
m.put("message", KafkaMesConstant.KAFKA_NO_RESULT_MES);
return m;
}
}
}
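For completeness, a hypothetical caller (not part of the original article) might use the producer like this. The topic name wyyMsgCallBack matches the consumer container config above; the payload fields, the partition count of 4, and the "erp" role are made up for illustration:
package com.livzon.ydw.kafka;
import java.util.HashMap;
import java.util.Map;
import javax.annotation.Resource;
import org.springframework.stereotype.Service;
/**
 * Hypothetical usage sketch for KafkaProducerServer.
 */
@Service
public class KafkaSendDemo {
	@Resource
	private KafkaProducerServer kafkaProducerServer;
	public void notifyConsumer() {
		Map<String, Object> msg = new HashMap<String, Object>();
		msg.put("orderId", "10001"); // made-up payload fields
		msg.put("status", "PAID");
		// "0" = use an explicit partition, 4 = assumed partition count, "erp" = caller role
		Map<String, Object> result = kafkaProducerServer.sndMesForTemplate(
				"wyyMsgCallBack", msg, "0", 4, "erp");
		System.out.println(result.get("code") + ":" + result.get("message"));
	}
}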
KafkaConsumerServer.java // consumer implementation
package com.livzon.ydw.kafka;
import javax.annotation.Resource;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.listener.MessageListener;
import com.livzon.ydw.service.MsgService;
import net.sf.json.JSONObject;
/**
 * kafka listener; automatically watches for messages that need consuming
 *
 * @author yangdw
 *
 */
public class KafkaConsumerServer implements MessageListener<String, Object> {
protected final Logger LOGGER = LoggerFactory.getLogger("KafkaConsumerServer");
@Resource
private MsgService msgService;
/**
 * The listener invokes this method automatically: consume the message, handle
 * the offset commit (per the enable.auto.commit setting), then run the business
 * code. (The high level api exposes no offset management, so you cannot start
 * consuming from a chosen offset.)
 */
@Override
public void onMessage(ConsumerRecord<String, Object> record) {
String topic = record.topic();
String value = (String) record.value();
LOGGER.info("=============kafka监听消息开始=============topic:" + topic + "=======value" + value);
JSONObject json = JSONObject.fromObject(value);
LOGGER.info("*****************解析回调消息json**********************json:" + json);
try {
msgService.msgHandle(json);
} catch (Exception ex) {
LOGGER.error("***********KAFKA消息监听异常" + ex.getMessage() + "***********", ex);
}
LOGGER.info("=============kafka监听消息开结束=============topic:" + topic + "=======value" + value);
}
}
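MsgService is referenced above but its implementation is not part of this article. The stub below is a guessed-at sketch of its shape, just so the example compiles; its real body is whatever your business logic needs:
package com.livzon.ydw.service;
import org.springframework.stereotype.Service;
import net.sf.json.JSONObject;
/**
 * Hypothetical stub of the business service invoked by KafkaConsumerServer.
 */
@Service
public class MsgService {
	public void msgHandle(JSONObject json) {
		// business handling of the callback message goes here
		System.out.println("handling message: " + json);
	}
}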
Note: this article was written with reference to https://www.cnblogs.com/wangb0402/p/6187796.html.
I hit quite a few problems while following that post, so I reorganized everything into this summary of my own and am sharing it here; many thanks to the original author for sharing!