In Flink, writing data out to external systems is handled by sinks.
As the earlier examples showed, DataStream.addSink() attaches an output target to a stream, sending its records to the corresponding destination.
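As a quick refresher, here is a minimal sketch using only Flink's built-in APIs (PrintSinkFunction is a ready-made sink that prints every record to stdout):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;

public class AddSinkDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements("a", "b", "c")
                // attach a sink: every record of the stream is handed to it
                .addSink(new PrintSinkFunction<>());
        env.execute("addSink demo");
    }
}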
RichSinkFunction and its parents:
Sinks in Flink can be custom-implemented; you typically extend the abstract class RichSinkFunction, which closely mirrors the source-side RichSourceFunction. Here is its implementation:
package org.apache.flink.streaming.api.functions.sink;

import org.apache.flink.annotation.Public;
import org.apache.flink.api.common.functions.AbstractRichFunction;

@Public
public abstract class RichSinkFunction<IN> extends AbstractRichFunction implements SinkFunction<IN> {

    private static final long serialVersionUID = 1L;

    public RichSinkFunction() {
    }
}
As you can see, it again extends AbstractRichFunction, just like RichSourceFunction, which we already walked through in an earlier article; that base class is what contributes the open()/close() lifecycle methods and getRuntimeContext().
The main method, however, comes from SinkFunction<IN>, where the type parameter IN is the type of records to be sunk. Here is the definition of SinkFunction:
package org.apache.flink.streaming.api.functions.sink;

import java.io.Serializable;
import org.apache.flink.annotation.Public;
import org.apache.flink.api.common.functions.Function;

@Public
public interface SinkFunction<IN> extends Function, Serializable {

    /** @deprecated */
    @Deprecated
    default void invoke(IN value) throws Exception {
    }

    default void invoke(IN value, SinkFunction.Context context) throws Exception {
        this.invoke(value);
    }

    @Public
    public interface Context<T> {

        long currentProcessingTime();

        long currentWatermark();

        Long timestamp();
    }
}
As you can see, the method to focus on here is invoke. The single-argument overload is deprecated, so new sinks should override invoke(value, context); the Context argument exposes the current processing time, the current watermark, and the record's timestamp.
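To make the contract concrete, here is a minimal hand-rolled sink (a sketch of my own, not from the Flink codebase). It extends RichSinkFunction, so it also gets the open/close lifecycle, and it logs each record together with the processing time taken from the Context:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

// a minimal sketch: log every record with its processing time
public class LoggingSink extends RichSinkFunction<String> {

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        // acquire resources here (connections, clients, ...)
    }

    @Override
    public void invoke(String value, Context context) throws Exception {
        // called once per incoming record
        System.out.println(context.currentProcessingTime() + " -> " + value);
    }

    @Override
    public void close() throws Exception {
        // release resources here
        super.close();
    }
}

Resource-heavy sinks follow exactly this shape: acquire in open, use in invoke, release in close.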
A child of RichSinkFunction
Now let's look at a self-implemented MySQL sink:
package myflink.sinks;

import com.alibaba.fastjson.JSON;
import lombok.AllArgsConstructor;
import lombok.NoArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import myflink.manager.UrlInfoManager;
import myflink.model.UrlInfo;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.support.ClassPathXmlApplicationContext;

/**
 * Sink that persists records to an external store;
 * here they go into the url_info table in MySQL.
 */
@Slf4j
@NoArgsConstructor
@AllArgsConstructor
public class UrlMysqlSink extends RichSinkFunction<UrlInfo> implements ApplicationContextAware {

    private UrlInfoManager urlInfoManager;

    private ApplicationContext applicationContext;

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        log.info("applicationContext=" + applicationContext);
        // bootstrap the Spring context once per parallel sink instance
        if (applicationContext == null) {
            init();
        }
    }

    @Override
    public void invoke(UrlInfo value, Context context) throws Exception {
        if (urlInfoManager == null) {
            init();
        }
        // persist one record into the url_info table
        urlInfoManager.insert(value);
        log.info("---insert url info: {}", JSON.toJSONString(value));
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }

    private void init() {
        applicationContext = new ClassPathXmlApplicationContext("classpath*:applicationContext.xml");
        urlInfoManager = (UrlInfoManager) applicationContext.getBean("urlInfoManager");
    }
}
As shown, two methods are overridden: open and invoke. open initializes the Spring container and the database connection; invoke runs the actual persistence logic, once per incoming record. (Note the log statement uses an SLF4J placeholder {}; without it the serialized value would silently be dropped from the message.)
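The Spring lookup above works, but the lifecycle is easier to see with plain JDBC: open the connection in open, write in invoke, release in close. The following is a sketch under assumptions; the JDBC URL, credentials, and the url/cnt columns (and the matching getUrl()/getCnt() accessors on UrlInfo) are invented for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import myflink.model.UrlInfo;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

// a sketch of the same idea with plain JDBC; table and column names are assumptions
public class UrlJdbcSink extends RichSinkFunction<UrlInfo> {

    private transient Connection connection;
    private transient PreparedStatement statement;

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        // one connection per parallel sink instance
        connection = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "root", "root");
        statement = connection.prepareStatement(
                "INSERT INTO url_info (url, cnt) VALUES (?, ?)");
    }

    @Override
    public void invoke(UrlInfo value, Context context) throws Exception {
        statement.setString(1, value.getUrl()); // assumes UrlInfo has a url field
        statement.setLong(2, value.getCnt());   // assumes UrlInfo has a cnt field
        statement.executeUpdate();
    }

    @Override
    public void close() throws Exception {
        if (statement != null) statement.close();
        if (connection != null) connection.close();
        super.close();
    }
}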
Using UrlMysqlSink
The data still comes from Kafka, reusing the kafkaSender from the previous article.
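The kafkaSender itself lives in that earlier article and is not reproduced here; as a hypothetical stand-in, a bare Kafka producer pushing JSON strings to the testjin topic could look like this (the url field in the payload is an invented example, not a confirmed field of UrlInfo):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// hypothetical stand-in for the kafkaSender from the earlier article
public class KafkaSenderSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // field name "url" is an assumption about UrlInfo's JSON shape
            producer.send(new ProducerRecord<>("testjin", "{\"url\":\"http://example.com\"}"));
            producer.flush();
        }
    }
}

The job that consumes from this topic and writes to MySQL: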
import java.util.Properties;

import com.alibaba.fastjson.JSON;
import lombok.extern.slf4j.Slf4j;
import myflink.model.UrlInfo;
import myflink.sinks.UrlMysqlSink;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;

@Slf4j
public class KafkaUrlSinkJob {

    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties properties = new Properties();
        properties.put("bootstrap.servers", "localhost:9092");
        properties.put("zookeeper.connect", "localhost:2181");
        properties.put("group.id", "metric-group");
        properties.put("auto.offset.reset", "latest");
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        SingleOutputStreamOperator<UrlInfo> dataStreamSource = env.addSource(
                new FlinkKafkaConsumer010<String>(
                        "testjin", // topic
                        new SimpleStringSchema(),
                        properties
                )
        ).setParallelism(1)
                // map: transform one stream into another, here String --> UrlInfo
                .map(string -> JSON.parseObject(string, UrlInfo.class));

        // the same stream can feed several sinks
        dataStreamSource.addSink(new UrlMysqlSink());
        dataStreamSource.addSink(new PrintSinkFunction<>());

        env.execute("save url to db");
    }
}
Just call addSink on the stream. A single stream can have multiple sinks attached at the same time, and each sink independently receives every record.
Note the asymmetry: sources are added to the StreamExecutionEnvironment instance, while sinks are added directly to the dataStreamSource.