Author: 湘刘涛 | Source: Internet | 2023-08-04 18:14
This article walks through how to use Transform operators in Flink. Many readers are not yet familiar with them, so the examples below are shared as a reference; hopefully you will find them useful.
Grouped aggregation
// Read a CSV file of (code, value, timestamp) tuples, key by code and keep the element with the minimum value per key
String path = "E:\\GIT\\flink-learn\\flink-learn\\telemetering.txt";
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
TupleTypeInfo<Tuple3<String, Double, Long>> typeInfo = new TupleTypeInfo<>(Types.STRING, Types.DOUBLE, Types.LONG);
TupleCsvInputFormat<Tuple3<String, Double, Long>> tupleCsvInputFormat =
        new TupleCsvInputFormat<>(new Path(path), typeInfo);
DataStreamSource<Tuple3<String, Double, Long>> dataStreamSource = env.createInput(tupleCsvInputFormat, typeInfo);
// or: DataStreamSource<Tuple3<String, Double, Long>> dataStreamSource = env.readFile(tupleCsvInputFormat, path);
SingleOutputStreamOperator<Tuple3<String, Double, Long>> operator = dataStreamSource
        .filter(Objects::nonNull)
        // .map()
        // .flatMap()
        // .keyBy(0)
        .keyBy(tuple -> tuple.f0)   // key by the first field (the code)
        .minBy(1);                  // keep the whole element with the minimum value field
        // .min(1)
        // .max(1)
        // .maxBy(1, false)
        // .sum(1)
        // .reduce(...)
        // .process(...)
operator.print().setParallelism(1);
env.execute();
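The commented-out reduce() above can express the same logic by hand. Here is a minimal sketch of my own (reusing the dataStreamSource defined above) that behaves like minBy(1): the ReduceFunction keeps whichever element has the smaller value field per key.

// A hand-written equivalent of minBy(1) using reduce():
// the ReduceFunction receives the previously reduced element and the new element
// and must return one element of the same type.
SingleOutputStreamOperator<Tuple3<String, Double, Long>> reduced = dataStreamSource
        .filter(Objects::nonNull)
        .keyBy(tuple -> tuple.f0)
        .reduce((previous, current) -> current.f1 < previous.f1 ? current : previous);
reduced.print().setParallelism(1);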
Splitting / merging streams
String path = "E:\\GIT\\flink-learn\\flink-learn\\telemetering.txt";
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
PojoTypeInfo<TelemeterDTO> typeInfo = (PojoTypeInfo<TelemeterDTO>) Types.POJO(TelemeterDTO.class);
PojoCsvInputFormat<TelemeterDTO> inputFormat = new PojoCsvInputFormat<>(new Path(path), typeInfo, new String[]{"code", "value", "timestamp"});
DataStreamSource<TelemeterDTO> dataStreamSource = env.createInput(inputFormat, typeInfo);
// split the stream by value
SplitStream<TelemeterDTO> splitStream = dataStreamSource
        .split(item -> {
            if (item.getValue() > 100) {
                return Collections.singletonList("high");
            }
            return Collections.singletonList("low");
        });
DataStream<TelemeterDTO> highStream = splitStream.select("high");
DataStream<TelemeterDTO> lowStream = splitStream.select("low");
// merge the streams
ConnectedStreams<TelemeterDTO, TelemeterDTO> connectedStreams = lowStream.connect(highStream);
// DataStream<TelemeterDTO> unionDataStream = lowStream.union(highStream); // union requires identical element types
SingleOutputStreamOperator<Tuple3<String, Double, Long>> operator = connectedStreams
        .map(new CoMapFunction<TelemeterDTO, TelemeterDTO, Tuple3<String, Double, Long>>() {
            @Override
            public Tuple3<String, Double, Long> map1(TelemeterDTO value) {
                return Tuple3.of(value.getCode(), value.getValue(), value.getTimestamp());
            }
            @Override
            public Tuple3<String, Double, Long> map2(TelemeterDTO value) {
                return Tuple3.of(value.getCode(), value.getValue(), value.getTimestamp());
            }
        });
operator.print();
env.execute();
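The TelemeterDTO class itself is not shown in the original code. Judging from the field names passed to PojoCsvInputFormat and the getters used in the CoMapFunction, a matching POJO would look roughly like this (the field types are my assumption):

// Assumed shape of TelemeterDTO; Flink treats it as a POJO because it has
// a public no-argument constructor and a getter/setter for every field.
public class TelemeterDTO {
    private String code;
    private Double value;
    private Long timestamp;

    public TelemeterDTO() {
    }

    public String getCode() { return code; }
    public void setCode(String code) { this.code = code; }

    public Double getValue() { return value; }
    public void setValue(Double value) { this.value = value; }

    public Long getTimestamp() { return timestamp; }
    public void setTimestamp(Long timestamp) { this.timestamp = timestamp; }
}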
UDF functions provide the low-level support behind the operators above; each of the interfaces below can be implemented as a lambda, an anonymous class, or a standalone class (see the sketch after the list):
MapFunction
FilterFunction
ReduceFunction
ProcessFunction
SourceFunction
SinkFunction
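For example, instead of the Objects::nonNull lambda used earlier, a FilterFunction can be written as a reusable class. This is only a sketch: the ThresholdFilter name and the threshold parameter are mine, not from the original code.

import org.apache.flink.api.common.functions.FilterFunction;

// A reusable FilterFunction: keep only readings whose value exceeds a threshold.
public class ThresholdFilter implements FilterFunction<TelemeterDTO> {
    private final double threshold;

    public ThresholdFilter(double threshold) {
        this.threshold = threshold;
    }

    @Override
    public boolean filter(TelemeterDTO value) {
        return value != null && value.getValue() > threshold;
    }
}

// usage: dataStreamSource.filter(new ThresholdFilter(100));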
Rich functions
Rich functions add lifecycle methods and access to runtime context information, for example:
open() can establish a database connection when the operator is initialized
close() can release resources before the operator's life ends
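As a minimal sketch of that lifecycle (the JDBC URL and credentials below are placeholders, not part of the original article), a RichMapFunction can open a connection once per parallel subtask in open() and release it in close():

import java.sql.Connection;
import java.sql.DriverManager;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

public class EnrichMapFunction extends RichMapFunction<TelemeterDTO, TelemeterDTO> {
    private transient Connection connection;

    @Override
    public void open(Configuration parameters) throws Exception {
        // called once per parallel subtask before any map() call; the URL is a placeholder
        connection = DriverManager.getConnection("jdbc:mysql://localhost:3306/demo", "user", "password");
        System.out.println("opened connection in subtask " + getRuntimeContext().getIndexOfThisSubtask());
    }

    @Override
    public TelemeterDTO map(TelemeterDTO value) throws Exception {
        // the connection opened above is available here, e.g. for dimension lookups
        return value;
    }

    @Override
    public void close() throws Exception {
        // called before the operator is torn down
        if (connection != null) {
            connection.close();
        }
    }
}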
That covers how to use Transform in Flink. Thanks for reading, and I hope the examples above are helpful.