Author: NHHermit | Source: Internet | 2023-06-15 10:53
How It Works (Streaming)
Spark Streaming is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant processing of live data streams. Data can be ingested from sources such as Kafka, Flume, Kinesis, or TCP sockets, and processed with complex algorithms expressed through high-level functions such as map, reduce, join, and window.
The processed data can be pushed out to file systems, databases, and live dashboards. You can also apply Spark's machine learning and graph processing algorithms to data streams.
Internally, Spark Streaming receives live input data streams and divides the data into batches, which are then processed by the Spark engine to produce the final stream of results, also in batches.
Discretized Streams (DStreams)
Discretized Streams, or DStreams, are the basic abstraction provided by Spark Streaming. A DStream represents a continuous stream of data: either the input data stream received from a source, or the processed data stream produced by transforming another stream.
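As a minimal sketch of the abstraction (the local master, host, port, and 1-second batch interval are illustrative assumptions), an input DStream can be created from a TCP socket and a new DStream derived from it with an ordinary transformation; each batch interval of the stream corresponds to one RDD processed by the Spark engine:

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class DStreamSketch {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("dstream-sketch");
        // Every 1-second batch of input becomes one RDD that the Spark engine processes.
        JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(1));

        // Input DStream: a continuous stream of text lines received from a socket.
        JavaReceiverInputDStream<String> lines = ssc.socketTextStream("localhost", 9999);
        // Derived DStream: produced by transforming the input stream.
        JavaDStream<Integer> lineLengths = lines.map(String::length);
        lineLengths.print();

        ssc.start();
        ssc.awaitTermination();
    }
}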
Window Operations
Spark Streaming also provides windowed computations, which let you apply transformations over a sliding window of data.
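For example, a windowed word count can be sketched with reduceByKeyAndWindow; the 30-second window and 10-second slide below are illustrative values (both must be multiples of the batch interval):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

import scala.Tuple2;

public class WindowedWordCountSketch {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("windowed-word-count");
        JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(10));

        JavaReceiverInputDStream<String> lines = ssc.socketTextStream("localhost", 9999);

        // Count words over a 30-second window that slides forward every 10 seconds.
        JavaPairDStream<String, Integer> windowedCounts = lines
                .flatMap(line -> Arrays.asList(line.split(" ")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKeyAndWindow((a, b) -> a + b, Durations.seconds(30), Durations.seconds(10));

        windowedCounts.print();
        ssc.start();
        ssc.awaitTermination();
    }
}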
Application (Examples)
1. Dependencies
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.12</artifactId>
    <version>2.4.4</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.12</artifactId>
    <version>2.4.4</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.12</artifactId>
    <version>2.4.4</version>
</dependency>
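All Spark artifacts on the classpath should be built against the same Scala version (the _2.12 suffix above) and preferably the same Spark release; mixing _2.11 and _2.12 builds generally fails at runtime with binary incompatibility errors.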
2. Example
package com.citydo.faceadd;

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.regex.Pattern;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

import scala.Tuple2;

public class SparkSteamingDemo {

    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) throws InterruptedException {
        // awaitTermination() blocks, so run only one of the two examples per submission.
        if (args.length >= 4) {
            getKafka(args);
        } else {
            getWords();
        }
    }

    /** Word count over a text stream received from a TCP socket. */
    public static void getWords() throws InterruptedException {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("streaming word count");
        JavaSparkContext sc = new JavaSparkContext(conf);
        sc.setLogLevel("INFO");

        // 1-second batch interval.
        JavaStreamingContext ssc = new JavaStreamingContext(sc, Durations.seconds(1));
        JavaReceiverInputDStream<String> lines = ssc.socketTextStream("localhost", 9999);

        JavaPairDStream<String, Integer> wordCounts = lines
                .flatMap(x -> Arrays.asList(SPACE.split(x)).iterator())
                .mapToPair(x -> new Tuple2<>(x, 1))
                .reduceByKey((x, y) -> x + y);

        wordCounts.print();
        ssc.start();
        ssc.awaitTermination();
    }

    /** Word count over messages consumed directly from Kafka (kafka-0-10 direct stream). */
    public static void getKafka(String[] args) throws InterruptedException {
        String checkPointDir = args[0];
        String batchTime = args[1];
        String topics = args[2];
        String brokers = args[3];

        Duration batchDuration = Durations.seconds(Integer.parseInt(batchTime));
        SparkConf conf = new SparkConf().setAppName("streaming word count");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, batchDuration);
        jssc.checkpoint(checkPointDir);

        Set<String> topicsSet = new HashSet<>(Arrays.asList(topics.split(",")));
        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", brokers);
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "spark-streaming-demo");
        kafkaParams.put("auto.offset.reset", "latest");

        JavaInputDStream<ConsumerRecord<String, String>> lines = KafkaUtils.createDirectStream(
                jssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.<String, String>Subscribe(topicsSet, kafkaParams));

        // Count words in the message values of each batch.
        JavaPairDStream<String, Integer> wordCounts = lines
                .map(ConsumerRecord::value)
                .flatMap(v -> Arrays.asList(SPACE.split(v)).iterator())
                .mapToPair(w -> new Tuple2<>(w, 1))
                .reduceByKey((x, y) -> x + y);

        wordCounts.print();
        jssc.start();
        jssc.awaitTermination();
    }
}
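To drive the socket example, a local netcat listener such as nc -lk 9999 can feed lines into port 9999 while the job runs with a local master. The Kafka variant is meant to be launched through spark-submit with four positional arguments, in the order the code reads them: checkpoint directory, batch interval in seconds, comma-separated topic list, and Kafka broker list.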
Reference: https://www.cloudera.com/tutorials/introduction-to-spark-streaming/.html