
Spark 2.x: Two Streaming APIs + Kafka

Since Spark 2.x introduced Structured Streaming, data processing has been gradually shifting to the DataFrame/Dataset API. The following shows how to integrate Spark 2.x with Kafka 0.10+.

Structured Streaming + Kafka

  1. Add the dependency
groupId = org.apache.spark
artifactId = spark-sql-kafka-0-10_2.11
version = 2.1.1

To make the dependency tree easier to see, here is my project's sbt file. Note that the Spark artifacts are marked provided because the cluster supplies them at runtime, while the Kafka connector must be shipped with the application:

name := "spark-test"
version := "1.0"
scalaVersion := "2.11.7"
// https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11
libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.1.1" % "provided"
// https://mvnrepository.com/artifact/org.apache.spark/spark-mllib_2.11
libraryDependencies += "org.apache.spark" % "spark-mllib_2.11" % "2.1.1" % "provided"
// https://mvnrepository.com/artifact/org.apache.spark/spark-streaming_2.11
libraryDependencies += "org.apache.spark" % "spark-streaming_2.11" % "2.1.1" % "provided"
// https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.7.3"
// https://mvnrepository.com/artifact/mysql/mysql-connector-java
libraryDependencies += "mysql" % "mysql-connector-java" % "5.1.38"
// https://mvnrepository.com/artifact/org.apache.kafka/kafka_2.11
libraryDependencies += "org.apache.kafka" % "kafka_2.11" % "0.10.2.1"
//libraryDependencies += "org.apache.spark" % "spark-streaming-kafka-0-10_2.11" % "2.1.1"
libraryDependencies += "org.apache.spark" % "spark-sql-kafka-0-10_2.11" % "2.1.1"
  2. Connect to Kafka with Structured Streaming
import org.apache.spark.sql.SparkSession

def main(args: Array[String]): Unit = {
  val spark = SparkSession
    .builder()
    .appName("Spark structured streaming Kafka example")
    // .master("local[2]")
    .getOrCreate()

  // Subscribe to the "testss" topic; each row carries a binary key/value plus metadata
  val inputstream = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "127.0.0.1:9092")
    .option("subscribe", "testss")
    .load()

  import spark.implicits._
  // key and value arrive as binary, so cast them to strings before concatenating
  val query = inputstream.select($"key".cast("string"), $"value".cast("string"))
    .as[(String, String)].map(kv => kv._1 + " " + kv._2)
    .writeStream
    .outputMode("append")
    .format("console")
    .start()

  query.awaitTermination()
}
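
The cast above is required because key and value are binary columns. An equivalent sketch of just the decoding step using selectExpr (assuming the same inputstream and spark.implicits._ as above):

// Equivalent decoding step: cast the binary key/value columns to strings
val decoded = inputstream
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .as[(String, String)]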

Each row of the stream has the following schema:

Column         Type
key            binary
value          binary
topic          string
partition      int
offset         long
timestamp      long
timestampType  int
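
These metadata columns can be projected like any other column. A minimal sketch (reusing the spark session, broker address, and testss topic from the example above) that keeps the Kafka metadata next to the decoded payload:

import org.apache.spark.sql.functions.col

// Project Kafka metadata columns alongside the decoded message value
val withMeta = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "127.0.0.1:9092")
  .option("subscribe", "testss")
  .load()
  .select(
    col("topic"), col("partition"), col("offset"),
    col("value").cast("string").as("value"))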

Configurable options

  • assign (JSON string, e.g. {"topicA":[0,1],"topicB":[2,4]}): pins the exact TopicPartitions to consume. assign, subscribe, and subscribePattern are the three subscription modes; only one may be set per query (see the sketch after this list).
  • subscribe (a comma-separated list of topics): the topics to consume.
  • subscribePattern (Java regex string): consume all topics matching the regular expression.
  • kafka.bootstrap.servers (a comma-separated list of host:port): the Kafka brokers to connect to.
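
For illustration, a sketch of the two non-subscribe modes; the topic names and partition numbers here are only placeholders:

// "assign" pins exact partitions (mutually exclusive with "subscribe")
val byAssign = spark.readStream.format("kafka")
  .option("kafka.bootstrap.servers", "127.0.0.1:9092")
  .option("assign", """{"testss":[0,1]}""")
  .load()

// "subscribePattern" matches topics by regular expression
val byPattern = spark.readStream.format("kafka")
  .option("kafka.bootstrap.servers", "127.0.0.1:9092")
  .option("subscribePattern", "test.*")
  .load()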

Options that cannot be configured

  • group.id: a unique group is created automatically for each query.
  • auto.offset.reset: set startingOffsets instead; Structured Streaming tracks the offsets of every stream itself to honor its exactly-once guarantee.
  • key.deserializer: keys are always read with ByteArrayDeserializer; decode them with DataFrame operations.
  • value.deserializer: values are always read with ByteArrayDeserializer; decode them with DataFrame operations.
  • enable.auto.commit: the source does not commit any offsets back to Kafka.
  • interceptor.classes: consumer interceptors are not supported, since keys and values are always handled as byte arrays.
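
In place of auto.offset.reset, the starting position is controlled through the startingOffsets source option; a minimal sketch:

// startingOffsets accepts "earliest", "latest", or a per-partition JSON map
val fromEarliest = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "127.0.0.1:9092")
  .option("subscribe", "testss")
  .option("startingOffsets", "earliest")
  .load()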

Spark Streaming (DStream) + Kafka

  1. Consume starting from the latest offsets

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.TaskContext
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges, KafkaUtils, OffsetRange}

def main(args: Array[String]): Unit = {
  val kafkaParams = Map[String, Object](
    "bootstrap.servers" -> "localhost:9092",
    "key.deserializer" -> classOf[StringDeserializer],
    "value.deserializer" -> classOf[StringDeserializer],
    "group.id" -> "use_a_separate_group_id_for_each_stream",
    "auto.offset.reset" -> "latest",
    "enable.auto.commit" -> (false: java.lang.Boolean)
  )
  // OpContext.sc is this project's shared SparkContext
  val ssc = new StreamingContext(OpContext.sc, Seconds(2))
  val topics = Array("test")
  val stream = KafkaUtils.createDirectStream[String, String](
    ssc,
    PreferConsistent,
    Subscribe[String, String](topics, kafkaParams)
  )
  stream.foreachRDD(rdd => {
    val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
    rdd.foreachPartition(iter => {
      // Runs on the executors: log the offset range handled by this partition
      val o: OffsetRange = offsetRanges(TaskContext.get.partitionId)
      println(s"${o.topic} ${o.partition} ${o.fromOffset} ${o.untilOffset}")
    })
    // Commit the consumed offsets back to Kafka once the batch is processed
    stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
  })
  // stream.map(record => (record.key, record.value)).print(1)
  ssc.start()
  ssc.awaitTermination()
}
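
The example above only logs offset ranges. Real processing is wired onto the stream before ssc.start(); a minimal sketch, assuming the messages are whitespace-separated text:

// Word count over the Kafka message values (add before ssc.start())
val counts = stream
  .map(record => record.value)
  .flatMap(_.split("\\s+"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
counts.print()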
  2. Consume starting from specified offsets

import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Assign
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges, KafkaUtils}

def main(args: Array[String]): Unit = {
  val kafkaParams = Map[String, Object](
    "bootstrap.servers" -> "localhost:9092",
    "key.deserializer" -> classOf[StringDeserializer],
    "value.deserializer" -> classOf[StringDeserializer],
    "group.id" -> "use_a_separate_group_id_for_each_stream",
    // "auto.offset.reset" -> "latest", // ignored here: Assign pins the start position
    "enable.auto.commit" -> (false: java.lang.Boolean)
  )
  val ssc = new StreamingContext(OpContext.sc, Seconds(2))
  // Start consuming partition 0 of topic "test" at offset 1100449855
  val fromOffsets = Map(new TopicPartition("test", 0) -> 1100449855L)
  val stream = KafkaUtils.createDirectStream[String, String](
    ssc,
    PreferConsistent,
    Assign[String, String](fromOffsets.keys.toList, kafkaParams, fromOffsets)
  )
  stream.foreachRDD(rdd => {
    val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
    // Runs on the driver: print each partition's offset range for this batch
    for (o <- offsetRanges) {
      println(s"${o.topic} ${o.partition} ${o.fromOffset} ${o.untilOffset}")
    }
    stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
  })
  // stream.map(record => (record.key, record.value)).print(1)
  ssc.start()
  ssc.awaitTermination()
}
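
The hard-coded 1100449855L stands for whatever offset you want to resume from. One way to obtain it, sketched below under the assumption that a previous run committed offsets for this group (reusing the broker, group id, topic, and imports from the example above; committed() returns null if the group has never committed):

import java.util.Properties
import org.apache.kafka.clients.consumer.KafkaConsumer

// Look up the group's last committed offset for one partition
def committedOffset(tp: TopicPartition): Long = {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")
  props.put("group.id", "use_a_separate_group_id_for_each_stream")
  props.put("key.deserializer", classOf[StringDeserializer].getName)
  props.put("value.deserializer", classOf[StringDeserializer].getName)
  val consumer = new KafkaConsumer[String, String](props)
  try {
    val meta = consumer.committed(tp) // null when nothing has been committed
    if (meta == null) 0L else meta.offset()
  } finally consumer.close()
}

val tp = new TopicPartition("test", 0)
val resumeFrom = Map(tp -> committedOffset(tp))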