
References first:

Spark Streaming + Kafka Integration Guide (Kafka broker version 0.10.0 or higher): http://spark.apache.org/docs/2.2.0/streaming-kafka-0-10-integration.html

Kafka Producer API (Java client): http://kafka.apache.org/documentation/#producerapi

Versions:

Spark: 2.1.1
Scala: 2.11.12
Kafka (broker): 2.3.0
spark-streaming-kafka-0-10_2.11: 2.2.0

Development environment:

  Kafka is deployed on three VMs with hostnames coo1, coo2, and coo3 (versions as listed above; ZooKeeper 3.4.7).

  Create topic xzrz on Kafka with a replication factor of 3 and 4 partitions:

./kafka-topics.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --create --topic xzrz --replication-factor 3 --partitions 4
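
If you prefer to create the topic from code instead of the CLI, a minimal sketch with the AdminClient from kafka-clients 2.3.0 could look like the following (the CreateTopic object is a hypothetical helper of mine, not part of the original setup; the admin calls are the standard API):

import java.util.{Collections, Properties}
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewTopic}

// Hypothetical sketch: create topic xzrz with 4 partitions and replication
// factor 3, mirroring the kafka-topics.sh command above.
object CreateTopic {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "coo1:9092,coo2:9092,coo3:9092")
    val admin = AdminClient.create(props)
    try {
      val topic = new NewTopic("xzrz", 4, 3.toShort) // name, partitions, replication factor
      admin.createTopics(Collections.singleton(topic)).all().get() // block until the brokers confirm
    } finally {
      admin.close()
    }
  }
}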

  Prepare the code:

  One part is the Java-side Kafka producer: KafkaSender.java

  The other is the Scala-side Spark Streaming Kafka consumer: KafkaStreaming.scala

Kafka producer:

  Maven configuration:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>kafkaTest</groupId>
    <artifactId>kafkaTest</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>2.3.0</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.13-beta-2</version>
            <scope>compile</scope>
        </dependency>
    </dependencies>
</project>
KafkaSender.java:
package gjm;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.junit.Test;

import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class KafkaSender {
    @Test
    public void producer() throws InterruptedException, ExecutionException {
        Properties props = new Properties();
        props.put("key.serializer", "org.apache.kafka.common.serialization.IntegerSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "coo1:9092,coo2:9092,coo3:9092");
        // props.put(ProducerConfig.BATCH_SIZE_CONFIG, "1024");
        // props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, "0");
        // Configure exactly-once semantics on the producer: acks=all plus idempotence
        props.put("acks", "all");
        props.put("enable.idempotence", "true");
        Producer<Integer, String> kafkaProducer = new KafkaProducer<Integer, String>(props);
        for (int j = 0; j < 1; j++)
            for (int i = 0; i < 100; i++) {
                ProducerRecord<Integer, String> message = new ProducerRecord<Integer, String>("xzrz", "{wo|2019-12-12|1|2|0|5}");
                kafkaProducer.send(message);
            }
        // flush() and close() are mandatory here, much like with stream I/O:
        // the producer batches records in a local buffer, so without these two
        // calls you waste resources and some messages may never reach Kafka,
        // staying behind in the local batch.
        kafkaProducer.flush();
        kafkaProducer.close();
    }
}
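
Note that send() is asynchronous: it only hands the record to the local buffer and returns a Future<RecordMetadata>. If you want per-record delivery confirmation, you can block on that future. A hedged Scala sketch under the same settings (the BlockingSend object is mine, purely illustrative):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}

// Sketch: same producer settings as KafkaSender.java, but blocking on the
// Future returned by send() to confirm where each record landed.
object BlockingSend {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("key.serializer", "org.apache.kafka.common.serialization.IntegerSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "coo1:9092,coo2:9092,coo3:9092")
    props.put("acks", "all")
    props.put("enable.idempotence", "true")
    val producer = new KafkaProducer[Integer, String](props)
    try {
      val metadata = producer.send(
        new ProducerRecord[Integer, String]("xzrz", "{wo|2019-12-12|1|2|0|5}")
      ).get() // blocks until the broker acknowledges the write
      println(s"partition=${metadata.partition()} offset=${metadata.offset()}")
    } finally {
      producer.close() // close() also flushes any buffered records
    }
  }
}

Blocking on every send defeats batching, so treat this as a debugging aid rather than a production pattern.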

Kafka consumer side (Spark Streaming + Kafka), KafkaStreaming.scala:

  Maven:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>sparkMVN</groupId>
    <artifactId>sparkMVN</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <spark.version>2.1.1</spark.version>
        <hadoop.version>2.7.3</hadoop.version>
        <hbase.version>0.98.17-hadoop2</hbase.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>${spark.version}</version>
            <!-- In local mode this scope must stay commented out, otherwise the
                 Spark classes are missing at runtime (SparkContext not found) -->
            <!--<scope>provided</scope>-->
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>
        <!-- hadoop -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <!-- hbase -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>${hbase.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>${hbase.version}</version>
        </dependency>
    </dependencies>
</project>
KafkaStreaming.scala:
package gjm.sparkDemos

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges, KafkaUtils}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}
import org.slf4j.LoggerFactory

object KafkaStreaming {
  def main(args: Array[String]): Unit = {
    val LOG = LoggerFactory.getLogger(KafkaStreaming.getClass)
    LOG.info("Streaming start----->")
    // Set the local thread count to 6 here and see what happens to Kafka consumption
    val conf = new SparkConf().setMaster("local[6]")
      .setAppName("KafkaStreaming")
    val sc = new SparkContext(conf)
    val ssc = new StreamingContext(sc, Seconds(3))
    val topics = Array("xzrz")
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "coo1:9092,coo2:9092,coo3:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "fwjkcx",
      "auto.offset.reset" -> "earliest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
      // "heartbeat.interval.ms" -> (90000: java.lang.Integer),
      // "session.timeout.ms" -> (120000: java.lang.Integer),
      // "group.max.session.timeout.ms" -> (120000: java.lang.Integer),
      // "request.timeout.ms" -> (130000: java.lang.Integer),
      // "fetch.max.wait.ms" -> (120000: java.lang.Integer)
    )
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    )
    LOG.info("Streaming had Created----->")
    LOG.info("Streaming Consuming msg----->")
    stream.foreachRDD { rdd =>
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      rdd.foreachPartition(recordIt => {
        for (record <- recordIt) {
          LOG.info("Message record info : Topic-->{}, partition-->{}, checkNum-->{}, offset-->{}, value-->{}",
            record.topic(), record.partition().toString, record.checksum().toString, record.offset().toString, record.value())
        }
      })
      // some time later, after outputs have completed
      stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
    }
    ssc.start()
    ssc.awaitTermination()
  }
}
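
A convenient by-product of grabbing offsetRanges is that each batch already tells you how many records arrived per partition, with no need for the Kafka CLI. A sketch of an alternative foreachRDD body (reusing the stream and LOG values defined above; untilOffset minus fromOffset is the per-partition record count):

stream.foreachRDD { rdd =>
  val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  ranges.foreach { r =>
    // One OffsetRange per Kafka partition in this batch
    LOG.info("topic={} partition={} from={} until={} count={}",
      r.topic, r.partition.toString, r.fromOffset.toString,
      r.untilOffset.toString, (r.untilOffset - r.fromOffset).toString)
  }
  stream.asInstanceOf[CanCommitOffsets].commitAsync(ranges)
}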

Verification test:

1. Use the producer to send 100 messages.
2. Start Kafka's built-in console consumer with group id test:
sh kafka-console-consumer.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --topic xzrz --from-beginning --group test
3. Start the Spark Streaming Kafka job; the code sets the batch interval to 3 seconds.
4. Query each group's consumption with Kafka's built-in consumer-groups tool:
./kafka-consumer-groups.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --describe --group test
./kafka-consumer-groups.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --describe --group fwjkcx
Results:

1. First, the test console consumer group:

[root@coo3 bin]# ./kafka-consumer-groups.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --describe --group test

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                     HOST            CLIENT-ID
test            xzrz            0          25              25              0               consumer-1-4adfdb85-45ef-40a5-9127-7bb6239e0e29 /192.168.0.217  consumer-1
test            xzrz            1          25              25              0               consumer-1-4adfdb85-45ef-40a5-9127-7bb6239e0e29 /192.168.0.217  consumer-1
test            xzrz            2          25              25              0               consumer-1-4adfdb85-45ef-40a5-9127-7bb6239e0e29 /192.168.0.217  consumer-1
test            xzrz            3          25              25              0               consumer-1-4adfdb85-45ef-40a5-9127-7bb6239e0e29 /192.168.0.217  consumer-1

All four partitions are assigned to the same consumer, which consumed 100 messages in total.
2. The Spark Streaming consumer group:

[root@coo3 bin]# ./kafka-consumer-groups.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --describe --group fwjkcx

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                     HOST            CLIENT-ID
fwjkcx          xzrz            0          25              25              0               consumer-1-0cca92be-5970-4030-abd1-b8552dea9718 /192.168.0.60   consumer-1
fwjkcx          xzrz            1          25              25              0               consumer-1-0cca92be-5970-4030-abd1-b8552dea9718 /192.168.0.60   consumer-1
fwjkcx          xzrz            2          25              25              0               consumer-1-0cca92be-5970-4030-abd1-b8552dea9718 /192.168.0.60   consumer-1
fwjkcx          xzrz            3          25              25              0               consumer-1-0cca92be-5970-4030-abd1-b8552dea9718 /192.168.0.60   consumer-1

Each of the four partitions consumed 25 messages, exactly as expected.
3. One more experiment: change the Spark Streaming master to local[3] and the consumer group to fwjkcx01, then observe:

[root@coo3 bin]# ./kafka-consumer-groups.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --describe --group fwjkcx01

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                     HOST            CLIENT-ID
fwjkcx01        xzrz            0          25              25              0               consumer-1-03542086-c95d-41c2-b199-24a158708b65 /192.168.0.60   consumer-1
fwjkcx01        xzrz            1          25              25              0               consumer-1-03542086-c95d-41c2-b199-24a158708b65 /192.168.0.60   consumer-1
fwjkcx01        xzrz            2          25              25              0               consumer-1-03542086-c95d-41c2-b199-24a158708b65 /192.168.0.60   consumer-1
fwjkcx01        xzrz            3          25              25              0               consumer-1-03542086-c95d-41c2-b199-24a158708b65 /192.168.0.60   consumer-1

The consumption pattern is unchanged: all four partitions are still served the same way, so the thread count specified in local[N] simply does not take effect for Kafka consumption.
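
This matches how the 0-10 direct stream works: parallelism follows the topic's partition count, and each batch RDD gets one partition per Kafka partition no matter what local[N] says. You can verify this from inside the job with a one-liner like the following (a sketch reusing the stream and LOG values from the code above):

stream.foreachRDD { rdd =>
  // Expect 4 here for topic xzrz, whether the master is local[3] or local[6]
  LOG.info("RDD partitions in this batch: {}", rdd.getNumPartitions.toString)
}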
4. Now send another 1,000 messages and check group fwjkcx again:

[root@coo3 bin]# ./kafka-consumer-groups.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --describe --group fwjkcx

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                     HOST            CLIENT-ID
fwjkcx          xzrz            0          275             275             0               consumer-1-fc238353-1b55-4efa-9c4f-54580ed81b0e /192.168.0.60   consumer-1
fwjkcx          xzrz            1          275             275             0               consumer-1-fc238353-1b55-4efa-9c4f-54580ed81b0e /192.168.0.60   consumer-1
fwjkcx          xzrz            2          275             275             0               consumer-1-fc238353-1b55-4efa-9c4f-54580ed81b0e /192.168.0.60   consumer-1
fwjkcx          xzrz            3          275             275             0               consumer-1-fc238353-1b55-4efa-9c4f-54580ed81b0e /192.168.0.60   consumer-1

Everything checks out: each partition advanced from offset 25 to 275, i.e. 250 of the 1,000 new messages per partition, with zero lag.