Installing Kafka and ZooKeeper on Windows (very detailed): https://blog.csdn.net/weixin_33446857/article/details/81982455
Kafka download and installation guide (CentOS Linux): https://www.cnblogs.com/subendong/p/7786547.html
ZooKeeper download and installation guide: https://blog.csdn.net/ring300/article/details/80446918
1. Preparation:
A note up front: if something does not work when you run this, check that your Kafka and Spring Boot versions match the ones used here; the environment in this article has been tested.
Kafka broker version: kafka_2.11-1.1.0 (Scala 2.11), i.e. Kafka 1.1.0
Spring Boot version: 2.0.3.RELEASE (as declared in the pom.xml below)
Start ZooKeeper and Kafka ahead of time, and create a topic:
[root@Basic kafka_2.11-1.1.0]# bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test_topic
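If you want to confirm from Java that the topic was created, here is a minimal sketch using the AdminClient that ships with kafka-clients 0.11+. The class name TopicCheck is made up for illustration, and the broker address 192.168.239.128:9092 is the one assumed throughout this article (it must be reachable from wherever you run this; see the external-access note below).

import java.util.Properties;
import java.util.Set;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class TopicCheck {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        // Broker address assumed from this article's setup
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.239.128:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // List all topic names known to the broker and check for test_topic
            Set<String> names = admin.listTopics().names().get();
            System.out.println("test_topic exists: " + names.contains("test_topic"));
        }
    }
}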
Make sure your Kafka broker can actually be reached from your application. If it cannot, you need to enable external access:
config/server.properties
advertised.listeners=PLAINTEXT://192.168.239.128:9092
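For reference, a minimal sketch of the relevant lines in config/server.properties; binding the listener to 0.0.0.0 is an assumption so that the broker accepts connections on all interfaces while advertising the address above (adjust to your own network):

# accept connections on all interfaces (assumption for this sketch)
listeners=PLAINTEXT://0.0.0.0:9092
# address handed out to clients, as used throughout this article
advertised.listeners=PLAINTEXT://192.168.239.128:9092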
Maven dependencies:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>cn.demo</groupId>
    <artifactId>springboot_kafka</artifactId>
    <version>0.0.1-SNAPSHOT</version>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.0.3.RELEASE</version>
        <relativePath/>
    </parent>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.10.4</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.11.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>

        <!-- spring boot start -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>
2. Project structure:
To better reflect how this is used in real development, the producer usually pushes data to Kafka after some service interface has finished its business logic, while a consumer keeps listening on the topic and processes the data as it arrives. So here the producer is exposed as a REST endpoint and the consumer is placed under the kafka package. Note the @Component annotation on the consumer; without it the class is not scanned and the @KafkaListener is never registered.
3. Implementation code:
Spring Boot configuration file
application.yml
spring:
  kafka:
    bootstrap-servers: 192.168.239.128:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      group-id: test
      enable-auto-commit: true
      auto-commit-interval: 1000
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
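Spring Boot's auto-configuration builds the KafkaTemplate and the listener infrastructure from the YAML above, so no extra configuration code is required. If you prefer to configure the producer in Java (for example to tweak properties programmatically), a minimal sketch could look like the following; the class name KafkaProducerConfig and its package are made up for illustration, and the values simply mirror application.yml:

package cn.saytime.config; // hypothetical package for this sketch

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        // Same values as in application.yml
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.239.128:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}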
Producer
package cn.saytime.web;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * Test Kafka producer
 */
@RestController
@RequestMapping("kafka")
public class TestKafkaProducerController {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @RequestMapping("send")
    public String send(String msg) {
        kafkaTemplate.send("test_topic", msg);
        return "success";
    }
}
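Note that kafkaTemplate.send() is asynchronous and returns a ListenableFuture, so the endpoint above reports "success" even if the broker cannot be reached. If you want to log the actual outcome, a sketch of a callback-based variant you could add to the controller above (the method name sendWithCallback is made up for illustration):

    @RequestMapping("sendWithCallback")
    public String sendWithCallback(String msg) {
        // Attach success/failure callbacks to the asynchronous send
        kafkaTemplate.send("test_topic", msg).addCallback(
                result -> System.out.println("sent to partition " + result.getRecordMetadata().partition()
                        + ", offset " + result.getRecordMetadata().offset()),
                ex -> System.err.println("send failed: " + ex.getMessage()));
        return "success";
    }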
Consumer
The consumer listens on this topic and is invoked whenever a message arrives; there is no need for a while(true) polling loop.
package cn.saytime.kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

/**
 * Kafka consumer test
 */
@Component
public class TestConsumer {

    @KafkaListener(topics = "test_topic")
    public void listen(ConsumerRecord<?, ?> record) throws Exception {
        System.out.printf("topic = %s, offset = %d, value = %s%n",
                record.topic(), record.offset(), record.value());
    }
}
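If you only care about the message body, Spring Kafka can also hand you the deserialized payload directly instead of the full ConsumerRecord. A minimal sketch of what the listener above could look like in that style (the method name listenPayload is made up for illustration):

    @KafkaListener(topics = "test_topic")
    public void listenPayload(String message) {
        // The String value is produced by the StringDeserializer configured in application.yml
        System.out.println("received: " + message);
    }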
Application startup class:
package cn.saytime;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class TestApplication {

    public static void main(String[] args) {
        SpringApplication.run(TestApplication.class, args);
    }
}
4. Testing
Run the project and call: http://localhost:8080/kafka/send?msg=hello
Console output:
topic = test_topic, offset = 19, value = hello
To show that the consumer does not just run once and stop, call the endpoint again:
http://localhost:8080/kafka/send?msg=kafka
topic = test_topic, offset = 20, value = kafka
So you can see that the consumer is in fact continuously polling the topic for new records.