
Docker Advanced (9) Case Study: Building a ZooKeeper + Kafka Cluster with Docker Swarm

Date: 2023-02-08 13:31:10


Case study: building a ZooKeeper + Kafka cluster

Note: Docker Swarm is rarely used these days; most teams have moved to Kubernetes (K8s).

version: '3.8'
services:
  zoo1:
    image: zookeeper
    networks:
      - zookeeper_kafka
    hostname: zoo1
    # host port mapping
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
    volumes:
      - "/env/zk/zookeeper/conf1:/conf"
      - "/env/zk/zookeeper/data1:/data"
      - "/env/zk/zookeeper/datalog1:/datalog"
    deploy:
      mode: replicated
      replicas: 1
  zoo2:
    image: zookeeper
    networks:
      - zookeeper_kafka
    hostname: zoo2
    # host port mapping
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo3:2888:3888;2181
    volumes:
      - "/env/zk/zookeeper/conf2:/conf"
      - "/env/zk/zookeeper/data2:/data"
      - "/env/zk/zookeeper/datalog2:/datalog"
    deploy:
      mode: replicated
      replicas: 1
  zoo3:
    image: zookeeper
    networks:
      - zookeeper_kafka
    hostname: zoo3
    # host port mapping
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
    volumes:
      - "/env/zk/zookeeper/conf3:/conf"
      - "/env/zk/zookeeper/data3:/data"
      - "/env/zk/zookeeper/datalog3:/datalog"
    deploy:
      mode: replicated
      replicas: 1
  kafka1:
    image: wurstmeister/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    ports:
      - 9091:9092
    external_links:
      - zoo1
      - zoo2
      - zoo3
    hostname: kafka1
    environment:
      # connect to the ZooKeeper ensemble
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      # broker id must be unique across the cluster
      KAFKA_BROKER_ID: 1
      # listen on all interfaces inside the container
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      # By default Kafka is reachable only inside the Docker network;
      # the host machine and external Java clients cannot connect to it.
      # Advertise the host's address so external clients can reach the broker.
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://hadoop01:9091
    volumes:
      - "/env/zk/kafka/run1/docker.sock:/var/run/docker.sock"
      - "/env/zk/kafka/kafka1/:/kafka"
    networks:
      - zookeeper_kafka
    extra_hosts:
      # map the host name to the host IP inside the container
      - "hadoop01:192.168.10.11"
  kafka2:
    image: wurstmeister/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    ports:
      - 9092:9092
    external_links:
      - zoo1
      - zoo2
      - zoo3
    hostname: kafka2
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_BROKER_ID: 2
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://hadoop01:9092
    volumes:
      - "/env/zk/kafka/run2/docker.sock:/var/run/docker.sock"
      - "/env/zk/kafka/kafka2/:/kafka"
    networks:
      - zookeeper_kafka
    extra_hosts:
      - "hadoop01:192.168.10.11"
  kafka3:
    image: wurstmeister/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    ports:
      - 9093:9092
    external_links:
      - zoo1
      - zoo2
      - zoo3
    hostname: kafka3
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_BROKER_ID: 3
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://hadoop01:9093
    volumes:
      - "/env/zk/kafka/run3/docker.sock:/var/run/docker.sock"
      - "/env/zk/kafka/kafka3/:/kafka"
    networks:
      - zookeeper_kafka
    extra_hosts:
      - "hadoop01:192.168.10.11"
  kafka-manager:
    image: hlebalbau/kafka-manager
    ports:
      - "9010:9000"
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    environment:
      ZK_HOSTS: "zoo1:2181,zoo2:2181,zoo3:2181"
      APPLICATION_SECRET: "random-secret"
    networks:
      - zookeeper_kafka
networks:
  zookeeper_kafka:
    driver: overlay
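The overlay driver at the bottom of the stack file only works in Swarm mode, so the node must be initialized as a Swarm manager before deploying. A minimal single-node sketch, assuming the host IP 192.168.10.11 already used in the extra_hosts entries:

```shell
# Enable Swarm mode; required before any stack using an overlay network can deploy.
# 192.168.10.11 is the host address from the stack file's extra_hosts.
docker swarm init --advertise-addr 192.168.10.11

# Confirm Swarm is active on this node
docker info --format '{{.Swarm.LocalNodeState}}'
```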

Create the corresponding directories on the host, then run:

docker stack deploy -c filename.yml zk_kafka
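The directory tree those bind mounts expect can be created with a short loop. A sketch: the BASE variable is an assumption added here so the loop can be tried without root; on the actual Swarm nodes run it with BASE=/env/zk, the path the stack file mounts from.

```shell
# Create the host directories referenced by the bind mounts in the stack file.
# The stack file expects them under /env/zk; BASE defaults to a local path so
# this sketch can be run unprivileged -- set BASE=/env/zk on the real nodes.
BASE="${BASE:-./env/zk}"
for i in 1 2 3; do
  mkdir -p "$BASE/zookeeper/conf$i" "$BASE/zookeeper/data$i" "$BASE/zookeeper/datalog$i"
  mkdir -p "$BASE/kafka/kafka$i" "$BASE/kafka/run$i"
done
```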

Running the stack:

Some tasks may initially show as failed: docker stack deploy ignores depends_on, so Kafka can start before ZooKeeper is ready and exit. That is harmless; Swarm restarts failed tasks automatically, and the whole stack comes up on its own.
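The restart behavior can be watched from the manager node:

```shell
# Per-task state, including failed attempts and their restarts
docker stack ps zk_kafka

# Replica counts per service; healthy when every service shows 1/1
docker service ls

# Logs of one broker (service names are prefixed with the stack name)
docker service logs zk_kafka_kafka1
```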

Testing

Enter the Kafka bin directory (inside one of the Kafka containers, e.g. via docker exec -it <container-id> bash):

cd /opt/kafka/bin/

Create a topic

./kafka-topics.sh --create --zookeeper zoo1:2181 --partitions 2 --replication-factor 2 --topic first

List topics

./kafka-topics.sh --list --zookeeper zoo2:2181
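The partition leaders and replica placement of the new topic can also be inspected with the same tool's --describe flag, in the same style as the commands above:

```shell
# Show partition leaders, replicas and in-sync replicas for the topic
./kafka-topics.sh --describe --zookeeper zoo1:2181 --topic first
```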

Start a Kafka producer client

Note these two commands: one uses the internal Docker-network address, the other the external host mapping; the effect is the same.

./kafka-console-producer.sh --broker-list kafka1:9092 --topic first
./kafka-console-producer.sh --broker-list hadoop01:9091 --topic first

Open another terminal

Start a Kafka consumer client

Again, note these two commands: one uses the internal Docker-network address, the other the external host mapping; the effect is the same.

./kafka-console-consumer.sh --bootstrap-server kafka1:9092 --topic first --from-beginning
./kafka-console-consumer.sh --bootstrap-server hadoop01:9093 --topic first --from-beginning

Any pairing of the four commands above can produce and consume messages.

Screenshot of the running cluster:
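When finished, the entire stack (all services plus the overlay network) can be removed with one command:

```shell
# Remove every service created by the deploy, and the zookeeper_kafka network
docker stack rm zk_kafka
```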
