
Building a ZooKeeper cluster and a Kafka cluster with Docker

Posted: 2022-12-10 20:52:58


ZooKeeper cluster setup

/znicy/p/7717426.html    # building a ZooKeeper cluster in Docker, by zni.feng

/luotianshuai/p/5206662.html    # parameter reference for a bare-metal (non-Docker) setup

-------------------------------------------------------------------------------------------

docker inspect docker.io/zookeeper

"Cmd": ["zkServer.sh","start-foreground"],"Volumes": {"/data": {}, "/datalog": {} }, "WorkingDir": "/zookeeper-3.4.10","Entrypoint": ["/docker-entrypoint.sh"], #由镜像信息看出,容器启动会运行这个脚本

The content of that entrypoint script is reproduced at: /znicy/p/7717426.html
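Incidentally, instead of reading through the full JSON you can pull out just these fields with docker inspect's Go-template flag (a small convenience, same image as above):

docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' docker.io/zookeeper
docker inspect --format '{{json .Config.Volumes}}' docker.io/zookeeper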

Based on the above, create a script that starts the ZooKeeper cluster containers. A container (bridge) network is used, since the ZooKeeper nodes only need to communicate and coordinate among themselves; using the host network throws errors (root cause not found yet).

#!/bin/bash
# Get zookeeper image
zkimage=`docker images | grep zookeeper | awk '{print $1}'`
if [ -n "$zkimage" ]
then
    echo 'The zookeeper image is already existed.'
else
    echo 'Pull the latest zookeeper image.'
    docker pull zookeeper
fi

# Create network for zookeeper containers
zknet=`docker network ls | grep yapi_net | awk '{print $2}'`
if [ -n "$zknet" ]
then
    echo 'The zknetwork is already existed.'
else
    echo 'Create zknetwork.'
    docker network create --subnet 172.30.0.0/16 yapi_net
fi

# Start zookeeper cluster
echo 'Start 3 zookeeper servers.'
rm -rf /opt/zookeeper_1/data /opt/zookeeper_1/datalog /var/log/zookeeper_1/log
rm -rf /opt/zookeeper_2/data /opt/zookeeper_2/datalog /var/log/zookeeper_2/log
rm -rf /opt/zookeeper_3/data /opt/zookeeper_3/datalog /var/log/zookeeper_3/log
mkdir -p /opt/zookeeper_1/data /opt/zookeeper_1/datalog /var/log/zookeeper_1/log
mkdir -p /opt/zookeeper_2/data /opt/zookeeper_2/datalog /var/log/zookeeper_2/log
mkdir -p /opt/zookeeper_3/data /opt/zookeeper_3/datalog /var/log/zookeeper_3/log

ZOO_SERVERS="server.1=zookeeper_1:2888:3888 server.2=zookeeper_2:2888:3888 server.3=zookeeper_3:2888:3888"

docker run --network yapi_net --ip 172.30.0.31 -d --restart always -v /opt/zookeeper_1/data:/data -v /opt/zookeeper_1/datalog:/datalog -v /var/log/zookeeper_1/log:/logs -e ZOO_SERVERS="$ZOO_SERVERS" -e ZOO_MY_ID=1 --name zookeeper_1 -p 2182:2181 docker.io/zookeeper
docker run --network yapi_net --ip 172.30.0.32 -d --restart always -v /opt/zookeeper_2/data:/data -v /opt/zookeeper_2/datalog:/datalog -v /var/log/zookeeper_2/log:/logs -e ZOO_SERVERS="$ZOO_SERVERS" -e ZOO_MY_ID=2 --name zookeeper_2 -p 2183:2181 docker.io/zookeeper
docker run --network yapi_net --ip 172.30.0.33 -d --restart always -v /opt/zookeeper_3/data:/data -v /opt/zookeeper_3/datalog:/datalog -v /var/log/zookeeper_3/log:/logs -e ZOO_SERVERS="$ZOO_SERVERS" -e ZOO_MY_ID=3 --name zookeeper_3 -p 2184:2181 docker.io/zookeeper

# Host-network variant (kept for reference; it failed in this setup):
#docker run --network host -d --restart always -v /opt/zookeeper_1/data:/data -v /opt/zookeeper_1/datalog:/datalog -v /var/log/zookeeper_1/log:/logs -e ZOO_SERVERS="$ZOO_SERVERS" -e ZOO_MY_ID=1 --name zookeeper_1 -p 2182:2181 docker.io/zookeeper
#docker run --network host -d --restart always -v /opt/zookeeper_2/data:/data -v /opt/zookeeper_2/datalog:/datalog -v /var/log/zookeeper_2/log:/logs -e ZOO_SERVERS="$ZOO_SERVERS" -e ZOO_MY_ID=2 --name zookeeper_2 -p 2183:2181 docker.io/zookeeper
#docker run --network host -d --restart always -v /opt/zookeeper_3/data:/data -v /opt/zookeeper_3/datalog:/datalog -v /var/log/zookeeper_3/log:/logs -e ZOO_SERVERS="$ZOO_SERVERS" -e ZOO_MY_ID=3 --name zookeeper_3 -p 2184:2181 docker.io/zookeeper

Just run the script.
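To confirm that the three servers actually formed a quorum, a quick check (assuming the official image keeps zkServer.sh on the PATH and the four-letter-word commands are enabled, which holds for the 3.4.x image used here):

# one node should report Mode: leader, the other two Mode: follower
docker exec zookeeper_1 zkServer.sh status
docker exec zookeeper_2 zkServer.sh status
docker exec zookeeper_3 zkServer.sh status

# or probe through the mapped host ports
echo srvr | nc 127.0.0.1 2182
echo ruok | nc 127.0.0.1 2183    # should answer imok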

Additional notes

1. When verifying with telnet <ip> <port>, the connection was refused. The cause was an incomplete configuration file, so the environment variables in the startup script had to be adjusted (using the Entrypoint script as the reference for which variables it consumes). The complete configuration file looks like this:

clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
server.1=zookeeper_1:2888:3888
server.2=zookeeper_2:2888:3888
server.3=zookeeper_3:2888:3888
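To double-check what the entrypoint actually rendered, the file can be read back from a running container (assuming the official image keeps its configuration under /conf, i.e. ZOO_CONF_DIR=/conf):

docker exec zookeeper_1 cat /conf/zoo.cfg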

2. Configuration file for a standalone (non-cluster) instance:

clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
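If ZOO_SERVERS and ZOO_MY_ID are left unset, the image's entrypoint should generate a standalone configuration along these lines on its own, so a single node can be started with just the following (a minimal sketch; the host paths and port mapping are illustrative):

docker run -d --restart always --name zookeeper_single -v /opt/zookeeper_single/data:/data -v /opt/zookeeper_single/datalog:/datalog -p 2181:2181 docker.io/zookeeper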

Kafka cluster

Configuration considerations:

Theoretical background

Number of Kafka partitions: reference 1, reference 2

Benefit of more partitions: in theory, the more partitions, the higher the total throughput the cluster can reach.

More partitions are not always better: (1) clients and brokers need more memory; (2) more open file handles are consumed (ulimit -n); (3) availability drops, because broker failures trigger leader elections for the partition replicas.

Estimated number of partitions = max(Tt/Tp, Tt/Tc), i.e. Tt divided by the slower of the two single-partition rates. (Create a topic with a single partition and measure the producer and consumer throughput against it; call the results Tp and Tc, in MB/s for example. Tt is the total target throughput for the topic.)
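A worked example with made-up numbers: if the single-partition benchmark gives Tp = 20 MB/s and Tc = 50 MB/s, and the target for the topic is Tt = 200 MB/s, the estimate is max(200/20, 200/50) = max(10, 4) = 10 partitions, with the producer side as the bottleneck; in practice a bit of headroom is usually added on top.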

Ideally, the number of consumer threads equals the number of partitions.

The difference between listeners and advertised.listeners: /600.html, /articles/kafka-practice-16/
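In terms of the environment variables used by the wurstmeister image below, the distinction looks like this (the 192.168.0.128 address stands for the host's LAN address, as in the final script):

# listeners: the socket the broker binds to inside the container; 0.0.0.0 accepts connections on any interface
-e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9092'
# advertised.listeners: the address the broker hands back to clients in its metadata, so it must be reachable from outside the container
-e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://192.168.0.128:9092'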

Deep dive into Kafka internals: storage + throughput analysis (detailed write-up 1, detailed write-up 2)

Kafka configuration references: configuration guide 1, configuration guide 2

Detailed configuration guide 3 (official documentation)

Configuration guide 4

Following the same pattern as the startup script above, write the script below. Kafka uses the host network; otherwise other machines cannot send messages to the brokers, because the advertised IP would be the container-internal IP.

#!/bin/bash
# Get kafka image
kfkimage=`docker images | grep 'docker.io/wurstmeister/kafka' | awk '{print $1}'`
if [ -n "$kfkimage" ]
then
    echo 'The docker.io/wurstmeister/kafka is already existed.'
else
    echo 'Pull the image.'
    docker pull docker.io/wurstmeister/kafka
fi

# Create network for kafka containers
kfknet=`docker network ls | grep yapi_net | awk '{print $2}'`
if [ -n "$kfknet" ]
then
    echo 'The kfknetwork is already existed.'
else
    echo 'Create kfknetwork.'
    docker network create --subnet 172.30.0.0/16 yapi_net
fi

# Start kafka cluster
echo 'Start 3 kafka servers.'
rm -rf /opt/kafka_1/logdata
rm -rf /opt/kafka_2/logdata
rm -rf /opt/kafka_3/logdata
mkdir -p /opt/kafka_1/logdata
mkdir -p /opt/kafka_2/logdata
mkdir -p /opt/kafka_3/logdata

# kafka ip
kfk_1_ip='172.30.0.41'
kfk_2_ip='172.30.0.42'
kfk_3_ip='172.30.0.43'
zk_jiqun_ip='172.30.0.31:2181'
#zk_jiqun_ip='172.30.0.31:2181,172.30.0.32:2181,172.30.0.33:2181'
#zk_jiqun_ip='192.168.0.128:2181'

# Earlier attempts on the yapi_net bridge network, advertising the host LAN address (kept for reference):
#docker run --restart always -d --name kafka_1 --network yapi_net --ip ${kfk_1_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9092' -e KAFKA_BROKER_ID='41' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://192.168.181.130:9092' -v /opt/kafka_1/logdata:/kafka -p 9092:9092 docker.io/wurstmeister/kafka
#docker run --restart always -d --name kafka_2 --network yapi_net --ip ${kfk_2_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9093' -e KAFKA_BROKER_ID='42' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://192.168.181.130:9093' -v /opt/kafka_2/logdata:/kafka -p 9093:9092 docker.io/wurstmeister/kafka
#docker run --restart always -d --name kafka_3 --network yapi_net --ip ${kfk_3_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9094' -e KAFKA_BROKER_ID='43' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://192.168.181.130:9094' -v /opt/kafka_3/logdata:/kafka -p 9094:9092 docker.io/wurstmeister/kafka

# Earlier attempts advertising the container IPs (kept for reference; 05cef8845b3d is the local image ID):
#docker run --restart always -d --name kafka_1 --network yapi_net --ip ${kfk_1_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9092' -e KAFKA_BROKER_ID='41' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://172.30.0.41:9092' -v /opt/kafka_1/logdata:/kafka -p 9092:9092 05cef8845b3d
#docker run --restart always -d --name kafka_2 --network yapi_net --ip ${kfk_2_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9093' -e KAFKA_BROKER_ID='42' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://172.30.0.42:9092' -v /opt/kafka_2/logdata:/kafka -p 9093:9092 05cef8845b3d
#docker run --restart always -d --name kafka_3 --network yapi_net --ip ${kfk_3_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9094' -e KAFKA_BROKER_ID='43' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://172.30.0.43:9092' -v /opt/kafka_3/logdata:/kafka -p 9094:9092 05cef8845b3d
#docker run --restart always -d --name kafka_1 --network yapi_net --ip ${kfk_1_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9092' -e KAFKA_BROKER_ID='41' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://'${kfk_1_ip}':9092' -v /opt/kafka_1/logdata:/kafka -p 9092:9092 docker.io/wurstmeister/kafka
#docker run --restart always -d --name kafka_2 --network yapi_net --ip ${kfk_2_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9092' -e KAFKA_BROKER_ID='42' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://'${kfk_2_ip}':9092' -v /opt/kafka_2/logdata:/kafka -p 9093:9092 docker.io/wurstmeister/kafka
#docker run --restart always -d --name kafka_3 --network yapi_net --ip ${kfk_3_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9092' -e KAFKA_BROKER_ID='43' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://'${kfk_3_ip}':9092' -v /opt/kafka_3/logdata:/kafka -p 9094:9092 docker.io/wurstmeister/kafka

# Final version: host network, advertising the host address 192.168.0.128
docker run --restart always -d --name kafka_1 --network host -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9092' -e KAFKA_BROKER_ID='41' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://192.168.0.128:9092' -v /opt/kafka_1/logdata:/kafka -p 9092:9092 docker.io/wurstmeister/kafka
docker run --restart always -d --name kafka_2 --network host -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9093' -e KAFKA_BROKER_ID='42' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://192.168.0.128:9093' -v /opt/kafka_2/logdata:/kafka -p 9093:9092 docker.io/wurstmeister/kafka
docker run --restart always -d --name kafka_3 --network host -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9094' -e KAFKA_BROKER_ID='43' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://192.168.0.128:9094' -v /opt/kafka_3/logdata:/kafka -p 9094:9092 docker.io/wurstmeister/kafka

Just run the script.
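A quick way to confirm that all three brokers registered themselves, assuming zkCli.sh is on the PATH inside the official zookeeper image:

docker exec zookeeper_1 zkCli.sh -server 127.0.0.1:2181 ls /brokers/ids
# the output should end with something like [41, 42, 43]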

Kafka stress testing; Kafka send/receive message testing
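For the send/receive and load tests mentioned above, the stock command-line tools shipped with the broker can be used. A minimal sketch, assuming the wurstmeister image keeps them under /opt/kafka/bin and the brokers advertise 192.168.0.128 as in the script (on newer Kafka versions, replace --zookeeper with --bootstrap-server in kafka-topics.sh):

# create a test topic with 3 partitions and 3 replicas
docker exec kafka_1 /opt/kafka/bin/kafka-topics.sh --create --zookeeper 172.30.0.31:2181 --replication-factor 3 --partitions 3 --topic test

# interactive produce / consume
docker exec -it kafka_1 /opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.0.128:9092 --topic test
docker exec -it kafka_1 /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.0.128:9093 --topic test --from-beginning

# simple load test: 1,000,000 records of 100 bytes, unthrottled
docker exec kafka_1 /opt/kafka/bin/kafka-producer-perf-test.sh --topic test --num-records 1000000 --record-size 100 --throughput -1 --producer-props bootstrap.servers=192.168.0.128:9092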
