Deploying a Highly Available Kafka Cluster on K8s with Helm

Deploying Kafka on K8s

https://bitnami.com/stack/kafka/helm
https://artifacthub.io/packages/helm/bitnami/kafka

[root@k8s-master kafka]# pwd
/k8s/k8sYaml/kafka

# Deploy Kafka
helm install kafka -n mms .

# Uninstall Kafka
# helm uninstall kafka -n mms
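
After installing, the release can be checked with standard Helm commands (Helm v3 syntax):

helm list -n mms
helm status kafka -n mms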

Deployment output:

[root@k8s-master kafka]# helm install kafka -n mms .
NAME: kafka
LAST DEPLOYED: Fri Oct 28 15:54:24 2022
NAMESPACE: mms
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 19.0.0
APP VERSION: 3.3.1

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.mms.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-0.kafka-headless.mms.svc.cluster.local:9092
    kafka-1.kafka-headless.mms.svc.cluster.local:9092
    kafka-2.kafka-headless.mms.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.3.1-debian-11-r1 --namespace mms --command -- sleep infinity
    kubectl exec --tty -i kafka-client --namespace mms -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --broker-list kafka-0.kafka-headless.mms.svc.cluster.local:9092,kafka-1.kafka-headless.mms.svc.cluster.local:9092,kafka-2.kafka-headless.mms.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --bootstrap-server kafka.mms.svc.cluster.local:9092 \
            --topic test \
            --from-beginning

Check the pods:

[root@k8s-master kafka]# kubectl get pods -n mms
NAME                                     READY   STATUS    RESTARTS   AGE
dolphinscheduler-alert-d547bc58f-zw6v4   2/2     Running   0          78m
dolphinscheduler-api-548b4b4c59-dq57j    2/2     Running   0          78m
dolphinscheduler-master-0                2/2     Running   0          78m
dolphinscheduler-master-1                2/2     Running   0          78m
dolphinscheduler-master-2                2/2     Running   0          78m
dolphinscheduler-worker-0                2/2     Running   0          78m
dolphinscheduler-worker-1                2/2     Running   0          78m
dolphinscheduler-worker-2                2/2     Running   0          78m
kafka-0                                  2/2     Running   0          52s
kafka-1                                  2/2     Running   0          52s
kafka-2                                  2/2     Running   0          52s
mysql-mms-69ff94c459-lgxzb               1/1     Running   0          24h
zk-0                                     1/1     Running   0          21h
zk-1                                     1/1     Running   0          21h
zk-2                                     1/1     Running   0          21h
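
Besides the pods, it is worth confirming the Services created by the chart; the kafka and kafka-headless names below come straight from the DNS names in the NOTES output above:

kubectl get svc -n mms | grep kafka

# Expect a ClusterIP Service "kafka" on port 9092 and a headless Service "kafka-headless"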

Debugging

1. Query and create topics

Enter the pod:

kubectl exec -it -n mms kafka-0 -- bash

Enter a specific container inside the pod:

# kubectl exec -it -n mms kafka-1 -c kafka -- bash
# kubectl exec -it -n mms kafka-1 --container kafka -- bash
kubectl exec -it -n mms kafka-0 -- bash

Create a topic (3 partitions, 2 replicas):

# Example (Kafka < 2.2 uses the --zookeeper flag):
# kafka-topics.sh --zookeeper 192.168.94.151:2181/kafka --create --topic test-topic --replication-factor 2 --partitions 3
# kafka-topics.sh --zookeeper zk-cs:2181 --topic test001 --create --partitions 3 --replication-factor 2

Kafka >= 2.2 uses --bootstrap-server, which must point at the Kafka brokers rather than ZooKeeper:

kafka-topics.sh --bootstrap-server kafka.mms.svc.cluster.local:9092 --topic test001 --create --partitions 3 --replication-factor 2

Install a Kafka client:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.3.1-debian-11-r1 --namespace mms --command -- sleep infinity

Enter the client pod and run kafka-console-producer.sh as a test producer to send messages:

kubectl exec --tty -i kafka-client --namespace mms -- bash

kafka-console-producer.sh --broker-list kafka-0.kafka-headless.mms.svc.cluster.local:9092,kafka-1.kafka-headless.mms.svc.cluster.local:9092,kafka-2.kafka-headless.mms.svc.cluster.local:9092 --topic test-seldon
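
To read these messages back, run a matching console consumer from the same client pod (same bootstrap address as shown in the NOTES above):

kafka-console-consumer.sh --bootstrap-server kafka.mms.svc.cluster.local:9092 --topic test-seldon --from-beginning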

Send test messages from a terminal:

kubectl exec -it kafka-0 -n mms -- kafka-console-producer.sh --broker-list kafka-0.kafka-headless.mms.svc.cluster.local:9092 --topic test

Start another terminal to consume them:

kubectl exec -it kafka-0 -n mms -- kafka-console-consumer.sh --bootstrap-server kafka.mms.svc.cluster.local:9092 --topic test --from-beginning
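
To inspect consumer groups and their offsets, kafka-consumer-groups.sh can be run the same way. The console consumer above joins an auto-generated console-consumer-* group, so take the group name from the --list output (the name below is only a placeholder):

kubectl exec -it kafka-0 -n mms -- kafka-consumer-groups.sh --bootstrap-server kafka.mms.svc.cluster.local:9092 --list

# Replace the group name with one returned by --list above
kubectl exec -it kafka-0 -n mms -- kafka-consumer-groups.sh --bootstrap-server kafka.mms.svc.cluster.local:9092 --describe --group console-consumer-12345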

List topics; describe a topic:

kafka-topics.sh --list --bootstrap-server kafka.mms.svc.cluster.local:9092
kafka-topics.sh --bootstrap-server kafka.mms.svc.cluster.local:9092 --describe --topic test001

# Legacy form (Kafka < 2.2 only; --zookeeper was removed from kafka-topics.sh in Kafka 3.x):
# kafka-topics.sh --list --zookeeper zookeeper:2181
# kafka-topics.sh --zookeeper zookeeper:2181 --describe --topic test001
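
To clean up the test topic afterwards, the same tool supports --delete (this assumes delete.topic.enable has not been disabled on the brokers; it defaults to true):

kafka-topics.sh --bootstrap-server kafka.mms.svc.cluster.local:9092 --delete --topic test001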

Deploying Kafka with Docker

1. Pull the Kafka image

docker pull wurstmeister/kafka

2. Run Kafka

docker run -d  --log-driver json-file --log-opt max-size=500m --log-opt max-file=2 --name kafka -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=192.168.1.189:31812/kafka -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.1.190:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -v /etc/localtime:/etc/localtime wurstmeister/kafka
  • -e KAFKA_BROKER_ID=0: each broker in a Kafka cluster has a unique BROKER_ID to identify itself
  • -e KAFKA_ZOOKEEPER_CONNECT=192.168.1.189:31812/kafka: the ZooKeeper connection string (and chroot path) that manages this Kafka broker
  • -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.1.190:9092: the address and port the broker registers in ZooKeeper and advertises to clients
  • -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092: the listener address and port the broker binds to
  • -v /etc/localtime:/etc/localtime: sync the container's time with the host
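
A quick sanity check after starting the container (the grep simply looks for the broker startup message in the logs):

docker ps | grep kafka

# The broker logs a "started (kafka.server.KafkaServer)" line once it is up
docker logs kafka 2>&1 | grep -i started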

3. Enter the Kafka container

docker exec -it kafka /bin/bash

4. Go into Kafka's bin directory:

cd  /opt/kafka_2.13-2.8.1/bin

5. Create a new topic (test-kafka) to store events

./kafka-topics.sh --create --topic test-kafka --bootstrap-server localhost:9092

# ./kafka-topics.sh --create --zookeeper 192.168.1.189:31812 --replication-factor 2 --partitions 2 --topic partopic
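
To verify the topic end to end, run a console producer and a console consumer inside the same container; kafka-console-producer.sh accepts --bootstrap-server from Kafka 2.5 onwards (use --broker-list on older builds):

# Terminal 1: produce a few test messages
./kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-kafka

# Terminal 2: consume them back from the beginning
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-kafka --from-beginning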

