Deploying a Kafka Cluster with Docker


Docker network planning

  docker network create kafka-net --subnet 172.20.0.0/16
  docker network ls

  • zookeeper1 (172.20.0.11, host port 2183 → container 2181)
  • zookeeper2 (172.20.0.12, host port 2184 → container 2181)
  • zookeeper3 (172.20.0.13, host port 2185 → container 2181)
  • kafka1 (172.20.0.14, internal 9093:9093, external 9193:9193)
  • kafka2 (172.20.0.15, internal 9094:9094, external 9194:9194)
  • kafka3 (172.20.0.16, internal 9095:9095, external 9195:9195)
  • kafka-manager (172.20.0.10, 9000:9000)
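The address plan above can be sanity-checked before any compose file is written. A minimal sketch (the dict layout and names are illustrative, not part of the deployment):

```python
import ipaddress

# The kafka-net subnet created above.
SUBNET = ipaddress.ip_network("172.20.0.0/16")

# Planned static addresses, copied from the list above.
PLAN = {
    "zookeeper1": "172.20.0.11",
    "zookeeper2": "172.20.0.12",
    "zookeeper3": "172.20.0.13",
    "kafka1": "172.20.0.14",
    "kafka2": "172.20.0.15",
    "kafka3": "172.20.0.16",
    "kafka-manager": "172.20.0.10",
}

# Every container needs a unique address inside the subnet.
assert len(set(PLAN.values())) == len(PLAN), "duplicate address in plan"
for name, addr in PLAN.items():
    assert ipaddress.ip_address(addr) in SUBNET, f"{name} is outside kafka-net"
print("address plan OK")
```

If a later compose file assigns an address outside 172.20.0.0/16, `docker compose up` will fail with an address-pool error, so checking the plan first saves a round trip.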
Configuration and authentication files for the deployment

Prepare the following two files. They can be placed anywhere, as long as the paths referenced by the deployment configuration below can reach them.

  • Create a JAAS authentication file shared by ZooKeeper and Kafka: server_jass.conf. This tutorial assumes it is placed at /root/kafka/kafka-sasl/server_jass.conf.
  Client {
      org.apache.zookeeper.server.auth.DigestLoginModule required
      username="test"
      password="test@QWER";
  };
  Server {
      org.apache.zookeeper.server.auth.DigestLoginModule required
      username="test"
      password="test@QWER"
      user_admin="test@QWER"
      user_test="test@QWER";
  };
  KafkaServer {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="test"
      password="test@QWER"
      user_test="test@QWER";
  };
  KafkaClient {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="test"
      password="test@QWER";
  };

Each user_<name> option declares an account, so user_test="test@QWER" defines the account test with password test@QWER. JAAS files have no comment syntax, so notes like this must stay outside the file.
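On the client side, the PLAIN credentials matching the KafkaClient entry above are usually supplied as a `sasl.jaas.config` string rather than a file. A small helper to build that string (the function name is mine, not part of the tutorial):

```python
def plain_jaas_config(username: str, password: str) -> str:
    """Build a sasl.jaas.config value matching the KafkaClient entry above."""
    return (
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
        f'username="{username}" password="{password}";'
    )

# For the test/test@QWER account defined in server_jass.conf:
print(plain_jaas_config("test", "test@QWER"))
```

The trailing semicolon is required; omitting it is a common cause of `LoginException` on the client side.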

  • Create a patched kafka-run-class script to avoid JMX port conflicts: kafka-run-class.sh. This tutorial assumes it is placed at /root/kafka/kafka-run-class.sh.
  #!/bin/bash
  # Licensed to the Apache Software Foundation (ASF) under one or more
  # contributor license agreements.  See the NOTICE file distributed with
  # this work for additional information regarding copyright ownership.
  # The ASF licenses this file to You under the Apache License, Version 2.0
  # (the "License"); you may not use this file except in compliance with
  # the License.  You may obtain a copy of the License at
  #
  #    http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
  if [ $# -lt 1 ];
  then
    echo "USAGE: $0 [-daemon] [-name servicename] [-loggc] classname [opts]"
    exit 1
  fi
  # CYGWIN == 1 if Cygwin is detected, else 0.
  if [[ $(uname -a) =~ "CYGWIN" ]]; then
    CYGWIN=1
  else
    CYGWIN=0
  fi
  if [ -z "$INCLUDE_TEST_JARS" ]; then
    INCLUDE_TEST_JARS=false
  fi
  # Exclude jars not necessary for running commands.
  regex="(-(test|test-sources|src|scaladoc|javadoc)\.jar|jar.asc)$"
  should_include_file() {
    if [ "$INCLUDE_TEST_JARS" = true ]; then
      return 0
    fi
    file=$1
    if [ -z "$(echo "$file" | egrep "$regex")" ] ; then
      return 0
    else
      return 1
    fi
  }
  base_dir=$(dirname $0)/..
  if [ -z "$SCALA_VERSION" ]; then
    SCALA_VERSION=2.13.5
    if [[ -f "$base_dir/gradle.properties" ]]; then
      SCALA_VERSION=`grep "^scalaVersion=" "$base_dir/gradle.properties" | cut -d= -f 2`
    fi
  fi
  if [ -z "$SCALA_BINARY_VERSION" ]; then
    SCALA_BINARY_VERSION=$(echo $SCALA_VERSION | cut -f 1-2 -d '.')
  fi
  # run ./gradlew copyDependantLibs to get all dependant jars in a local dir
  shopt -s nullglob
  if [ -z "$UPGRADE_KAFKA_STREAMS_TEST_VERSION" ]; then
    for dir in "$base_dir"/core/build/dependant-libs-${SCALA_VERSION}*;
    do
      CLASSPATH="$CLASSPATH:$dir/*"
    done
  fi
  for file in "$base_dir"/examples/build/libs/kafka-examples*.jar;
  do
    if should_include_file "$file"; then
      CLASSPATH="$CLASSPATH":"$file"
    fi
  done
  if [ -z "$UPGRADE_KAFKA_STREAMS_TEST_VERSION" ]; then
    clients_lib_dir=$(dirname $0)/../clients/build/libs
    streams_lib_dir=$(dirname $0)/../streams/build/libs
    streams_dependant_clients_lib_dir=$(dirname $0)/../streams/build/dependant-libs-${SCALA_VERSION}
  else
    clients_lib_dir=/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs
    streams_lib_dir=$clients_lib_dir
    streams_dependant_clients_lib_dir=$streams_lib_dir
  fi
  for file in "$clients_lib_dir"/kafka-clients*.jar;
  do
    if should_include_file "$file"; then
      CLASSPATH="$CLASSPATH":"$file"
    fi
  done
  for file in "$streams_lib_dir"/kafka-streams*.jar;
  do
    if should_include_file "$file"; then
      CLASSPATH="$CLASSPATH":"$file"
    fi
  done
  if [ -z "$UPGRADE_KAFKA_STREAMS_TEST_VERSION" ]; then
    for file in "$base_dir"/streams/examples/build/libs/kafka-streams-examples*.jar;
    do
      if should_include_file "$file"; then
        CLASSPATH="$CLASSPATH":"$file"
      fi
    done
  else
    VERSION_NO_DOTS=`echo $UPGRADE_KAFKA_STREAMS_TEST_VERSION | sed 's/\.//g'`
    SHORT_VERSION_NO_DOTS=${VERSION_NO_DOTS:0:((${#VERSION_NO_DOTS} - 1))} # remove last char, ie, bug-fix number
    for file in "$base_dir"/streams/upgrade-system-tests-$SHORT_VERSION_NO_DOTS/build/libs/kafka-streams-upgrade-system-tests*.jar;
    do
      if should_include_file "$file"; then
        CLASSPATH="$file":"$CLASSPATH"
      fi
    done
    if [ "$SHORT_VERSION_NO_DOTS" = "0100" ]; then
      CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zkclient-0.8.jar":"$CLASSPATH"
      CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zookeeper-3.4.6.jar":"$CLASSPATH"
    fi
    if [ "$SHORT_VERSION_NO_DOTS" = "0101" ]; then
      CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zkclient-0.9.jar":"$CLASSPATH"
      CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zookeeper-3.4.8.jar":"$CLASSPATH"
    fi
  fi
  for file in "$streams_dependant_clients_lib_dir"/rocksdb*.jar;
  do
    CLASSPATH="$CLASSPATH":"$file"
  done
  for file in "$streams_dependant_clients_lib_dir"/*hamcrest*.jar;
  do
    CLASSPATH="$CLASSPATH":"$file"
  done
  for file in "$base_dir"/shell/build/libs/kafka-shell*.jar;
  do
    if should_include_file "$file"; then
      CLASSPATH="$CLASSPATH":"$file"
    fi
  done
  for dir in "$base_dir"/shell/build/dependant-libs-${SCALA_VERSION}*;
  do
    CLASSPATH="$CLASSPATH:$dir/*"
  done
  for file in "$base_dir"/tools/build/libs/kafka-tools*.jar;
  do
    if should_include_file "$file"; then
      CLASSPATH="$CLASSPATH":"$file"
    fi
  done
  for dir in "$base_dir"/tools/build/dependant-libs-${SCALA_VERSION}*;
  do
    CLASSPATH="$CLASSPATH:$dir/*"
  done
  for cc_pkg in "api" "transforms" "runtime" "file" "mirror" "mirror-client" "json" "tools" "basic-auth-extension"
  do
    for file in "$base_dir"/connect/${cc_pkg}/build/libs/connect-${cc_pkg}*.jar;
    do
      if should_include_file "$file"; then
        CLASSPATH="$CLASSPATH":"$file"
      fi
    done
    if [ -d "$base_dir/connect/${cc_pkg}/build/dependant-libs" ] ; then
      CLASSPATH="$CLASSPATH:$base_dir/connect/${cc_pkg}/build/dependant-libs/*"
    fi
  done
  # classpath addition for release
  for file in "$base_dir"/libs/*;
  do
    if should_include_file "$file"; then
      CLASSPATH="$CLASSPATH":"$file"
    fi
  done
  for file in "$base_dir"/core/build/libs/kafka_${SCALA_BINARY_VERSION}*.jar;
  do
    if should_include_file "$file"; then
      CLASSPATH="$CLASSPATH":"$file"
    fi
  done
  shopt -u nullglob
  if [ -z "$CLASSPATH" ] ; then
    echo "Classpath is empty. Please build the project first e.g. by running './gradlew jar -PscalaVersion=$SCALA_VERSION'"
    exit 1
  fi
  # JMX settings
  if [ -z "$KAFKA_JMX_OPTS" ]; then
    KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false "
  fi
  # JMX port to use
  ISKAFKASERVER="false"
  if [[ "$*" =~ "kafka.Kafka" ]]; then
    ISKAFKASERVER="true"
  fi
  # Only the broker process (kafka.Kafka) binds the JMX port; CLI tools run
  # inside the same container inherit JMX_PORT and would otherwise fail with
  # "Port already in use".
  if [ $JMX_PORT ] && [ "$ISKAFKASERVER" = "true" ]; then
    KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT "
  fi
  # Log directory to use
  if [ "x$LOG_DIR" = "x" ]; then
    LOG_DIR="$base_dir/logs"
  fi
  # Log4j settings
  if [ -z "$KAFKA_LOG4J_OPTS" ]; then
    # Log to console. This is a tool.
    LOG4J_DIR="$base_dir/config/tools-log4j.properties"
    # If Cygwin is detected, LOG4J_DIR is converted to Windows format.
    (( CYGWIN )) && LOG4J_DIR=$(cygpath --path --mixed "${LOG4J_DIR}")
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_DIR}"
  else
    # create logs directory
    if [ ! -d "$LOG_DIR" ]; then
      mkdir -p "$LOG_DIR"
    fi
  fi
  # If Cygwin is detected, LOG_DIR is converted to Windows format.
  (( CYGWIN )) && LOG_DIR=$(cygpath --path --mixed "${LOG_DIR}")
  KAFKA_LOG4J_OPTS="-Dkafka.logs.dir=$LOG_DIR $KAFKA_LOG4J_OPTS"
  # Generic jvm settings you want to add
  if [ -z "$KAFKA_OPTS" ]; then
    KAFKA_OPTS=""
  fi
  # Set Debug options if enabled
  if [ "x$KAFKA_DEBUG" != "x" ]; then
      # Use default ports
      DEFAULT_JAVA_DEBUG_PORT="5005"
      if [ -z "$JAVA_DEBUG_PORT" ]; then
          JAVA_DEBUG_PORT="$DEFAULT_JAVA_DEBUG_PORT"
      fi
      # Use the defaults if JAVA_DEBUG_OPTS was not set
      DEFAULT_JAVA_DEBUG_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=${DEBUG_SUSPEND_FLAG:-n},address=$JAVA_DEBUG_PORT"
      if [ -z "$JAVA_DEBUG_OPTS" ]; then
          JAVA_DEBUG_OPTS="$DEFAULT_JAVA_DEBUG_OPTS"
      fi
      echo "Enabling Java debug options: $JAVA_DEBUG_OPTS"
      KAFKA_OPTS="$JAVA_DEBUG_OPTS $KAFKA_OPTS"
  fi
  # Which java to use
  if [ -z "$JAVA_HOME" ]; then
    JAVA="java"
  else
    JAVA="$JAVA_HOME/bin/java"
  fi
  # Memory options
  if [ -z "$KAFKA_HEAP_OPTS" ]; then
    KAFKA_HEAP_OPTS="-Xmx256M"
  fi
  # JVM performance options
  # MaxInlineLevel=15 is the default since JDK 14 and can be removed once older JDKs are no longer supported
  if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
    KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=true"
  fi
  while [ $# -gt 0 ]; do
    COMMAND=$1
    case $COMMAND in
      -name)
        DAEMON_NAME=$2
        CONSOLE_OUTPUT_FILE=$LOG_DIR/$DAEMON_NAME.out
        shift 2
        ;;
      -loggc)
        if [ -z "$KAFKA_GC_LOG_OPTS" ]; then
          GC_LOG_ENABLED="true"
        fi
        shift
        ;;
      -daemon)
        DAEMON_MODE="true"
        shift
        ;;
      *)
        break
        ;;
    esac
  done
  # GC options
  GC_FILE_SUFFIX='-gc.log'
  GC_LOG_FILE_NAME=''
  if [ "x$GC_LOG_ENABLED" = "xtrue" ]; then
    GC_LOG_FILE_NAME=$DAEMON_NAME$GC_FILE_SUFFIX
    # The first segment of the version number, which is '1' for releases before Java 9
    # it then becomes '9', '10', ...
    # Some examples of the first line of `java --version`:
    # 8 -> java version "1.8.0_152"
    # 9.0.4 -> java version "9.0.4"
    # 10 -> java version "10" 2018-03-20
    # 10.0.1 -> java version "10.0.1" 2018-04-17
    # We need to match to the end of the line to prevent sed from printing the characters that do not match
    JAVA_MAJOR_VERSION=$("$JAVA" -version 2>&1 | sed -E -n 's/.* version "([0-9]*).*$/\1/p')
    if [[ "$JAVA_MAJOR_VERSION" -ge "9" ]] ; then
      KAFKA_GC_LOG_OPTS="-Xlog:gc*:file=$LOG_DIR/$GC_LOG_FILE_NAME:time,tags:filecount=10,filesize=100M"
    else
      KAFKA_GC_LOG_OPTS="-Xloggc:$LOG_DIR/$GC_LOG_FILE_NAME -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M"
    fi
  fi
  # Remove a possible colon prefix from the classpath (happens at lines like `CLASSPATH="$CLASSPATH:$file"` when CLASSPATH is blank)
  # Syntax used on the right side is native Bash string manipulation; for more details see
  # http://tldp.org/LDP/abs/html/string-manipulation.html, specifically the section titled "Substring Removal"
  CLASSPATH=${CLASSPATH#:}
  # If Cygwin is detected, classpath is converted to Windows format.
  (( CYGWIN )) && CLASSPATH=$(cygpath --path --mixed "${CLASSPATH}")
  # Launch mode
  if [ "x$DAEMON_MODE" = "xtrue" ]; then
    nohup "$JAVA" $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp "$CLASSPATH" $KAFKA_OPTS "$@" > "$CONSOLE_OUTPUT_FILE" 2>&1 < /dev/null &
  else
    exec "$JAVA" $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp "$CLASSPATH" $KAFKA_OPTS "$@"
  fi
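The JMX guard in this script hinges on detecting whether the class being launched is the broker entry point (kafka.Kafka), so that only the broker binds the JMX port and CLI tools started inside the same container (which inherit the JMX_PORT environment variable) skip it. The detection pattern in isolation, as a standalone sketch:

```shell
#!/bin/bash
# Sketch of the broker-detection pattern used above: the full argument
# list is matched against the broker entry class name as a substring.
is_kafka_server() {
  if [[ "$*" =~ "kafka.Kafka" ]]; then
    echo "true"
  else
    echo "false"
  fi
}

is_kafka_server kafka.Kafka /opt/kafka/config/server.properties  # prints true
is_kafka_server kafka.admin.TopicCommand --list                  # prints false
```

Note this is a plain substring test (the regex is quoted), so any invocation whose arguments contain kafka.Kafka is treated as the broker.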
Image deployment

  • Create the ZooKeeper compose file: zk-docker-compose.yml
  services:
    zook1:
      image: zookeeper:latest
      #restart: always # restart automatically
      hostname: zook1
      container_name: zook1 # container name, so a meaningful name shows up in rancher
      ports:
      - 2183:2181 # map the container's default zookeeper port to the host
      volumes: # host path on the left, container path on the right
      - "/Users/konsy/Development/volume/zkcluster/zook1/data:/data"
      - "/Users/konsy/Development/volume/zkcluster/zook1/datalog:/datalog"
      - "/Users/konsy/Development/volume/zkcluster/zook1/logs:/logs"
      - "/root/kafka/kafka-sasl/:/opt/zookeeper/secrets/" # mount the JAAS credentials file
      environment:
          ZOO_MY_ID: 1 # the zookeeper node id (myid)
          ZOO_SERVERS: server.1=zook1:2888:3888;2181 server.2=zook2:2888:3888;2181 server.3=zook3:2888:3888;2181
          ZOO_TLS_QUORUM_CLIENT_AUTH: need
          SERVER_JVMFLAGS: -Djava.security.auth.login.config=/opt/zookeeper/secrets/server_jass.conf # path to the JAAS file
      networks:
          kafka-net:
              ipv4_address: 172.20.0.11
    zook2:
      image: zookeeper:latest
      #restart: always # restart automatically
      hostname: zook2
      container_name: zook2
      ports:
      - 2184:2181
      volumes:
      - "/Users/konsy/Development/volume/zkcluster/zook2/data:/data"
      - "/Users/konsy/Development/volume/zkcluster/zook2/datalog:/datalog"
      - "/Users/konsy/Development/volume/zkcluster/zook2/logs:/logs"
      - "/root/kafka/kafka-sasl/:/opt/zookeeper/secrets/"
      environment:
          ZOO_MY_ID: 2
          ZOO_SERVERS: server.1=zook1:2888:3888;2181 server.2=zook2:2888:3888;2181 server.3=zook3:2888:3888;2181
          ZOO_TLS_QUORUM_CLIENT_AUTH: need
          SERVER_JVMFLAGS: -Djava.security.auth.login.config=/opt/zookeeper/secrets/server_jass.conf
      networks:
          kafka-net:
              ipv4_address: 172.20.0.12
    zook3:
      image: zookeeper:latest
      #restart: always # restart automatically
      hostname: zook3
      container_name: zook3
      ports:
      - 2185:2181
      volumes:
      - "/Users/konsy/Development/volume/zkcluster/zook3/data:/data"
      - "/Users/konsy/Development/volume/zkcluster/zook3/datalog:/datalog"
      - "/Users/konsy/Development/volume/zkcluster/zook3/logs:/logs"
      - "/root/kafka/kafka-sasl/:/opt/zookeeper/secrets/"
      environment:
          ZOO_MY_ID: 3
          ZOO_SERVERS: server.1=zook1:2888:3888;2181 server.2=zook2:2888:3888;2181 server.3=zook3:2888:3888;2181
          ZOO_TLS_QUORUM_CLIENT_AUTH: need
          SERVER_JVMFLAGS: -Djava.security.auth.login.config=/opt/zookeeper/secrets/server_jass.conf
      networks:
          kafka-net:
              ipv4_address: 172.20.0.13
  networks:
    kafka-net:
      external: true

  • Deploy ZooKeeper to Docker:
  docker compose -p zookeeper -f ./zk-docker-compose.yml up -d

  • Create the Kafka cluster compose file: kafka-docker-compose.yml
  services:
    kafka1:
      image: docker.io/wurstmeister/kafka
      #restart: always # restart automatically
      hostname: 172.20.0.14
      container_name: kafka1
      ports:
        - 9093:9093
      volumes:
        - /Users/konsy/Development/volume/kafka/kafka1/wurstmeister/kafka:/wurstmeister/kafka
        - /Users/konsy/Development/volume/kafka/kafka1/kafka:/kafka
        - /root/kafka/kafka-sasl/:/opt/kafka/secrets/
        - /root/kafka/kafka-run-class.sh:/opt/kafka_2.13-2.8.1/bin/kafka-run-class.sh
      environment:
        KAFKA_BROKER_ID: 1
        KAFKA_LISTENERS: PLAINTEXT://:9093
        KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.198.131:9093 # the Docker host's IP; clients connect here
        KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
        KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
        KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
        KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.auth.SimpleAclAuthorizer
        KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
        KAFKA_ZOOKEEPER_CONNECT: zook1:2181,zook2:2181,zook3:2181
        ALLOW_PLAINTEXT_LISTENER: 'yes'
        JMX_PORT: 9999 # expose the JMX port so the cluster can be monitored
        KAFKA_OPTS: -Djava.security.auth.login.config=/opt/kafka/secrets/server_jass.conf
      external_links:
        - zook1
        - zook2
        - zook3
      networks:
        kafka-net:
          ipv4_address: 172.20.0.14
    kafka2:
      image: docker.io/wurstmeister/kafka
      #restart: always # restart automatically
      hostname: 172.20.0.15
      container_name: kafka2
      ports:
        - 9094:9094
      volumes: # each broker gets its own host directories
        - /Users/konsy/Development/volume/kafka/kafka2/wurstmeister/kafka:/wurstmeister/kafka
        - /Users/konsy/Development/volume/kafka/kafka2/kafka:/kafka
        - /root/kafka/kafka-sasl/:/opt/kafka/secrets/
        - /root/kafka/kafka-run-class.sh:/opt/kafka_2.13-2.8.1/bin/kafka-run-class.sh
      environment:
        KAFKA_BROKER_ID: 2
        KAFKA_LISTENERS: PLAINTEXT://:9094
        KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.198.131:9094
        KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
        KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
        KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
        KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.auth.SimpleAclAuthorizer
        KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
        KAFKA_ZOOKEEPER_CONNECT: zook1:2181,zook2:2181,zook3:2181
        ALLOW_PLAINTEXT_LISTENER: 'yes'
        JMX_PORT: 9999
        KAFKA_OPTS: -Djava.security.auth.login.config=/opt/kafka/secrets/server_jass.conf
      external_links:
        - zook1
        - zook2
        - zook3
      networks:
        kafka-net:
          ipv4_address: 172.20.0.15
    kafka3:
      image: docker.io/wurstmeister/kafka
      #restart: always # restart automatically
      hostname: 172.20.0.16
      container_name: kafka3
      ports:
        - 9095:9095
      volumes:
        - /Users/konsy/Development/volume/kafka/kafka3/wurstmeister/kafka:/wurstmeister/kafka
        - /Users/konsy/Development/volume/kafka/kafka3/kafka:/kafka
        - /root/kafka/kafka-sasl/:/opt/kafka/secrets/
        - /root/kafka/kafka-run-class.sh:/opt/kafka_2.13-2.8.1/bin/kafka-run-class.sh
      environment:
        KAFKA_BROKER_ID: 3
        KAFKA_LISTENERS: PLAINTEXT://:9095
        KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.198.131:9095
        KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
        KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
        KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
        KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.auth.SimpleAclAuthorizer
        KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
        KAFKA_ZOOKEEPER_CONNECT: zook1:2181,zook2:2181,zook3:2181
        ALLOW_PLAINTEXT_LISTENER: 'yes'
        JMX_PORT: 9999
        KAFKA_OPTS: -Djava.security.auth.login.config=/opt/kafka/secrets/server_jass.conf
      external_links:
        - zook1
        - zook2
        - zook3
      networks:
        kafka-net:
          ipv4_address: 172.20.0.16
  networks:
    kafka-net:
      external: true

  • Deploy Kafka to Docker:
  docker compose -f ./kafka-docker-compose.yml up -d
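Clients outside Docker connect through the advertised listeners rather than the container IPs, so a client's bootstrap list is built from the advertised host (192.168.198.131 in this compose file; substitute your own host's address) plus the three mapped ports:

```python
# Advertised host from the compose file above; replace with your host's IP.
ADVERTISED_HOST = "192.168.198.131"

# broker id -> advertised port, matching KAFKA_ADVERTISED_LISTENERS above.
BROKER_PORTS = {1: 9093, 2: 9094, 3: 9095}

bootstrap_servers = ",".join(
    f"{ADVERTISED_HOST}:{port}" for port in sorted(BROKER_PORTS.values())
)
print(bootstrap_servers)
# -> 192.168.198.131:9093,192.168.198.131:9094,192.168.198.131:9095
```

This is the value you would hand to a client's bootstrap.servers setting (for example, kafka-console-producer.sh --bootstrap-server ...).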

  • Create the kafka-manager compose file: kafka-manager-docker-compose.yml
  services:
    kafka-manager:
      image: scjtqs/kafka-manager:latest
      restart: always
      hostname: kafka-manager
      container_name: kafka-manager
      ports:
        - 9000:9000
      external_links: # link containers defined outside this compose file
        - zook1
        - zook2
        - zook3
        - kafka1
        - kafka2
        - kafka3
      environment:
        ZK_HOSTS: zook1:2181,zook2:2181,zook3:2181
        KAFKA_BROKERS: 172.20.0.14:9093,172.20.0.15:9094,172.20.0.16:9095
        APPLICATION_SECRET: letmein
        KM_ARGS: -Djava.net.preferIPv4Stack=true
      networks:
        kafka-net:
          ipv4_address: 172.20.0.10
  networks:
    kafka-net:
      external: true

  • Deploy kafka-manager to Docker:
  docker compose -f ./kafka-manager-docker-compose.yml up -d

  • Open the kafka-manager web UI on port 9000 and configure the cluster.


Originally posted by 惊落一身雪 on qidao123.com.