Preface
Due to a server requirement, I needed to install the Kafka message queue, which I had never installed before. This post records the installation.
I. What is Kafka?
Kafka is an open-source distributed stream-processing platform, originally developed at LinkedIn and open-sourced in 2011. It is primarily used to build real-time data pipelines and streaming applications. Kafka is designed for high throughput, low latency, and high reliability, making it well suited to processing large volumes of data in real time.
II. Installation steps
1. Extract the archive into the target directory
The commands are as follows (example):
```
root@hecs-349024:~# cd /usr/local/
root@hecs-349024:/usr/local# mkdir kafka
root@hecs-349024:/usr/local# cd kafka/
root@hecs-349024:/usr/local/kafka# ll
total 8
drwxr-xr-x  2 root root 4096 Jun 21 09:55 ./
drwxr-xr-x 15 root root 4096 Jun 21 09:55 ../
root@hecs-349024:/usr/local/kafka# tar -zxvf /root/kafka_2.13-3.7.0.tgz -C ./
```
2. Use a custom log directory
The commands are as follows (example):
```
drwxr-xr-x 8 root root  4096 Jun 21 09:59 ./
drwxr-xr-x 3 root root  4096 Jun 21 09:56 ../
drwxr-xr-x 3 root root  4096 Feb  9 21:34 bin/
drwxr-xr-x 3 root root  4096 Feb  9 21:34 config/
drwxr-xr-x 2 root root 12288 Jun 21 09:56 libs/
-rw-r--r-- 1 root root 15125 Feb  9 21:25 LICENSE
drwxr-xr-x 2 root root  4096 Feb  9 21:34 licenses/
drwxr-xr-x 2 root root  4096 Jun 21 09:59 logs/
-rw-r--r-- 1 root root 28359 Feb  9 21:25 NOTICE
drwxr-xr-x 2 root root  4096 Feb  9 21:34 site-docs/
root@hecs-349024:/usr/local/kafka/kafka_2.13-3.7.0# cd logs/
root@hecs-349024:/usr/local/kafka/kafka_2.13-3.7.0/logs# ll
total 8
drwxr-xr-x 2 root root 4096 Jun 21 09:59 ./
drwxr-xr-x 8 root root 4096 Jun 21 09:59 ../
root@hecs-349024:/usr/local/kafka/kafka_2.13-3.7.0/logs# pwd
/usr/local/kafka/kafka_2.13-3.7.0/logs
```
Make a note of this log directory path; mine is /usr/local/kafka/kafka_2.13-3.7.0/logs.
3. Modify the default configuration files
Older tutorials online set Kafka up with a ZooKeeper dependency, so I looked into whether that is still necessary:
Starting with Apache Kafka 2.8.0, ZooKeeper is no longer required as the metadata store and management system. The Kafka community introduced a Raft-based protocol (Kafka Raft Metadata Quorum, or KRaft) that can replace ZooKeeper for managing cluster metadata.
In a KRaft setup, the Kafka cluster manages its own metadata directly. This architecture simplifies deployment and makes Kafka easier to scale and operate. In 2.8.0, however, KRaft was still a preview feature and not recommended for production; it matured over subsequent releases and was marked production-ready in Kafka 3.3.
In short: with Kafka 2.8.0 or later running in KRaft mode, Kafka no longer depends on ZooKeeper. Before 2.8.0, or without KRaft enabled, ZooKeeper is still needed to coordinate and manage cluster metadata.
Let's go into our installation folder and take a look; sure enough, there is a kraft directory under config:
```
root@hecs-349024:~# cd /usr/local/kafka/kafka_2.13-3.7.0/
root@hecs-349024:/usr/local/kafka/kafka_2.13-3.7.0# ll
total 120
drwxr-xr-x 8 root root  4096 Jun 21 10:07 ./
drwxr-xr-x 3 root root  4096 Jun 21 09:56 ../
drwxr-xr-x 3 root root  4096 Feb  9 21:34 bin/
drwxr-xr-x 3 root root  4096 Feb  9 21:34 config/
-rw-r--r-- 1 root root 35550 Jun 21 16:07 kafka.log
drwxr-xr-x 2 root root 12288 Jun 21 09:56 libs/
-rw-r--r-- 1 root root 15125 Feb  9 21:25 LICENSE
drwxr-xr-x 2 root root  4096 Feb  9 21:34 licenses/
drwxr-xr-x 3 root root  4096 Jun 21 17:06 logs/
-rw-r--r-- 1 root root 28359 Feb  9 21:25 NOTICE
drwxr-xr-x 2 root root  4096 Feb  9 21:34 site-docs/
root@hecs-349024:/usr/local/kafka/kafka_2.13-3.7.0# cd config/
root@hecs-349024:/usr/local/kafka/kafka_2.13-3.7.0/config# ll
total 84
drwxr-xr-x 3 root root 4096 Feb  9 21:34 ./
drwxr-xr-x 8 root root 4096 Jun 21 10:07 ../
-rw-r--r-- 1 root root  906 Feb  9 21:25 connect-console-sink.properties
-rw-r--r-- 1 root root  909 Feb  9 21:25 connect-console-source.properties
-rw-r--r-- 1 root root 5475 Feb  9 21:25 connect-distributed.properties
-rw-r--r-- 1 root root  883 Feb  9 21:25 connect-file-sink.properties
-rw-r--r-- 1 root root  881 Feb  9 21:25 connect-file-source.properties
-rw-r--r-- 1 root root 2063 Feb  9 21:25 connect-log4j.properties
-rw-r--r-- 1 root root 2540 Feb  9 21:25 connect-mirror-maker.properties
-rw-r--r-- 1 root root 2262 Feb  9 21:25 connect-standalone.properties
-rw-r--r-- 1 root root 1221 Feb  9 21:25 consumer.properties
drwxr-xr-x 2 root root 4096 Jun 21 10:03 kraft/
-rw-r--r-- 1 root root 4917 Feb  9 21:25 log4j.properties
-rw-r--r-- 1 root root 2065 Feb  9 21:25 producer.properties
-rw-r--r-- 1 root root 6896 Feb  9 21:25 server.properties
-rw-r--r-- 1 root root 1094 Feb  9 21:25 tools-log4j.properties
-rw-r--r-- 1 root root 1169 Feb  9 21:25 trogdor.conf
-rw-r--r-- 1 root root 1205 Feb  9 21:25 zookeeper.properties
root@hecs-349024:/usr/local/kafka/kafka_2.13-3.7.0/config# cd kraft/
root@hecs-349024:/usr/local/kafka/kafka_2.13-3.7.0/config/kraft# ll
total 32
drwxr-xr-x 2 root root 4096 Jun 21 10:03 ./
drwxr-xr-x 3 root root 4096 Feb  9 21:34 ../
-rw-r--r-- 1 root root 6111 Jun 21 10:02 broker.properties
-rw-r--r-- 1 root root 5736 Jun 21 10:03 controller.properties
-rw-r--r-- 1 root root 6313 Jun 21 10:01 server.properties
root@hecs-349024:/usr/local/kafka/kafka_2.13-3.7.0/config/kraft# pwd
/usr/local/kafka/kafka_2.13-3.7.0/config/kraft
```
As you can see, the config directory contains a kraft directory, so we only need to configure the three files inside it (broker.properties, controller.properties, server.properties).

In the broker.properties and server.properties files we change two settings: the log location, which should point to the custom log directory we just created, and the listener address, where localhost can be replaced with the IP address we need.
In controller.properties we only need to set the log directory; the address does not need changing.
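As a sketch, the edits look roughly like the fragment below. The IP 192.168.1.100 is a placeholder for your server's address, and the exact listener lines may differ slightly between broker.properties and server.properties, so compare against the defaults shipped in your config/kraft/ files:

```properties
# config/kraft/server.properties (and analogously broker.properties):
# 1) point log.dirs at the custom directory created above
log.dirs=/usr/local/kafka/kafka_2.13-3.7.0/logs
# 2) replace localhost with the IP you need clients to reach
advertised.listeners=PLAINTEXT://192.168.1.100:9092

# config/kraft/controller.properties: only the log directory needs changing
log.dirs=/usr/local/kafka/kafka_2.13-3.7.0/logs
```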
4. Finish the pre-start setup
Generate a KAFKA_CLUSTER_ID and persist it to the storage directory.
KAFKA_CLUSTER_ID is the unique identifier of a Kafka cluster. Each cluster has exactly one; it is generated when the cluster's storage is first formatted and stays the same for the cluster's entire lifetime.
```
root@hecs-349024:/usr/local/kafka/kafka_2.13-3.7.0# KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
root@hecs-349024:/usr/local/kafka/kafka_2.13-3.7.0# bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties
metaPropertiesEnsemble=MetaPropertiesEnsemble(metadataLogDir=Optional.empty, dirs={/usr/local/kafka/kafka_2.13-3.7.0/logs: EMPTY})
Formatting /usr/local/kafka/kafka_2.13-3.7.0/logs with metadata.version 3.7-IV4.
root@hecs-349024:/usr/local/kafka/kafka_2.13-3.7.0# echo $KAFKA_CLUSTER_ID
dFAmI9AKQVCd9ZD
```
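As an optional sanity check (not part of the original steps), the format command writes a meta.properties file into the log directory recording the cluster id. The sketch below simulates that file in a temporary directory; on the server you would simply cat the real one:

```shell
# Simulate the meta.properties that "kafka-storage.sh format" writes into log.dirs.
# On the server: cat /usr/local/kafka/kafka_2.13-3.7.0/logs/meta.properties
dir=$(mktemp -d)
printf 'version=1\ncluster.id=dFAmI9AKQVCd9ZD\nnode.id=1\n' > "$dir/meta.properties"
# Extract the cluster id recorded by the format step.
cluster_id=$(grep '^cluster.id=' "$dir/meta.properties" | cut -d= -f2)
echo "formatted with cluster.id=$cluster_id"
rm -rf "$dir"
```

If this file is missing or the id does not match echo $KAFKA_CLUSTER_ID, the format step did not run against the directory you configured in log.dirs.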
5. Start the service and check the logs
Note that we must pass the configuration file when starting. Remember we are using the KRaft setup, so double-check that the path you specify is the one under config/kraft/.
```
root@hecs-349024:/usr/local/kafka/kafka_2.13-3.7.0# nohup bin/kafka-server-start.sh config/kraft/server.properties > kafka.log 2>&1 &
[1] 3508
root@hecs-349024:/usr/local/kafka/kafka_2.13-3.7.0# jps
3923 Jps
3508 Kafka
16837 jar
root@hecs-349024:/usr/local/kafka/kafka_2.13-3.7.0# ll
total 120
drwxr-xr-x 8 root root  4096 Jun 21 10:07 ./
drwxr-xr-x 3 root root  4096 Jun 21 09:56 ../
drwxr-xr-x 3 root root  4096 Feb  9 21:34 bin/
drwxr-xr-x 3 root root  4096 Feb  9 21:34 config/
-rw-r--r-- 1 root root 35944 Jun 21 17:07 kafka.log
drwxr-xr-x 2 root root 12288 Jun 21 09:56 libs/
-rw-r--r-- 1 root root 15125 Feb  9 21:25 LICENSE
drwxr-xr-x 2 root root  4096 Feb  9 21:34 licenses/
drwxr-xr-x 3 root root  4096 Jun 21 17:22 logs/
-rw-r--r-- 1 root root 28359 Feb  9 21:25 NOTICE
drwxr-xr-x 2 root root  4096 Feb  9 21:34 site-docs/
```
You can see the files have been generated. Finally, check the log; if it shows no errors, the installation, deployment, and startup all succeeded:
```
[2024-06-21 10:07:09,460] INFO [BrokerLifecycleManager id=1] Successfully registered broker 1 with broker epoch 8 (kafka.server.BrokerLifecycleManager)
[2024-06-21 10:07:09,461] INFO [BrokerServer id=1] Waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-06-21 10:07:09,462] INFO [BrokerLifecycleManager id=1] The broker is in RECOVERY. (kafka.server.BrokerLifecycleManager)
[2024-06-21 10:07:09,525] INFO [BrokerLifecycleManager id=1] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)
[2024-06-21 10:07:09,526] INFO [BrokerServer id=1] Finished waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-06-21 10:07:09,527] INFO authorizerStart completed for endpoint PLAINTEXT. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-06-21 10:07:09,527] INFO [SocketServer listenerType=BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
```
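That log check can also be scripted. Here is a minimal sketch that greps for the RECOVERY-to-RUNNING transition; it is simulated below against a sample line written to a temp file, while on the server you would run the grep against the real kafka.log:

```shell
# Simulate kafka.log with one of the sample lines above, then grep for the
# transition that indicates the broker is healthy.
log=$(mktemp)
echo '[2024-06-21 10:07:09,525] INFO [BrokerLifecycleManager id=1] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)' > "$log"
status="not running"
grep -q 'Transitioning from RECOVERY to RUNNING' "$log" && status="RUNNING"
echo "broker status: $status"
rm -f "$log"
```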
Summary
That is my personal record of installing Kafka. I hope it helps!