ZooKeeper Deployment Modes
- Standalone mode: ZooKeeper runs on a single server; suitable for test environments.
- Pseudo-cluster mode: multiple ZooKeeper instances run on a single physical machine.
- Cluster mode: ZooKeeper runs on a cluster of machines; suitable for production. This group of machines is called an "ensemble".
1. Standalone Mode
1.1 Installation
- # Download the package
- # If the server has no Internet access, download it manually and upload it; official site: https://zookeeper.apache.org/releases.html
- [root@S-CentOS app]# curl -O https://dlcdn.apache.org/zookeeper/zookeeper-3.9.3/apache-zookeeper-3.9.3-bin.tar.gz
- # Extract
- # If the command is missing, install it: yum install -y tar
- [root@S-CentOS app]# tar -zxvf apache-zookeeper-3.9.3-bin.tar.gz -C /app/
- # Create directories
- [root@S-CentOS app]# cd apache-zookeeper-3.9.3-bin && mkdir data logs
- # Adjust the configuration
- [root@S-CentOS apache-zookeeper-3.9.3-bin]# sed 's|/tmp/zookeeper|/app/apache-zookeeper-3.9.3-bin/data\ndataLogDir=/app/apache-zookeeper-3.9.3-bin/logs|g' conf/zoo_sample.cfg > conf/zoo.cfg
Note: from ZooKeeper 3.6 onwards, the admin server occupies port 8080 by default. The admin server is a management interface provided by ZooKeeper for administrative tasks such as inspecting cluster status and configuration parameters. It is served through an embedded Jetty server, which listens on port 8080 by default, so if the admin server is enabled and its default port has not been changed, ZooKeeper will take port 8080.
- echo "admin.serverPort=2180" >> /app/apache-zookeeper-3.9.3-bin/conf/zoo.cfg
1.2 Start the ZK Server
- # Start ZK
- [root@S-CentOS apache-zookeeper-3.9.3-bin]# bin/zkServer.sh start
- ZooKeeper JMX enabled by default
- Using config: /app/apache-zookeeper-3.9.3-bin/bin/../conf/zoo.cfg
- Starting zookeeper ... STARTED
- # Check the ZK process
- [root@S-CentOS apache-zookeeper-3.9.3-bin]# jps -l
- 143856 org.apache.zookeeper.server.quorum.QuorumPeerMain
- 143900 sun.tools.jps.Jps
- # Check ZK status
- [root@S-CentOS apache-zookeeper-3.9.3-bin]# bin/zkServer.sh status
- ZooKeeper JMX enabled by default
- Using config: /app/apache-zookeeper-3.9.3-bin/bin/../conf/zoo.cfg
- Client port found: 2181. Client address: localhost.
- Mode: standalone
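- # (Optional) a generic check, not ZooKeeper-specific: confirm the ports are listening
- # 2181 is the client port; 2180 is the AdminServer port set earlier (ss ships with the iproute package)
- [root@S-CentOS apache-zookeeper-3.9.3-bin]# ss -lntp | grep -E '2181|2180'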
- # Stop ZK
- [root@S-CentOS apache-zookeeper-3.9.3-bin]# bin/zkServer.sh stop
- ZooKeeper JMX enabled by default
- Using config: /app/apache-zookeeper-3.9.3-bin/bin/../conf/zoo.cfg
- Stopping zookeeper ... STOPPED
To run the server in the foreground so you can watch its output:
- [root@S-CentOS apache-zookeeper-3.9.3-bin]# bin/zkServer.sh start-foreground
If ZK fails to start:
- [root@S-CentOS apache-zookeeper-3.9.3-bin]# bin/zkServer.sh start
- ZooKeeper JMX enabled by default
- Using config: /app/apache-zookeeper-3.9.3-bin/bin/../conf/zoo.cfg
- Starting zookeeper ... FAILED TO START
Check the logs:
- [root@S-CentOS apache-zookeeper-3.9.3-bin]# tail -f logs/zookeeper-root-server-rlkj-gw-ecsb-04.out
- 2022-02-10 18:59:55,422 [myid:] - INFO [main:QuorumPeerConfig@135] - Reading configuration from: /app/apache-zookeeper-3.9.3-bin/bin/../conf/zoo.cfg
- 2022-02-10 18:59:55,624 [myid:] - ERROR [main:ZooKeeperServerMain@79] - Unable to start AdminServer, exiting abnormally
- org.apache.zookeeper.server.admin.AdminServer$AdminServerException: Problem starting AdminServer on address 0.0.0.0, port 8080 and command URL /commands
- at org.apache.zookeeper.server.admin.JettyAdminServer.start(JettyAdminServer.java:107)
- at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:138)
- at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:106)
- at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:64)
- at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:128)
- at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)
- Caused by: java.io.IOException: Failed to bind to /0.0.0.0:8080
- at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:346)
- at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
- at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
- at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:231)
- at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
- at org.eclipse.jetty.server.Server.doStart(Server.java:385)
- at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
- at org.apache.zookeeper.server.admin.JettyAdminServer.start(JettyAdminServer.java:103)
- ... 5 more
- Caused by: java.net.BindException: Address already in use
- at sun.nio.ch.Net.bind0(Native Method)
- at sun.nio.ch.Net.bind(Net.java:438)
- at sun.nio.ch.Net.bind(Net.java:430)
- at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:225)
- at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
- at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
- ... 12 more
- Unable to start AdminServer, exiting abnormally
Analysis: from version 3.5.5 onwards, ZooKeeper ships an embedded Jetty container that runs the AdminServer, which occupies port 8080 by default. The AdminServer is mainly used to inspect ZooKeeper's status. If another program on the machine (for example Tomcat) is already using port 8080, ZooKeeper will likewise fail with "Starting zookeeper … FAILED TO START".
Solutions (a minimal sketch follows below):
① Edit zoo.cfg to disable the AdminServer
② Edit zoo.cfg to change its port number
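Either fix is a one-line addition to zoo.cfg; a minimal sketch of the two options, assuming the current directory is the ZooKeeper install directory (pick one):
- # Option 1: disable the embedded AdminServer entirely
- echo "admin.enableServer=false" >> conf/zoo.cfg
- # Option 2: keep it, but move it off port 8080 (2180 is an arbitrary free port)
- echo "admin.serverPort=2180" >> conf/zoo.cfg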
1.3 The ZooKeeper Client
- # Start the client
- [root@S-CentOS apache-zookeeper-3.9.3-bin]# bin/zkCli.sh
- Connecting to localhost:2181
- 2022-04-01 14:03:02,279 [myid:] - INFO [main:Environment@109] - Client environment:zookeeper.version=3.5.7-...
- 2022-04-01 14:03:02,282 [myid:] - INFO [main:Environment@109] - Client environment:host.name=rlkj-gw-ecsb-04
- 2022-04-01 14:03:02,282 [myid:] - INFO [main:Environment@109] - Client environment:java.version=1.8.0_91
- 2022-04-01 14:03:02,284 [myid:] - INFO [main:Environment@109] - Client environment:java.vendor=Oracle Corporation
- 2022-04-01 14:03:02,284 [myid:] - INFO [main:Environment@109] - Client environment:java.home=/app/jdk1.8.0_91/jre
- 2022-04-01 14:03:02,284 [myid:] - INFO [main:Environment@109] - Client environment:java.class.path=/app/zookeeper...
- 2022-04-01 14:03:02,284 [myid:] - INFO [main:Environment@109] - Client environment:java.library.path=/usr/...
- 2022-04-01 14:03:02,284 [myid:] - INFO [main:Environment@109] - Client environment:java.io.tmpdir=/tmp
- 2022-04-01 14:03:02,284 [myid:] - INFO [main:Environment@109] - Client environment:java.compiler=<NA>
- 2022-04-01 14:03:02,284 [myid:] - INFO [main:Environment@109] - Client environment:os.name=Linux
- 2022-04-01 14:03:02,284 [myid:] - INFO [main:Environment@109] - Client environment:os.arch=amd64
- 2022-04-01 14:03:02,284 [myid:] - INFO [main:Environment@109] - Client environment:os.version=4.4.186-1.el7.elrepo.x86_64
- 2022-04-01 14:03:02,284 [myid:] - INFO [main:Environment@109] - Client environment:user.name=appuser
- 2022-04-01 14:03:02,284 [myid:] - INFO [main:Environment@109] - Client environment:user.home=/home/appuser
- 2022-04-01 14:03:02,285 [myid:] - INFO [main:Environment@109] - Client environment:user.dir=/app/zookeeper-3.5.7
- 2022-04-01 14:03:02,285 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.free=235MB
- 2022-04-01 14:03:02,286 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.max=241MB
- 2022-04-01 14:03:02,286 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.total=241MB
- 2022-04-01 14:03:02,289 [myid:] - INFO [main:ZooKeeper@868] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@3f8f9dd6
- 2022-04-01 14:03:02,294 [myid:] - INFO [main:X509Util@79] - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
- 2022-04-01 14:03:02,300 [myid:] - INFO [main:ClientCnxnSocket@237] - jute.maxbuffer value is 4194304 Bytes
- 2022-04-01 14:03:02,308 [myid:] - INFO [main:ClientCnxn@1653] - zookeeper.request.timeout value is 0. feature enabled=
- Welcome to ZooKeeper!
- 2022-04-01 14:03:02,314 [myid:localhost:2181] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1112] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
- JLine support is enabled
- 2022-04-01 14:03:02,365 [myid:localhost:2181] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@959] - Socket connection established, initiating session, client: /127.0.0.1:48086, server: localhost/127.0.0.1:2181
- [zk: localhost:2181(CONNECTING) 0] 2022-04-01 14:03:02,412 [myid:localhost:2181] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1394] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x104616124490000, negotiated timeout = 30000
- WATCHER::
- WatchedEvent state:SyncConnected type:None path:null
- # Quit the client
- [zk: localhost:2181(CONNECTED) 5] quit
- WATCHER::
- WatchedEvent state:Closed type:None path:null
- 2022-04-01 14:43:43,222 [myid:] - INFO [main:ZooKeeper@1422] - Session: 0x104616124490000 closed
- 2022-04-01 14:43:43,222 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@524] - EventThread shut down for session: 0x104616124490000
(1) Log messages:
[main:Environment@109]: reports the various environment variables and the JARs the client is using.
[main:ZooKeeper@868] - Initiating client connection: the message itself says what is happening, and the extra details show that the client is trying to connect to one of the servers in the connect string it was given, localhost/127.0.0.1:2181.
[myid:localhost:2181] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1394]: confirms that the client has established a TCP connection to the local ZooKeeper server. The following log lines confirm that the session was established and report the session ID: 0x104616124490000. Finally, the client library notifies the application through a SyncConnected event; the application needs to implement a Watcher object to handle this event.
(2) The session creation flow:
① The client program starts and initiates a session.
② The client attempts to connect to localhost/127.0.0.1:2181.
③ The connection succeeds and the server starts initializing the new session.
④ Session initialization completes successfully.
⑤ The server sends a SyncConnected event to the client.
1.4 ZK Nodes
- [zk: localhost:2181(CONNECTED) 0] ls /
- [zookeeper]
At this point the znode tree is empty except for the /zookeeper node, under which ZooKeeper keeps the metadata tree that the service itself needs.
(1) Create a node
- [zk: localhost:2181(CONNECTED) 1] create /workers ""
- Created /workers
- [zk: localhost:2181(CONNECTED) 2] ls /
- [workers, zookeeper]
(2) Delete a node
- [zk: localhost:2181(CONNECTED) 3] delete /workers
- [zk: localhost:2181(CONNECTED) 4] ls /
- [zookeeper]
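A few more everyday zkCli operations, shown for illustration only (the znode names and data below are made up):
- # Create a node with data, add a child, read it back, update it, and inspect its metadata
- [zk: localhost:2181(CONNECTED) 5] create /workers "init"
- [zk: localhost:2181(CONNECTED) 6] create /workers/worker1 "192.168.10.11"
- [zk: localhost:2181(CONNECTED) 7] get /workers/worker1
- [zk: localhost:2181(CONNECTED) 8] set /workers/worker1 "192.168.10.12"
- [zk: localhost:2181(CONNECTED) 9] stat /workers/worker1
- # delete only works on childless nodes; deleteall removes a whole subtree
- [zk: localhost:2181(CONNECTED) 10] deleteall /workers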
2. Pseudo-Cluster Mode
2.1 Create Data and Log Directories for the Three Servers
Using zoo1 as the example; repeat the same steps for the other two servers.
- [root@S-CentOS zookeeper-3.5.7]# mkdir -p pseudo/zoo1
- [root@S-CentOS zookeeper-3.5.7]# cd pseudo/zoo1
- [root@S-CentOS zoo1]# mkdir data logs conf
2.2 Configure the Server ID
For the other servers, write 2 and 3 respectively (a loop that covers all three instances is sketched at the end of this subsection).
- [root@S-CentOS zoo1]# cd data
- [root@S-CentOS data]# echo 1 > myid
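The same layout for all three instances can also be created in a single loop; a convenience sketch, assuming the paths used above:
- # Create data/logs/conf for zoo1..zoo3 and write each instance's ID into its myid file
- [root@S-CentOS zookeeper-3.5.7]# for i in 1 2 3; do mkdir -p pseudo/zoo$i/{data,logs,conf}; echo $i > pseudo/zoo$i/data/myid; done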
2.3 Edit the Configuration File
- [root@S-CentOS zookeeper-3.5.7]# cp conf/zoo_sample.cfg pseudo/zoo1/conf/zoo1.cfg
- [root@S-CentOS zookeeper-3.5.7]# vim pseudo/zoo1/conf/zoo1.cfg
zoo1.cfg:
dataDir=/app/zookeeper-3.5.7/pseudo/zoo1/data
dataLogDir=/app/zookeeper-3.5.7/pseudo/zoo1/logs
clientPort=2181
server.1=localhost:2881:3881
server.2=localhost:2882:3882
server.3=localhost:2883:3883
Note: there must be no trailing space after server.1=localhost:2881:3881 (or any of the server.N lines), otherwise startup will fail with an error.
Configuration parameters explained
server.A=B:C:D
A is a number identifying which server this is.
In cluster mode, each server has a myid file in its dataDir containing a single value: A. On startup, ZooKeeper reads this file and compares the value with the entries in zoo.cfg to determine which server it is.
B is the address or hostname of the server.
C is the TCP port used for quorum communication, i.e. the port over which this server, as a Follower, exchanges information with the cluster Leader.
D is the TCP port used for leader election: if the cluster Leader goes down, a new election is needed, and this is the port the servers use to talk to each other during that election.
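The configurations for zoo2 and zoo3 differ from zoo1 only in clientPort and the instance directory, so they can be generated from zoo1.cfg with sed; this is a convenience sketch based on the layout above, and the resulting files should be double-checked before starting:
- [root@S-CentOS zookeeper-3.5.7]# for i in 2 3; do
-   # Bump the client port (2182/2183) and swap the zoo1 directory for zoo$i
-   sed -e "s/clientPort=2181/clientPort=218$i/" \
-       -e "s|/pseudo/zoo1/|/pseudo/zoo$i/|g" \
-       pseudo/zoo1/conf/zoo1.cfg > pseudo/zoo$i/conf/zoo$i.cfg
- done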
2.4 Start the ZooKeeper Instances
Start the first server node:
- [root@S-CentOS data]# cd /app/zookeeper-3.5.7/bin
- [root@S-CentOS bin]# ./zkServer.sh start /app/zookeeper-3.5.7/pseudo/zoo1/conf/zoo1.cfg
- ZooKeeper JMX enabled by default
- Using config: /app/zookeeper-3.5.7/pseudo/zoo1/conf/zoo1.cfg
- Starting zookeeper ... STARTED
Check the server's log:
… [myid:1] - INFO [QuorumPeer[myid=1]/…:2181:QuorumPeer@670] - LOOKING
… [myid:1] - INFO [QuorumPeer[myid=1]/…:2181:FastLeaderElection@740] - New election. My id = 1, proposed zxid=0x0
… [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 …, LOOKING (my state)
… [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@368] - Cannot open channel to 2 at election address /127.0.0.1:3334
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
This server keeps frantically trying to connect to the other servers and failing. If we then start a second server:
- [root@S-CentOS bin]# ./zkServer.sh start /app/zookeeper-3.5.7/pseudo/zoo2/conf/zoo2.cfg
- ZooKeeper JMX enabled by default
- Using config: /app/zookeeper-3.5.7/pseudo/zoo2/conf/zoo2.cfg
- Starting zookeeper ... STARTED
- # Check its role
- [root@S-CentOS bin]# ./zkServer.sh status /app/zookeeper-3.5.7/pseudo/zoo2/conf/zoo2.cfg
- Client port found: 2182. Client address: localhost.
- Mode: leader
This gives us a quorum. The second server's log (zookeeper.out) shows:
… [myid:2] - INFO [QuorumPeer[myid=2]/…:2182:Leader@345] - LEADING - LEADER ELECTION TOOK - 279
… [myid:2] - INFO [QuorumPeer[myid=2]/…:2182:FileTxnSnapLog@240] - Snapshotting: 0x0 to ./data/version-2/snapshot.0
This log line indicates that server 2 has been elected leader.
Server 1's log at the same time shows:
… [myid:1] - INFO [QuorumPeer[myid=1]/…:2181:QuorumPeer@738] - FOLLOWING
… [myid:1] - INFO [QuorumPeer[myid=1]/…:2181:ZooKeeperServer@162] - Created server …
… [myid:1] - INFO [QuorumPeer[myid=1]/…:2181:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 212
Server 1 has been activated as a follower of server 2.
We now have a valid quorum (two out of three servers), and from this point on the service is available.
We now need to configure clients to connect to the service; the connect string must list the host:port pairs of all the servers that make up the ensemble.
For this example the connect string is "127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183" (we include the third server even though we never start it, because doing so illustrates some useful properties of ZooKeeper).
Use zkCli.sh to access the ensemble:
- [root@S-CentOS bin]# ./zkCli.sh -server localhost:2181,localhost:2182,localhost:2183
After connecting to a server, we see a message of the following form:
[myid:localhost:2182] - INFO [main-SendThread(localhost:2182):ClientCnxn$SendThread@1394] - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2182, sessionid = 0x20461e0bb3a0000, negotiated timeout = 30000
Note the port number in the log message, 2182 in this case.
If we stop the client with Ctrl+C and restart it several times, we will see the port number flip back and forth between 2181 and 2182.
We may also notice failed attempts to connect to port 2183, followed by a successful connection to one of the running servers.
The client connects to the servers in the connect string in random order, which lets ZooKeeper provide a simple form of load balancing. However, a client cannot specify a preferred server to connect to. For example, if we had an ensemble of five ZooKeeper servers, three on the US west coast and two on the east coast, we could make sure clients only connect to nearby servers by listing only the east-coast servers in the connect strings of east-coast clients and only the west-coast servers in those of west-coast clients.
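For instance, an east-coast client in that scenario would simply use a connect string listing only the east-coast servers (the hostnames below are hypothetical):
- # Only east-coast ensemble members appear in this client's connect string
- ./zkCli.sh -server zk-east-1.example.com:2181,zk-east-2.example.com:2181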
2.5 Errors
- org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Address unresolved: 10.200.202.41:3882
This happens when there is a trailing space after the server.1=localhost:2881:3881 entry; remove the space and restart.
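Trailing whitespace is easy to miss by eye; before restarting, it can be located with standard tools (shown here for zoo1.cfg, adjust the path as needed):
- # Print any lines that end with one or more spaces
- grep -nE ' +$' /app/zookeeper-3.5.7/pseudo/zoo1/conf/zoo1.cfg
- # cat -A marks each line end with '$', so trailing whitespace becomes visible
- cat -A /app/zookeeper-3.5.7/pseudo/zoo1/conf/zoo1.cfg | grep '^server'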
3. Cluster Mode
Assume three servers with IPs 192.168.10.11, 192.168.10.12 and 192.168.10.13.
Deploy ZK:
- # Download the package
- # If the server has no Internet access, download it manually and upload it; official site: https://zookeeper.apache.org/releases.html
- [root@S-CentOS app]# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.5.7/apache-zookeeper-3.5.7-bin.tar.gz
- # Extract
- [root@S-CentOS app]# tar -zxvf apache-zookeeper-3.5.7-bin.tar.gz
- # Rename
- [root@S-CentOS app]# mv apache-zookeeper-3.5.7-bin zookeeper-3.5.7
- # Create directories
- [root@S-CentOS app]# cd zookeeper-3.5.7
- [root@S-CentOS zookeeper-3.5.7]# mkdir data logs
- # Adjust the configuration
- [root@S-CentOS zookeeper-3.5.7]# cd conf
- [root@S-CentOS conf]# cp zoo_sample.cfg zoo.cfg
- [root@S-CentOS conf]# vim zoo.cfg
The zoo.cfg file:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/app/zookeeper-3.5.7/data
dataLogDir=/app/zookeeper-3.5.7/logs
server.1=192.168.10.11:3188:3288
server.2=192.168.10.12:3188:3288
server.3=192.168.10.13:3188:3288
Copy ZooKeeper to the other two machines:
- scp -r /app/zookeeper-3.5.7 192.168.10.12:/app/
- scp -r /app/zookeeper-3.5.7 192.168.10.13:/app/
Create a myid file in the directory specified by dataDir on every node:
- # Each node gets a different ID: 192.168.10.11 -> 1, 192.168.10.12 -> 2, 192.168.10.13 -> 3
- echo 1 > /app/zookeeper-3.5.7/data/myid
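The other two machines differ only in the value written to myid; run the matching command on each host:
- # On 192.168.10.12
- echo 2 > /app/zookeeper-3.5.7/data/myid
- # On 192.168.10.13
- echo 3 > /app/zookeeper-3.5.7/data/myid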
Start the ZK service (start it on every node):
- # Start ZK
- [root@S-CentOS zookeeper-3.5.7]# bin/zkServer.sh start
- ZooKeeper JMX enabled by default
- Using config: /app/zookeeper-3.5.7/bin/../conf/zoo.cfg
- Starting zookeeper ... STARTED
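Once all three nodes are up, it is worth confirming their roles; this is an optional verification step (the exact leader/follower assignment varies between runs):
- # Run on every node: one should report Mode: leader, the other two Mode: follower
- [root@S-CentOS zookeeper-3.5.7]# bin/zkServer.sh status
- # Optionally connect a client that lists all three servers
- [root@S-CentOS zookeeper-3.5.7]# bin/zkCli.sh -server 192.168.10.11:2181,192.168.10.12:2181,192.168.10.13:2181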
4. ZK Configuration Parameters
(1) tickTime = 2000: heartbeat interval, in milliseconds
The interval at which heartbeats are exchanged between ZooKeeper servers, and between clients and servers; one heartbeat is sent every tickTime. The minimum session timeout is 2 * tickTime.
(2) initLimit = 10: leader/follower (LF) initial connection limit
The maximum number of heartbeats (in units of tickTime) allowed when a Follower first connects to the Leader.
During startup, a Follower synchronizes all of the latest data from the Leader and then determines the initial state from which it can serve requests.
The Leader allows the Follower initLimit * tickTime to finish this work.
(3) syncLimit = 5: leader/follower sync communication limit
If communication between the Leader and a Follower takes longer than syncLimit * tickTime, the Leader considers the Follower to be offline and removes it from the server list.
(4) dataDir: the directory where snapshot files are stored
The default points at a tmp directory, which Linux cleans up periodically, so the default should normally not be used.
By default the transaction logs are also stored here; it is recommended to set dataLogDir as well, because transaction-log write performance directly affects ZooKeeper performance.
(5) clientPort = 2181: the client connection port, i.e. the port exposed to clients; usually left unchanged.
(6) maxClientCnxns: the limit on the number of connections between a single client and a single server, enforced per IP; the default is 60.
Setting it to 0 means no limit.
(7) autopurge.snapRetainCount: the number of snapshot files to retain; the default is 3.
Used together with the next parameter.
(8) autopurge.purgeInterval: the purge interval, in hours.
Since 3.4.0, ZooKeeper can automatically purge old transaction logs and snapshot files.
Set it to an integer of 1 or greater; 0 disables automatic purging.
(9) globalOutstandingLimit: the maximum number of outstanding (queued) requests; the default is 1000.
At runtime, ZooKeeper still lets clients submit requests even when the server has no spare capacity to process them, which improves throughput; to keep the server from running out of memory, the size of this backlog is capped.
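Putting these optional parameters together with the required ones, a zoo.cfg for one of the nodes in section 3 might look like the following (the values beyond the earlier listing are illustrative, not required):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/app/zookeeper-3.5.7/data
dataLogDir=/app/zookeeper-3.5.7/logs
clientPort=2181
maxClientCnxns=60
autopurge.snapRetainCount=3
autopurge.purgeInterval=24
admin.serverPort=2180
server.1=192.168.10.11:3188:3288
server.2=192.168.10.12:3188:3288
server.3=192.168.10.13:3188:3288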
5. Scripts
(1) Pseudo-cluster
pseudoCluster.sh:
- #!/bin/bash
- case $1 in
- "start"){
- # Start the three ZooKeeper servers
- cd /app/zookeeper-3.5.7/bin
- ./zkServer.sh start ../pseudo/zoo1/conf/zoo1.cfg
- ./zkServer.sh start ../pseudo/zoo2/conf/zoo2.cfg
- ./zkServer.sh start ../pseudo/zoo3/conf/zoo3.cfg
- };;
- "stop"){
- # Stop the ZooKeeper cluster
- cd /app/zookeeper-3.5.7/bin
- ./zkServer.sh stop ../pseudo/zoo1/conf/zoo1.cfg
- ./zkServer.sh stop ../pseudo/zoo2/conf/zoo2.cfg
- ./zkServer.sh stop ../pseudo/zoo3/conf/zoo3.cfg
- # Clean up leftover data, then restore each instance's myid
- cd ../pseudo
- rm -rf zoo1/data/*
- rm -rf zoo1/logs/*
- echo 1 > zoo1/data/myid
- rm -rf zoo2/data/*
- rm -rf zoo2/logs/*
- echo 2 > zoo2/data/myid
- rm -rf zoo3/data/*
- rm -rf zoo3/logs/*
- echo 3 > zoo3/data/myid
- };;
- "status"){
- cd /app/zookeeper-3.5.7/bin
- ./zkServer.sh status ../pseudo/zoo1/conf/zoo1.cfg
- ./zkServer.sh status ../pseudo/zoo2/conf/zoo2.cfg
- ./zkServer.sh status ../pseudo/zoo3/conf/zoo3.cfg
- };;
- "client"){
- cd /app/zookeeper-3.5.7/bin
- ./zkCli.sh -server localhost:2181,localhost:2182,localhost:2183
- };;
- *){
- printf "Supported arguments: start, stop, status, client\n"
- };;
- esac
(2) Distributed cluster
cluster.sh:
- #!/bin/bash
- # Adjust the host list below to your own nodes (the cluster in section 3 uses 192.168.10.11-13)
- case $1 in
- "start"){
- for i in hadoop102 hadoop103 hadoop104
- do
- echo ---------- starting zookeeper on $i ------------
- ssh $i "/app/zookeeper-3.5.7/bin/zkServer.sh start"
- done
- };;
- "stop"){
- for i in hadoop102 hadoop103 hadoop104
- do
- echo ---------- stopping zookeeper on $i ------------
- ssh $i "/app/zookeeper-3.5.7/bin/zkServer.sh stop"
- done
- };;
- "status"){
- for i in hadoop102 hadoop103 hadoop104
- do
- echo ---------- zookeeper status on $i ------------
- ssh $i "/app/zookeeper-3.5.7/bin/zkServer.sh status"
- done
- };;
- esac
(3) Using the scripts
- # Edit the script (cluster.sh shown; pseudoCluster.sh is used the same way)
- [root@S-CentOS zookeeper-3.5.7]# vim cluster.sh
- # Make the script executable
- [root@S-CentOS zookeeper-3.5.7]# chmod u+x cluster.sh
- # Start the cluster
- [root@S-CentOS zookeeper-3.5.7]# ./cluster.sh start
- # Stop the cluster
- [root@S-CentOS zookeeper-3.5.7]# ./cluster.sh stop
6. xsync Synchronization Script
https://blog.csdn.net/nalw2012/article/details/98322637