1. Create a zookeeper directory under the filesystem root (do this on service1, service2, and service3):
[root@localhost /]# mkdir zookeeper
Then upload zookeeper-3.4.6.tar.gz to the /software directory on service1 (for example via Xshell's file transfer).
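If you prefer the command line to Xshell, you can also push the tarball from your own machine with scp (a sketch, assuming the tarball sits in your current directory and /software already exists on service1):
scp zookeeper-3.4.6.tar.gz root@192.168.2.211:/software/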
2. From service1, copy /software/zookeeper-3.4.6.tar.gz to service2 and service3 with scp:
[root@localhost software]# scp /software/zookeeper-3.4.6.tar.gz root@192.168.2.212:/software/
[root@localhost software]# scp /software/zookeeper-3.4.6.tar.gz root@192.168.2.213:/software/
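Optionally, confirm the tarball really landed on service2 and service3 before moving on:
[root@localhost software]# ls -lh /software/zookeeper-3.4.6.tar.gz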
3. Copy /software/zookeeper-3.4.6.tar.gz into /zookeeper/ (run on service1, service2, and service3):
[root@localhost software]# cp /software/zookeeper-3.4.6.tar.gz /zookeeper/
4. Extract zookeeper-3.4.6.tar.gz (run on service1, service2, and service3):
[root@localhost /]# cd /zookeeper/
[root@localhost zookeeper]# tar -zxvf zookeeper-3.4.6.tar.gz
5. Create two directories under /zookeeper: zkdata and zkdatalog (on service1, service2, and service3):
[root@localhost zookeeper]# mkdir zkdata
[root@localhost zookeeper]# mkdir zkdatalog
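The two directories can also be created in one go; mkdir -p additionally keeps the command safe to re-run if they already exist:
[root@localhost zookeeper]# mkdir -p /zookeeper/zkdata /zookeeper/zkdatalog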
6. Change into the /zookeeper/zookeeper-3.4.6/conf/ directory:
[root@localhost zookeeper]# cd /zookeeper/zookeeper-3.4.6/conf/
[root@localhost conf]# ls
configuration.xsl  log4j.properties  zoo.cfg  zoo_sample.cfg
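Note that a freshly extracted 3.4.6 package ships only zoo_sample.cfg; if zoo.cfg does not show up in your listing, create it from the sample before editing:
[root@localhost conf]# cp zoo_sample.cfg zoo.cfg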
7. Edit the zoo.cfg file:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/zookeeper/zkdata
dataLogDir=/zookeeper/zkdatalog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.2.211:12888:13888
server.2=192.168.2.212:12888:13888
server.3=192.168.2.213:12888:13888
8. Apply the same zoo.cfg changes on service2 and service3 (see the scp sketch below).
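One way to keep the file in sync (a sketch, assuming the same install path on all three machines) is to scp the edited copy from service1:
[root@localhost conf]# scp /zookeeper/zookeeper-3.4.6/conf/zoo.cfg root@192.168.2.212:/zookeeper/zookeeper-3.4.6/conf/
[root@localhost conf]# scp /zookeeper/zookeeper-3.4.6/conf/zoo.cfg root@192.168.2.213:/zookeeper/zookeeper-3.4.6/conf/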
9. Write the myid file on service1 (in the /zookeeper/zkdata directory):
[root@localhost /]# cd /zookeeper/zkdata
[root@localhost zkdata]# echo 1 > myid
10. Write the myid file on service2 and service3 in the same way (in /zookeeper/zkdata):
[root@localhost zkdata]# echo 2 > myid   # on service2
[root@localhost zkdata]# echo 3 > myid   # on service3
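It is worth a quick check that each node ended up with the id you expect (1, 2, or 3):
[root@localhost zkdata]# cat /zookeeper/zkdata/myid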
11. Take a look at the ZooKeeper scripts:
[root@localhost ~]# cd /zookeeper/zookeeper-3.4.6/bin/
[root@localhost bin]# ls
README.txt  zkCleanup.sh  zkCli.cmd  zkCli.sh  zkEnv.cmd  zkEnv.sh  zkServer.cmd  zkServer.sh  zookeeper.out
12. Run zkServer.sh with no arguments to see its usage:
[root@localhost bin]# ./zkServer.sh
JMX enabled by default
Using config: /zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg
Usage: ./zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}
13. Start the ZooKeeper service on service1, service2, and service3:
[root@localhost bin]# ./zkServer.sh start
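If the service does not come up, the first place to look is the zookeeper.out log that zkServer.sh writes into the directory it was started from:
[root@localhost bin]# tail -n 50 /zookeeper/zookeeper-3.4.6/bin/zookeeper.out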
14. Check the ZooKeeper process with jps:
[root@localhost bin]# jps
31483 QuorumPeerMain
31664 Jps
15. Check the ZooKeeper status on service1, service2, and service3 (you should see one leader and two followers):
[root@localhost bin]# ./zkServer.sh status
JMX enabled by default
Using config: /zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
[root@localhost bin]# ./zkServer.sh status
JMX enabled by default
Using config: /zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader
16. Seeing a leader and followers confirms that the cluster has been installed successfully.
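As one last sanity check (not part of the original steps), you can connect to any node with the bundled CLI and list the root path; on a fresh cluster this should return only the built-in /zookeeper znode:
[root@localhost bin]# ./zkCli.sh -server 192.168.2.211:2181
[zk: 192.168.2.211:2181(CONNECTED) 0] ls /
[zookeeper]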