Zookeeper


Working mechanism


Zookeeper coordinates the operation of the entire framework while running quietly in the background.

Zookeeper is an open-source, distributed Apache project that provides coordination services for distributed applications.

Zookeeper = file system + notification mechanism.

Features:

The number of machines in a cluster is always odd. Clusters of 3 and 4 have the same fault tolerance (the number of machines that can fail while the cluster keeps running), namely 1, because a majority must survive; the 4th machine just consumes extra resources. In general, a cluster of n servers tolerates (n-1)/2 failures, rounded down.


Data structure

Zookeeper's data is organized as a tree. Each node, called a znode, is both a folder (it can have children) and a file (it can hold data).


Applications: data synchronization; unified naming service, unified configuration management, unified cluster management, dynamic online/offline of server nodes, soft load balancing, etc.


Build a cluster

Download

https://zookeeper.apache.org/

1. Unzip to the target directory:

   [[email protected] software]$ tar -zxvf zookeeper-3.4.10.tar.gz -C /opt/module/

2. Rename the sample configuration file in the conf folder:

   mv zoo_sample.cfg zoo.cfg

3. Open zoo.cfg and point dataDir at a dedicated data directory:

   dataDir=/opt/module/zookeeper-3.4.10/zkData

   Then create the zkData folder under /opt/module/zookeeper-3.4.10/:

   [[email protected] zookeeper-3.4.10]$ mkdir zkData

4. Configure the cluster machines; each machine is assigned a different server id:

   server.1=hadoop101:2888:3888
   server.2=hadoop102:2888:3888
   server.3=hadoop103:2888:3888

5. Create a myid file inside the zkData folder whose content is this machine's server id.

6. Configure Zookeeper's log directory in the bin/zkEnv.sh file: change ZOO_LOG_DIR="." to a custom log directory, e.g. /opt/module/zookeeper-3.4.10/logs.

Start:

7. bin/zkServer.sh start

   Check whether the process started:

   [[email protected] zookeeper-3.4.10]$ jps   ##each QuorumPeerMain process provides the service
     4020 Jps
     4001 QuorumPeerMain

   Check status:

   [[email protected] zookeeper-3.4.10]$ bin/zkServer.sh status

   Start the client:

   [[email protected] zookeeper-3.4.10]$ bin/zkCli.sh   ##the client connects to the cluster

  WATCHER::

  WatchedEvent state:SyncConnected type:None path:null
  [zk: localhost:2181(CONNECTED) 0]

  [[email protected] zookeeper-3.4.10]$ jps
    3248 ZooKeeperMain   ##the client (zkCli)
    3076 QuorumPeerMain  ##the server
    3291 Jps

 Exit the client: [zk: localhost:2181(CONNECTED) 0] quit
Stop Zookeeper
[[email protected] zookeeper-3.4.10]$ bin/zkServer.sh stop

Storing data + notification mechanism. If the server a client is connected to is killed, the client starts getting "connection refused"; by default the client connects to the local server. If a server list was specified at startup and the current server is killed, the client jumps directly to another one.

[zk: localhost:2181(CONNECTED) 1] ls /zookeeper   ##a node that ships with zookeeper
[quota]
[zk: localhost:2181(CONNECTED) 2] ls / watch
[test2, test40000000004, zookeeper, test1]

Create on another client:

[zk: localhost:2181(CONNECTED) 0] create /data1 "heihei"   ##when creating a node you must tell it what the data is
Created /data1

[zk: localhost:2181(CONNECTED) 3]
WATCHER::
WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/

Creating a second node produces no reaction: a zookeeper watch fires only once. Every node has a watchingList, which is what gives the server its observation ability. `ls / watch` registers a watch on the root directory: the client applies, and zookeeper adds it to the root directory's watchingList. When the watched data changes, zookeeper notifies every registered client, crossing each one off the list as it is notified.

Creation flags: plain create; -s adds a sequence number; -e makes the node ephemeral (it disappears on restart or session timeout).

[zk: localhost:2181(CONNECTED) 2] create -s /data2 1235
Created /data20000000006
[zk: localhost:2181(CONNECTED) 3] create -e /linshi 001
Created /linshi
[zk: localhost:2181(CONNECTED) 6] quit
### Quitting...
2019-01-27 00:29:27,946 [myid:] - INFO [main:[email protected]684] - Session: 0x1688adaecec0001 closed
2019-01-27 00:29:27,954 [myid:] - INFO [main-EventThread:[email protected]519] - EventThread shut down for session: 0x1688adaecec0001

Different clients are independent of each other; only after the creating client quits does its ephemeral node disappear on the other client (within about 2s).

===> There are 4 node types in total:

Sequential persistent (-s):
[zk: localhost:2181(CONNECTED) 1] create -s /order 111
Created /order0000000009

Sequential ephemeral (-s -e):
[zk: localhost:2181(CONNECTED) 2] create -s -e /orderAndShort 222
Created /orderAndShort0000000010

Non-sequential persistent:
[zk: localhost:2181(CONNECTED) 3] create /long 333
Created /long

Non-sequential ephemeral (-e):
[zk: localhost:2181(CONNECTED) 4] create -e /short 444
Created /short
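For reference, the same four CLI combinations map directly onto CreateMode in the Java API; a minimal sketch using the zkClient built in the API section below (paths and data are taken from the transcript above):

    // (none)  -> PERSISTENT               persistent, no sequence suffix
    // -s      -> PERSISTENT_SEQUENTIAL    persistent, suffixed with a sequence number
    // -e      -> EPHEMERAL                deleted when the session ends
    // -s -e   -> EPHEMERAL_SEQUENTIAL     ephemeral, with a sequence suffix
    zkClient.create("/long", "333".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    zkClient.create("/order", "111".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
    zkClient.create("/short", "444".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    zkClient.create("/orderAndShort", "222".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);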

[zk: localhost:2181(CONNECTED) 2] get /test2 watch   ##watches the node's content; `ls ... watch` watches changes to the node's child paths
abcd
cZxid = 0x200000003
ctime = Sat Jan 26 15:23:14 CST 2019
mZxid = 0x200000003
mtime = Sat Jan 26 15:23:14 CST 2019
pZxid = 0x400000015
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 1

[zk: localhost:2181(CONNECTED) 3]
WATCHER::
WatchedEvent state:SyncConnected type:NodeDataChanged path:/test2

[zk: localhost:2181(CONNECTED) 4] create /test2/test0 123   ##changes the node's child paths by creating a child; the node's own value is unchanged
Created /test2/test0

[zk: localhost:2181(CONNECTED) 5] set /test2 QQ   ##set changes the node's value
cZxid = 0x200000003
ctime = Sat Jan 26 15:23:14 CST 2019
mZxid = 0x400000016
mtime = Sun Jan 27 00:53:19 CST 2019
pZxid = 0x400000015
cversion = 1
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 2
numChildren = 1

[zk: localhost:2181(CONNECTED) 6] stat /test2
cZxid = 0x200000003        ##which operation created the node: 0x2 means the server's 2nd startup; 00000003 is the operation number (hex) within that startup that created this node
ctime = Sat Jan 26 15:23:14 CST 2019   ##creation time, a long timestamp
mZxid = 0x400000016        ##the node's data was modified by operation 00000016 of the server's 4th startup
mtime = Sun Jan 27 00:53:19 CST 2019   ##modification time
pZxid = 0x400000015        ##a child node was created by operation 00000015 of the server's 4th startup
cversion = 1
dataVersion = 1            ##how many times the data has been modified
aclVersion = 0             ##ACL (access control list): network-level permission control deciding who on the network may access the node; version 0 is the open acl
ephemeralOwner = 0x0       ##0x0 for a persistent node; an ephemeral node shows its owner's session here, and the node disappears naturally when its owner goes offline
dataLength = 2             ##data length
numChildren = 1            ##number of child nodes

[zk: localhost:2181(CONNECTED) 1] create -e /testXXX 123
Created /testXXX
[zk: localhost:2181(CONNECTED) 2] stat /testXXX
cZxid = 0x400000019
ctime = Sun Jan 27 01:19:06 CST 2019
mZxid = 0x400000019
mtime = Sun Jan 27 01:19:06 CST 2019
pZxid = 0x400000019
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x1688adaecec0006   ##sessionid = 0x1688adaecec0006, the session id between this client and the server
dataLength = 3
numChildren = 0

delete removes only nodes without children; rmr deletes a node together with its children:

[zk: localhost:2181(CONNECTED) 3] rmr /test2
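The same distinction exists in the Java API: getChildren watches path changes while getData watches content changes. A minimal sketch, using the zkClient created in the API section below (Watcher, WatchedEvent and Stat come from org.apache.zookeeper; the path follows the transcript above):

    // Watch the *content* of /test2, like `get /test2 watch` in the CLI.
    Stat stat = new Stat();
    byte[] data = zkClient.getData("/test2", new Watcher() {
        public void process(WatchedEvent event) {
            // fires once on NodeDataChanged, e.g. after `set /test2 QQ`
            System.out.println(event.getType() + " " + event.getPath());
        }
    }, stat);
    System.out.println(new String(data));
    System.out.println("dataVersion=" + stat.getVersion()          // bumped by each set
            + " ephemeralOwner=0x" + Long.toHexString(stat.getEphemeralOwner()));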

Use ssh to start the cluster:

[[email protected] zookeeper-3.4.10]$ ssh hadoop103 /opt/module/zookeeper-3.4.10/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.

###The remote shell is non-interactive and has no JAVA_HOME, so source the profile first (note the space after `source`, and quote the command so `&&` runs on the remote host):

[[email protected] zookeeper-3.4.10]$ ssh hadoop103 "source /etc/profile && /opt/module/zookeeper-3.4.10/bin/zkServer.sh status"
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower

Alternatively, configure the JAVA_HOME environment variables on hadoop103:

#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_144
export PATH=$PATH:$JAVA_HOME/bin

[[email protected] zookeeper-3.4.10]$ ssh hadoop103 /opt/module/zookeeper-3.4.10/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower

API application

import java.io.IOException;
import java.util.List;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;
import org.junit.Before;
import org.junit.Test;

public class Zookeeper {

    private ZooKeeper zkClient;
    public static final String CONNECT_STRING = "hadoop101:2181,hadoop102:2181,hadoop103:2181";
    public static final int SESSION_TIMEOUT = 2000;

    @Before
    public void before() throws IOException {
        // Cluster address; session timeout; anonymous inner class as the callback function.
        // The main process does not stop; changes to the root node are reported through the callback.
        zkClient = new ZooKeeper(CONNECT_STRING, SESSION_TIMEOUT, new Watcher() { // create the ZooKeeper client
            public void process(WatchedEvent event) {
                // default callback; no node is being watched yet, so this body is optional
                System.out.println("default callback function");
            }
        });
    }

    // Create a znode
    @Test
    public void create() throws KeeperException, InterruptedException {
        String s = zkClient.create("/APITest", "123".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        // the ACL controls access to the node; EPHEMERAL_SEQUENTIAL = ephemeral sequential node
        System.out.println(s);
        Thread.sleep(Long.MAX_VALUE);
    }

    // Recursive --> can be called repeatedly
    @Test
    public void getChildren() throws KeeperException, InterruptedException {
        // Passing watch: true instead of a Watcher would use the default callback; a watch fires
        // only once, so the callback below re-registers itself to keep watching.
        List<String> children = zkClient.getChildren("/", new Watcher() {
            public void process(WatchedEvent watchedEvent) {
                try {
                    System.out.println("own callback function");
                    getChildren(); // watch the root directory again; zookeeper delivers the event via watcher.process(...)
                } catch (KeeperException e) {
                    e.printStackTrace();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
        for (String child : children) {
            System.out.println(child);
        }
        System.out.println("==================");
    }

    // Can be called repeatedly and keeps watching
    @Test
    public void testGet() throws KeeperException, InterruptedException {
        getChildren();
        Thread.sleep(Long.MAX_VALUE);
        // The main thread is blocked, which shows the callback does not run on the main thread
    }

    // Check whether a znode exists
    @Test
    public void exist() throws KeeperException, InterruptedException {
        Stat stat = zkClient.exists("/zookeeper1", false);
        if (stat == null) {
            System.out.println("node does not exist");
        } else {
            System.out.println(stat.getDataLength());
        }
    }
}
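The CLI's delete and rmr commands have Java counterparts. A minimal sketch of two more tests that would sit inside the same class (ZKUtil.deleteRecursive from org.apache.zookeeper.ZKUtil is assumed to be available in this 3.4.x client; the paths are illustrative):

    // delete only removes a node without children, like the CLI `delete`
    @Test
    public void deleteNode() throws KeeperException, InterruptedException {
        zkClient.delete("/APITest", -1); // version -1 skips the dataVersion check
    }

    // recursive delete, the Java analogue of the CLI `rmr`
    @Test
    public void deleteRecursive() throws KeeperException, InterruptedException {
        ZKUtil.deleteRecursive(zkClient, "/test2"); // assumes org.apache.zookeeper.ZKUtil
    }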

The principle of the listener


ClientCnxn.java creates two child threads:

    sendThread = new SendThread(clientCnxnSocket); // "connect": sendThread is responsible for network connection and communication
    eventThread = new EventThread();               // "listener": eventThread is responsible for watching

    class SendThread extends ZooKeeperThread
    public class ZooKeeperThread extends Thread

    public void start() {
        sendThread.start();  // the thread through which the client sends requests to zookeeper
        eventThread.start(); // when zookeeper data changes, zookeeper sends the change to eventThread,
                             // which receives the events and invokes the registered callback functions
    }


Paxos algorithm

A message-passing-based, highly fault-tolerant consensus algorithm; it follows the majority principle.

Messages arrive in different orders at different nodes, so data synchronization is hard to achieve with message passing alone.

ZAB protocol (the implementation of the Paxos algorithm in Zookeeper)

ZAB = Zookeeper Atomic Broadcast

How does Zookeeper ensure the global consistency of its data? Through the ZAB protocol.

The ZAB protocol has two modes: crash recovery, and normal execution of data writes.

When there is no leader, a leader is elected; when there is a leader, the cluster simply does its work.

Election mechanism


1) Half mechanism: the cluster is available as long as more than half of its machines survive, which is why Zookeeper is deployed on an odd number of servers.

2) Zookeeper does not designate Master and Slave in its configuration file. At runtime, however, one node becomes the Leader and the others become Followers; the leader is produced dynamically by the internal election mechanism.

3) Take a simple example to illustrate the entire election process.

Assume a Zookeeper cluster of five servers with ids 1 through 5, all newly started, that is, with no historical data, so they all store the same amount of data. Assume these servers start one after another.


(1) Server 1 starts and initiates an election. Server 1 votes for itself. At this time, server 1 has one vote, which is less than half of the votes (3 votes). The election cannot be completed, and the status of server 1 remains LOOKING;

(2) Server 2 starts, and another election is initiated. Servers 1 and 2 each vote for themselves and exchange ballot information. Server 1 finds that server 2's id is larger than the id it currently votes for (its own), so it changes its vote to recommend server 2. Now server 1 has 0 votes and server 2 has 2 votes; there is still no majority, so the election cannot be completed, and servers 1 and 2 remain in the LOOKING state;

(3) Server 3 is started and an election is initiated. At this time, both servers 1 and 2 will change their votes to server 3. The result of this poll: Server 1 has 0 votes, Server 2 has 0 votes, and Server 3 has 3 votes. At this time, server 3 has more than half of the votes, and server 3 is elected as the leader. Servers 1, 2 change their status to FOLLOWING, and server 3 changes their status to LEADING;

(4) Server 4 starts and initiates an election. At this time, servers 1, 2, and 3 are no longer in the LOOKING state, and the ballot information will not be changed. The result of the vote information exchange: Server 3 has 3 votes, and Server 4 has 1 vote. At this time, server 4 obeys the majority, changes the vote information to server 3, and changes the status to FOLLOWING;

(5) Server 5 starts and, like server 4, falls in line: it votes for server 3 and changes its status to FOLLOWING.

If instead all 5 machines start at the same time (no history, so equal zxids), server 5 is elected, because the largest myid wins the tie.

Criteria for deciding which server wins an election:

Compare zxid first (the id of the last write transaction the server executed; it indicates how new the server's data is, and a larger zxid means newer data);

if the zxids are equal, compare myid (the larger myid wins).
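A minimal sketch of that comparison rule (hypothetical Vote class; the real logic lives in Zookeeper's FastLeaderElection and also compares election epochs, which this sketch ignores):

    public class ElectionSketch {
        // hypothetical ballot
        static class Vote {
            final long myid; // server id from the myid file
            final long zxid; // newest transaction id the server has seen
            Vote(long myid, long zxid) { this.myid = myid; this.zxid = zxid; }
        }

        // does the candidate's ballot beat the one we currently hold?
        static boolean beats(Vote candidate, Vote current) {
            if (candidate.zxid != current.zxid) {
                return candidate.zxid > current.zxid; // newer data wins
            }
            return candidate.myid > current.myid;     // tie-break on server id
        }

        public static void main(String[] args) {
            System.out.println(beats(new Vote(2, 7), new Vote(5, 3))); // true: larger zxid
            System.out.println(beats(new Vote(5, 3), new Vote(2, 3))); // true: equal zxid, larger myid
        }
    }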

Data write process

Reads can be served by any node, because zookeeper keeps its global data consistent.


Each server node maintains a to-be-written queue; a write request is not written immediately but first joins this queue, and the write may ultimately succeed or fail.

The zxid of a newly arriving write operation must be larger than the zxids the server has already written; only then can the write enter the waiting queue (entries in the waiting queue have not been applied yet). For example, if zxid 3 has been written and a request with zxid 6 arrives, a later request with zxid = 5 is inserted ahead of 6; the queue is kept ordered.
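A minimal sketch of such an ordered waiting queue (hypothetical PendingWrite record; the real servers implement this inside the ZAB pipeline, not with this class):

    import java.util.PriorityQueue;

    public class WriteQueueSketch {
        // hypothetical pending-write record
        static class PendingWrite {
            final long zxid;
            final String path;
            PendingWrite(long zxid, String path) { this.zxid = zxid; this.path = path; }
        }

        public static void main(String[] args) {
            // the to-be-written queue is kept ordered by zxid
            PriorityQueue<PendingWrite> toBeWritten =
                    new PriorityQueue<>((a, b) -> Long.compare(a.zxid, b.zxid));
            toBeWritten.add(new PendingWrite(6, "/data1"));
            toBeWritten.add(new PendingWrite(5, "/data2")); // arrives later but sorts before 6
            while (!toBeWritten.isEmpty()) {
                System.out.println("write zxid=" + toBeWritten.poll().zxid);
            }
            // prints zxid=5 then zxid=6: the queue stays ordered
        }
    }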

Walking through the algorithm:

Success:

The leader receives success messages from more than half of the servers, itself included; with 3 servers, once 2 agree the leader broadcasts the commit, and only then does the data in each server's to-be-written queue get written successfully.

(For example, after executing set /data1 "Hello", the zxid on the server that handled it is the newest, but not necessarily the newest on the other servers: network communication has latency, while local operations are fast.)

Failure:

server1 receives a write request and hands it to the leader, which sends it to server1 and server2; at the same time server2 also receives a write request and hands it to the leader, which likewise sends it to servers 1 and 2.

The leader received them in order 1 then 2 (so request 1 gets the smaller zxid). Due to the network, server1 receives 1 then 2, but server2 receives 2 then 1; on server2 request 1's zxid is no longer the largest, so request 1 fails to join the queue.

Suppose the to-be-written queue holds two write requests, zxid=6 and zxid=7, forwarded to the leader at the same time, and the leader broadcasts them to all servers; it turns out everyone accepts number 7 first, so the leader tells everyone to write it.

Due to the network, request 6 only arrives afterwards, and by then the newest zxid=7 is greater than 6 ==> the write fails.

The leader first sends the write request, then approves it. During the send it does not necessarily receive success messages; if more than half report failure, the write fails, and the leader broadcasts for everyone to remove this data from their to-be-written queues.

A single node falls out of sync

5 nodes; the leader sends a write request; two nodes disagree and three agree; the leader broadcasts and all servers write the data. The two nodes that disagreed take themselves out of service: their data is now inconsistent and out of sync with the cluster, so they stop serving clients, go to the leader, and pull data in zxid order until they are synchronized again.

Through the ZAB protocol, on top of a message-passing model, zookeeper keeps its global data consistent.

A write request is first forwarded to the leader -> the leader forwards the write to all servers -> they vote to agree or disagree -> the leader tallies the votes and publishes the result -> it broadcasts to each server either to write the data or to remove the request from its queue.
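A minimal sketch of the leader's tally step (hypothetical names; the real protocol is ZAB's two-phase proposal/commit):

    public class QuorumSketch {
        // the leader commits a proposal only when acks (its own included)
        // exceed half of the ensemble
        static boolean commitDecision(int ensembleSize, int acks) {
            return acks > ensembleSize / 2;
        }

        public static void main(String[] args) {
            System.out.println(commitDecision(3, 2)); // true  -> broadcast "write"
            System.out.println(commitDecision(3, 1)); // false -> broadcast "remove from queue"
        }
    }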

Observer

Observers. As the cluster scales out (more machines, horizontally), writing data becomes more and more cumbersome and write throughput drops, while concurrent read capacity keeps increasing.

To resolve this tension, observers are introduced: they follow commands but do not vote. They serve reads to clients but take no part in voting (they do not care whether a write succeeds; apart from having no voting rights they behave like the other servers).

For example, with 3 servers plus 2 observers, write performance is still determined by the original 3 voters, so writes do not slow down, while the cluster's concurrent read performance improves substantially.

Usually a cluster is deployed inside one data center, but in some large companies the zookeeper cluster may span multiple data centers.

For example, three data centers DC1, DC2, DC3, each with 3 servers; the leader is in DC1.

Of the 3 machines in DC1, one is the leader and the other two are followers; the 6 zookeeper servers in DC2 and DC3 act as observers and do not participate in voting.
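For reference, observers are declared in zoo.cfg rather than elected; a minimal sketch following the Zookeeper Observers guide (the server id and host name here are illustrative):

    # in the observer machine's own zoo.cfg:
    peerType=observer

    # in every server's zoo.cfg, tag the observer's entry:
    server.4=hadoop104:2888:3888:observer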

