ZooKeeper cluster deployment details


I. Introduction to ZooKeeper

ZooKeeper is a distributed, open-source coordination service for distributed applications. It is an open-source implementation of Google's Chubby and an important component of Hadoop and HBase. ZooKeeper provides basic services for coordinating distributed applications, exposing a set of common services to external applications: distributed synchronization (Distributed Synchronization), naming service (Naming Service), group maintenance (Group Maintenance), and so on. ZooKeeper's goal is to encapsulate key services that are complex and error-prone, and to provide users with a system that combines simple, easy-to-use interfaces with high performance and stable functionality. ZooKeeper offers a simple set of primitives and provides Java and C interfaces. The code distribution also includes recipes for distributed exclusive locks, elections, and queues, found in zookeeper-3.4.8\src\recipes; distributed locks and queues come in both Java and C versions, while the election recipe has only a Java version. ZooKeeper itself can be installed and run in standalone mode, but it is more powerful as a distributed ZooKeeper cluster (one Leader and multiple Followers), which guarantees the stability and availability of the cluster based on certain strategies, and thereby the reliability of distributed applications.

II. ZooKeeper standalone mode

First, download the ZooKeeper package from the Apache website http://zookeeper.apache.org/; we take version 3.4.8 (zookeeper-3.4.8.tar.gz) as an example. Deployment on a single server is very simple: after decompressing, the ZooKeeper server process can be started with a simple configuration.
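
As a sketch of the decompression step, assuming the tarball has already been downloaded into /usr/local (the install path used throughout this article):

cd /usr/local
tar -zxvf zookeeper-3.4.8.tar.gz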

Rename (or copy) zoo_sample.cfg in the zookeeper-3.4.8/conf directory to zoo.cfg; the content of the configuration file is as follows:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/zookeeper-3.4.8/conf
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60

The comments explain the meaning of each setting. Note that initLimit and syncLimit are measured in ticks: with tickTime=2000, initLimit=10 corresponds to 10 × 2000 ms = 20 s, and syncLimit=5 to 10 s.

Now start the ZooKeeper server process:

cd /usr/local/zookeeper-3.4.8/bin
zkServer.sh start

You can view the ZooKeeper server process with the jps command; its name is QuorumPeerMain.
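
For example (the process IDs below are illustrative):

$ jps
2841 QuorumPeerMain
3015 Jps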

Connect to the ZooKeeper server from a client with the following command:

zkCli.sh -server kinder:2181

Here kinder is the host name. If you are executing on the local machine, you can simply run:

zkCli.sh

You can then view the client connection information.

You can type help to view the basic commands of the ZooKeeper client.
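
As a quick sketch, a few common client commands (the /demo znode is just an example):

create /demo hello_data    # create a znode with some data
ls /                       # list the children of the root znode
get /demo                  # read the znode's data back
delete /demo               # remove the znode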

III. ZooKeeper distributed mode

The distributed deployment of ZooKeeper is also called cluster deployment, and it is relatively simple.

A ZooKeeper cluster is an independent distributed coordination service cluster. "Independent" means that any distributed application can use ZooKeeper to implement and simplify its coordination and management; this is possible thanks to ZooKeeper's data model (Data Model) and hierarchical namespace (Hierarchical Namespace) structure. For details, please refer to:

http://zookeeper.apache.org/doc/trunk/zookeeperOver.html

When designing a distributed application coordination service, the first thing to consider is how to organize the hierarchical namespace.

The following describes installation and configuration in distributed mode. The process is as follows:

Step 1: Configure the host name to IP address mapping

There are two key roles in a ZooKeeper cluster: Leader and Follower. All nodes in the cluster serve distributed applications as a whole, and each node in the cluster is connected to every other node. Therefore, when configuring a ZooKeeper cluster, the host name to IP address mapping on each node must include the mapping information of all the other nodes in the cluster.

For example, taking zoo1 as an example, the content of /etc/hosts on each node of the ZooKeeper cluster is as follows:

192.168.1.111 zoo1
192.168.1.112 zoo2
192.168.1.113 zoo3
192.168.1.114 zoo4
192.168.1.115 zoo5

ZooKeeper uses an election algorithm called leader election. Throughout the cluster's operation there is only one Leader; the rest are Followers. If the Leader fails while the cluster is running, the system uses this algorithm to elect a new Leader. Therefore, to guarantee interconnection between the nodes, the above mapping must be configured.

When the ZooKeeper cluster starts, it first elects a Leader; in the leader election process, the node that wins the election computation becomes the Leader. The architecture of the entire cluster is described at:

http://zookeeper.apache.org/doc/trunk/zookeeperOver.html#sc_designGoals

Step 2: Modify the ZooKeeper configuration file

First, on machine zoo1, unpack zookeeper-3.4.8.tar.gz and modify the configuration file conf/zoo.cfg so that its content is as follows:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.8/conf
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888
server.4=zoo4:2888:3888
server.5=zoo5:2888:3888
clientPort=2181

In each server.X entry, the first port (2888) is used by Followers to connect to the Leader, and the second port (3888) is used for leader election. For a full description of the configuration options above, refer to:

http://zookeeper.apache.org/doc/trunk/zookeeperStarted.html#sc_RunningReplicatedZooKeeper

Step 3: Remotely copy and distribute the installation files

ZooKeeper is now configured on zoo1, so the configured installation directory can be remotely copied to the corresponding directory on each of the other nodes in the cluster:

cd /usr/local
scp -r zookeeper-3.4.8/ root@zoo2:/usr/local/
scp -r zookeeper-3.4.8/ root@zoo3:/usr/local/
scp -r zookeeper-3.4.8/ root@zoo4:/usr/local/
scp -r zookeeper-3.4.8/ root@zoo5:/usr/local/

Step 4: Set myid

Create a myid file in the directory specified by the dataDir we configured. Its content is a single number that identifies the current host: the X in each server.X entry in conf/zoo.cfg is the number in that host's myid file. For example, the myid on zoo1 contains 1, the myid on zoo2 contains 2, and so on.
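
For example, with the dataDir configured above, the files could be created like this:

echo 1 > /usr/local/zookeeper-3.4.8/conf/myid    # run on zoo1
echo 2 > /usr/local/zookeeper-3.4.8/conf/myid    # run on zoo2
# ...and likewise 3, 4, 5 on zoo3, zoo4, zoo5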

Step 5: Start the ZooKeeper cluster

On each node of the ZooKeeper cluster, execute the script to start the ZooKeeper service:

cd /usr/local/zookeeper-3.4.8/bin
zkServer.sh start

You can view the zookeeper.out log after startup.
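
For example, assuming the service was started from the bin directory (where zookeeper.out is written by default):

tail -f /usr/local/zookeeper-3.4.8/bin/zookeeper.out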

The startup sequence here is zoo1 > zoo2 > zoo3 > zoo4 > zoo5. As each node starts, it tries to connect to the other nodes in the cluster; except for zoo5, which can reach all the others, every node starts before some of its peers, so exception logs will appear. This is normal: once a Leader has been elected, the cluster stabilizes.

Step 6: Installation verification

You can check the startup status through the ZooKeeper script, including the role of each node in the cluster (Leader or Follower). The following query is run on each node in the ZooKeeper cluster:

cd /usr/local/zookeeper-3.4.8/bin
zkServer.sh status

You will get output like the following: one node reports

Mode: leader

and the other four nodes report

Mode: follower

From the above status query results, it can be seen that zoo1 is the Leader of the cluster, and the remaining four nodes are Followers.
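
As a supplementary check, you can also query a node's mode with the stat four-letter command over the client port (this assumes nc is installed); the output includes a Mode: line:

echo stat | nc zoo1 2181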

In addition, you can connect to the ZooKeeper cluster through the client script. To a client, ZooKeeper is an ensemble: connecting to the cluster feels like using a single, exclusive service for the whole cluster, so a connection to the service can be established through any node.
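
For example, the client accepts a comma-separated connection string listing several nodes, so it can connect through (and fail over among) any of them:

zkCli.sh -server zoo1:2181,zoo2:2181,zoo3:2181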

IV. Supplementary description

The IP addresses in the host name mapping must match the actual configuration; only then can links be established normally between the nodes and the entire ZooKeeper cluster start successfully. Modify the /etc/hosts file on every node to configure the host name to IP address mapping for all nodes in the ZooKeeper cluster.
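
As a simple connectivity sketch using the host names from this article, you can verify that every node resolves and responds:

for h in zoo1 zoo2 zoo3 zoo4 zoo5; do ping -c 1 $h; done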

V. Reference links

You can refer to the following URLs to learn more about ZooKeeper.

http://zookeeper.apache.org/doc/trunk/zookeeperStarted.html

http://zookeeper.apache.org/doc/r3.3.2/zookeeperOver.html

http://zookeeper.apache.org/doc/r3.3.2/recipes.html

http://zookeeper.apache.org/doc/trunk/

http://wiki.apache.org/hadoop/ZooKeeper/Tutorial

http://wiki.apache.org/hadoop/ZooKeeper/FAQ

http://wiki.apache.org/hadoop/ZooKeeper/Troubleshooting

http://doc.okbase.net/congcong68/archive/112508.html

http://www.ibm.com/developerworks/cn/opensource/os-cn-zookeeper/


The end!
