ZooKeeper application scenarios

1.1. ZooKeeper application scenarios

1.1.1. Data publishing and subscription (configuration center, message queues such as MQ)

The publish/subscribe model, the so-called configuration center, is, as the name implies, a setup in which publishers publish data to ZK nodes for subscribers to obtain dynamically, thereby achieving centralized management and dynamic updating of configuration information. For example, global configuration information and the service address list of a service-oriented framework are very well suited to this pattern.

In short: whenever the publisher adds, deletes, or modifies data on the node, subscribers are notified and handle the corresponding events.
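Here is a minimal sketch of this pattern using the standard org.apache.zookeeper Java client; the connection string localhost:2181 and the /config/app znode path are hypothetical, chosen only for illustration:

```java
import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;

public class ConfigSubscriber {
    public static void main(String[] args) throws Exception {
        // Connect to a (hypothetical) ZooKeeper ensemble.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 5000, event -> {});

        String path = "/config/app"; // illustrative config node
        Watcher watcher = new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                if (event.getType() == Event.EventType.NodeDataChanged) {
                    try {
                        // Re-read the config and re-register the watch
                        // (a plain watch fires only once).
                        byte[] data = zk.getData(path, this, new Stat());
                        System.out.println("config updated: " + new String(data));
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        };

        // The initial read also registers the watch.
        byte[] data = zk.getData(path, watcher, new Stat());
        System.out.println("initial config: " + new String(data));
        Thread.sleep(Long.MAX_VALUE); // keep the subscriber alive
    }
}
```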

1.1.2. Load balancing

Load balancing here refers to soft load balancing. In a distributed environment, to ensure high availability, multiple copies of the same application or service provider are usually deployed as peers. Consumers then need to choose one of these peer servers to execute the relevant business logic. A typical case is load balancing between producers and consumers in message middleware.

For load balancing of publishers and subscribers in message middleware, both LinkedIn’s open-source Kafka and Alibaba’s open-source MetaQ use ZooKeeper to achieve load balancing between producers and consumers. Taking MetaQ as an example:

Producer load balancing: when sending a message, a MetaQ producer must choose a partition on some broker to send to. Therefore, while MetaQ is running, all brokers and their corresponding partition information are registered on designated ZK nodes. The default strategy is sequential round-robin: after the producer obtains the partition list through ZK, it organizes it into an ordered partition list by brokerId and partition, and then cycles through that list from beginning to end, picking one partition for each message it sends.
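A hedged sketch of what such producer-side round-robin could look like in Java; the /brokers/topics/{topic} layout and the "brokerId-partition" child-name format are assumptions for illustration, not MetaQ’s actual code:

```java
import org.apache.zookeeper.ZooKeeper;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class PartitionRoundRobin {
    private final ZooKeeper zk;
    private final AtomicLong counter = new AtomicLong();

    public PartitionRoundRobin(ZooKeeper zk) {
        this.zk = zk;
    }

    /** Pick the next partition by cycling through the sorted list fetched from ZK. */
    public String nextPartition(String topic) throws Exception {
        // Hypothetical layout: /brokers/topics/{topic} holds children such as
        // "0-0", "0-1", "1-0", meaning brokerId-partition; sorting them yields
        // a stable, ordered partition list.
        List<String> partitions = zk.getChildren("/brokers/topics/" + topic, false);
        Collections.sort(partitions);
        int index = (int) (counter.getAndIncrement() % partitions.size());
        return partitions.get(index);
    }
}
```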

Consumer load balancing: during consumption, a consumer consumes messages from one or more partitions, but a partition is consumed by only one consumer. MetaQ’s consumption strategy is:

1. Within the same group, each partition is assigned to only one consumer.

2. If the number of consumers in the same group is greater than the number of partitions, the extra consumers will not participate in consumption.

3. If the number of consumers in the same group is less than the number of partitions, some consumers need to undertake additional consumption tasks.

If a consumer fails or restarts, the other consumers perceive this change (through a ZooKeeper watch on the consumer list) and then rebalance again so that every partition still has a consumer consuming it; a sketch of such a rebalance follows.
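The sketch below shows one simple way to assign partitions to consumers according to the three rules above (modulo assignment over sorted lists); it is illustrative only and not MetaQ’s actual rebalancing code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ConsumerRebalance {
    /**
     * Assign partitions so each partition has exactly one consumer in the group;
     * surplus consumers get nothing, scarce consumers take extra partitions.
     */
    public static List<String> partitionsFor(String consumerId,
                                             List<String> consumers,
                                             List<String> partitions) {
        List<String> sortedConsumers = new ArrayList<>(consumers);
        List<String> sortedPartitions = new ArrayList<>(partitions);
        Collections.sort(sortedConsumers);
        Collections.sort(sortedPartitions);

        int me = sortedConsumers.indexOf(consumerId);
        List<String> mine = new ArrayList<>();
        for (int i = 0; i < sortedPartitions.size(); i++) {
            if (i % sortedConsumers.size() == me) { // simple modulo assignment
                mine.add(sortedPartitions.get(i));
            }
        }
        return mine;
    }
}
```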

1.1.3. Naming service (name server)

Naming services are also a common scenario in distributed systems. In a distributed system, a naming service lets client applications obtain the address of a resource or service, its provider, and other information by a specified name. The named entities are usually machines in the cluster, addresses of provided services, remote objects, and so on; collectively they can be called names. A common example is the service address list in some distributed service frameworks. By calling the node-creation API provided by ZK, it is easy to create a globally unique path, and this path can serve as a name.

Alibaba Group’s open-source distributed service framework Dubbo uses ZooKeeper as its naming service, maintaining the global service address list (see the Dubbo open-source project for details). In Dubbo’s implementation:

When a service provider starts, it writes its own URL address under the designated node /dubbo/${serviceName}/providers; this operation completes the service publication.

When a service consumer starts, it subscribes to the provider URL addresses under the /dubbo/${serviceName}/providers directory and writes its own URL address to the /dubbo/${serviceName}/consumers directory.

Note that all addresses registered with ZK are ephemeral nodes, which guarantees that service providers and consumers automatically perceive changes in resources. In addition, Dubbo also monitors at service granularity by subscribing to the information of all providers and consumers under the /dubbo/${serviceName} directory.
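A rough sketch of the registration pattern described above, written against the plain ZooKeeper Java API rather than Dubbo itself; the service name, provider URL, and connection string are hypothetical, and the parent paths are assumed to already exist:

```java
import org.apache.zookeeper.*;
import java.net.URLEncoder;

public class ProviderRegistration {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 5000, event -> {});
        String service = "com.example.DemoService";  // hypothetical service name
        String url = "dubbo://192.168.1.10:20880";   // hypothetical provider URL

        // Provider side: register an EPHEMERAL node, so the entry vanishes
        // automatically if the provider's session is lost. The URL is encoded
        // so its '/' characters are not treated as path separators.
        zk.create("/dubbo/" + service + "/providers/" + URLEncoder.encode(url, "UTF-8"),
                  new byte[0],
                  ZooDefs.Ids.OPEN_ACL_UNSAFE,
                  CreateMode.EPHEMERAL);

        // Consumer side: list the providers and watch the directory for changes.
        zk.getChildren("/dubbo/" + service + "/providers", true)
          .forEach(addr -> System.out.println("provider: " + addr));
    }
}
```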

1.1.4. Distributed notification/coordination

ZooKeeper’s watcher registration and asynchronous notification mechanism can nicely implement notification and coordination between different systems in a distributed environment, enabling real-time handling of data changes. The usual usage is that different systems register on the same znode on ZK and watch for changes to it (including changes to the znode’s own content and to its children). When one system updates the znode, the other systems receive a notification and react accordingly.

1. A heartbeat-detection mode: the detecting system and the detected system are not directly associated, but are instead associated through a node on ZK, which greatly reduces coupling between the systems.

2. A system-scheduling mode: a system consists of two parts, a console and a push system. The console’s responsibility is to control the push system so that it performs the appropriate push work. Some operations performed by administrators on the console actually modify the state of certain nodes on ZK; ZK notifies the clients that registered Watchers for these changes, namely the push system, which then carries out the corresponding push tasks.

3. A work-reporting mode: somewhat like a task-distribution system, once a subtask starts it registers a temporary node on ZK and regularly reports its progress (by writing the progress back to this temporary node), so that the task manager can know the task’s progress in real time. A sketch of this pattern follows the list.
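A minimal sketch of the work-reporting mode in Java; the /tasks/job-42 path and the progress format are assumptions, and the parent node is assumed to already exist:

```java
import org.apache.zookeeper.*;

public class TaskProgressReporter {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 5000, event -> {});

        // Hypothetical task node, ephemeral so it disappears if the worker dies
        // (assumes the parent /tasks/job-42 already exists).
        String node = zk.create("/tasks/job-42/worker-",
                                "0%".getBytes(),
                                ZooDefs.Ids.OPEN_ACL_UNSAFE,
                                CreateMode.EPHEMERAL_SEQUENTIAL);

        for (int pct = 10; pct <= 100; pct += 10) {
            Thread.sleep(1000); // pretend to do one slice of the work
            // Write progress back to the node; version -1 means "any version".
            zk.setData(node, (pct + "%").getBytes(), -1);
        }
        zk.close();
    }
}
```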

In short, using ZooKeeper for distributed notification and coordination can greatly reduce the coupling between systems.

If the above is vague, you can look at the following explanation:

Take Dubbo’s use of ZooKeeper as a registry as an example.

The service consumer first connects to the registry. Once a service provider registers in the registry, the service consumer is notified through an event notification (a provider has appeared) and can obtain the provider’s information in order to consume the service.

1.1.5. Cluster management and master election

1. Cluster machine monitoring: this is usually used in scenarios with high requirements on the state of machines in the cluster and on machine online rates, where changes in machines must be responded to quickly. In such scenarios, there is often a monitoring system that detects in real time whether cluster machines are alive. The usual practice has been for the monitoring system to probe each machine periodically by some means (such as ping), or for each machine to regularly report “I’m still alive” to the monitoring system. This approach works, but it has two obvious problems:

1. When the machines in the cluster change, many things have to be modified.

2. There is a certain delay.

Using two features of ZooKeeper, you can implement a different kind of cluster machine liveness monitoring system:

1. A client registers a Watcher on node x; if the child nodes of x change, the client is notified.

2. Create an EPHEMERAL type node. Once the client and server session ends or expires, the node will disappear.

For example, the monitoring system registers a Watcher on the /clusterServers node; whenever a machine is added dynamically, it simply creates an EPHEMERAL node under /clusterServers: /clusterServers/{hostname}. In this way, the monitoring system learns about machine additions and removals in real time; what to do next is the monitoring system’s own business.
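A minimal sketch of both sides of this approach, assuming a hypothetical /clusterServers parent node that already exists and a localhost ensemble:

```java
import org.apache.zookeeper.*;
import java.net.InetAddress;

public class ClusterMembership {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 5000, event -> {});
        String hostname = InetAddress.getLocalHost().getHostName();

        // Machine side: announce yourself with an EPHEMERAL node; if the session
        // dies, the node is removed and watchers on /clusterServers are notified.
        zk.create("/clusterServers/" + hostname,
                  new byte[0],
                  ZooDefs.Ids.OPEN_ACL_UNSAFE,
                  CreateMode.EPHEMERAL);

        // Monitoring side: watch the children of /clusterServers.
        zk.getChildren("/clusterServers", event -> {
            if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
                System.out.println("cluster membership changed");
                // re-read the children and re-register the watch here
            }
        });
    }
}
```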

2. Master election: this is the most classic application scenario of ZooKeeper.

In a distributed environment, the same business application is deployed on different machines, and some business logic (such as time-consuming computations or network I/O) often only needs to be executed by one machine in the entire cluster, while the other machines share the result. This greatly reduces duplicated work and improves performance, so master election is the main problem in this scenario.

Using ZooKeeper’s strong consistency, the global uniqueness of node creation can be guaranteed even under distributed high concurrency: when multiple clients request to create the /currentMaster node at the same time, only one client’s request can ultimately succeed. Using this feature, you can easily elect a master for a cluster in a distributed environment.
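A minimal sketch of this create-race in Java; the helper name and the use of an EPHEMERAL node are assumptions (an ephemeral node also frees the master slot if the winner’s session dies):

```java
import org.apache.zookeeper.*;

public class MasterElection {
    /** Try to become master; only one concurrent creator of /currentMaster can win. */
    public static boolean tryBecomeMaster(ZooKeeper zk, String myId) throws Exception {
        try {
            zk.create("/currentMaster",
                      myId.getBytes(),
                      ZooDefs.Ids.OPEN_ACL_UNSAFE,
                      CreateMode.EPHEMERAL); // disappears if the master's session dies
            return true;                     // we won the race
        } catch (KeeperException.NodeExistsException e) {
            return false;                    // someone else is already master
        }
    }
}
```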

An evolution of this scenario is dynamic master election, which makes use of EPHEMERAL_SEQUENTIAL nodes.

As mentioned above, of all client create requests, only one can succeed. With a slight change, all requests are allowed to succeed, but with a guaranteed creation order, so the result on ZK might look like this: /currentMaster/{sessionId}-1, /currentMaster/{sessionId}-2, /currentMaster/{sessionId}-3, … Each time, the machine with the smallest sequence number is selected as the Master. If that machine crashes, the node it created disappears immediately (it is ephemeral), and the machine with the next smallest sequence number becomes the Master.
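A sketch of the dynamic variant, assuming /currentMaster already exists as a persistent parent; the node naming and the helper are illustrative only:

```java
import org.apache.zookeeper.*;
import java.util.Collections;
import java.util.List;

public class DynamicMasterElection {
    /** Create an EPHEMERAL_SEQUENTIAL node and check whether ours has the smallest sequence. */
    public static boolean electAndCheck(ZooKeeper zk, String sessionTag) throws Exception {
        String me = zk.create("/currentMaster/" + sessionTag + "-",
                              new byte[0],
                              ZooDefs.Ids.OPEN_ACL_UNSAFE,
                              CreateMode.EPHEMERAL_SEQUENTIAL);

        List<String> candidates = zk.getChildren("/currentMaster", false);
        Collections.sort(candidates);
        // The lowest sequence number is the master; if it goes away,
        // the next smallest automatically takes over on re-check.
        return me.endsWith(candidates.get(0));
    }
}
```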

1. In a search system, if every machine in the cluster generates a full index, it is not only time-consuming but also cannot guarantee index-data consistency between machines. So the cluster’s Master generates the full index and then synchronizes it to the other machines. In addition, as a disaster-recovery measure for master election, a master can be specified manually at any time; that is, when the master information cannot be obtained from ZK, it can be obtained from somewhere else, for example via HTTP.

2. In HBase, ZooKeeper is also used to implement dynamic HMaster election. In HBase’s implementation, the address of the ROOT table and the address of the HMaster are stored on ZK, and each HRegionServer also registers itself in ZooKeeper as an ephemeral node, so that the HMaster can perceive the liveness of each HRegionServer at any time. At the same time, if the HMaster fails, a new HMaster is re-elected to run, thus avoiding the HMaster single point of failure.


1.1.6. Distributed lock

Distributed locks rely mainly on the strong data consistency that ZooKeeper guarantees for us. Lock services can be divided into two categories: one keeps exclusivity, and the other controls ordering.

1. Keeping exclusivity means that of all clients trying to acquire the lock, only one can ultimately acquire it successfully. The usual practice is to treat a znode on ZK as the lock and implement it via node creation: all clients attempt to create the /distribute_lock node, and the client whose creation succeeds owns the lock.
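A hedged sketch of the keep-exclusive lock; waiting for deletion via a watch and retrying is one illustrative choice, not the only possible implementation:

```java
import org.apache.zookeeper.*;
import java.util.concurrent.CountDownLatch;

public class ExclusiveLock {
    /** Block until we manage to create /distribute_lock, i.e. until we hold the lock. */
    public static void acquire(ZooKeeper zk) throws Exception {
        while (true) {
            try {
                zk.create("/distribute_lock", new byte[0],
                          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                return; // we created the node, so we own the lock
            } catch (KeeperException.NodeExistsException e) {
                CountDownLatch released = new CountDownLatch(1);
                // Wait for the current holder to delete the node (or its session to expire).
                if (zk.exists("/distribute_lock", event -> released.countDown()) != null) {
                    released.await();
                }
            }
        }
    }

    public static void release(ZooKeeper zk) throws Exception {
        zk.delete("/distribute_lock", -1); // version -1 matches any version
    }
}
```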

2. Controlling ordering means that all clients trying to acquire the lock will eventually be scheduled to execute, but in a global order. The approach is basically similar to the above, except that /distribute_lock already exists and clients create ephemeral sequential nodes under it (this can be specified via the node attribute CreateMode.EPHEMERAL_SEQUENTIAL). The parent node (/distribute_lock) maintains a sequence that guarantees the creation order of the child nodes, thereby forming a global ordering across clients.
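And a sketch of the ordering variant, assuming /distribute_lock already exists as a persistent parent; waiting on the predecessor node is omitted for brevity:

```java
import org.apache.zookeeper.*;
import java.util.Collections;
import java.util.List;

public class SequencedLock {
    /** Join the queue under /distribute_lock and report our position. */
    public static String enqueue(ZooKeeper zk) throws Exception {
        // EPHEMERAL_SEQUENTIAL: the parent assigns a monotonically increasing
        // suffix, which gives every client a place in a global ordering.
        String me = zk.create("/distribute_lock/lock-", new byte[0],
                              ZooDefs.Ids.OPEN_ACL_UNSAFE,
                              CreateMode.EPHEMERAL_SEQUENTIAL);

        List<String> queue = zk.getChildren("/distribute_lock", false);
        Collections.sort(queue);
        // The client whose node sorts first holds the lock; the others wait
        // for their predecessor (typically by setting a watch on it).
        System.out.println("my node " + me + ", head of queue " + queue.get(0));
        return me;
    }
}
```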
