Eureka’s working principle and how it differs from ZooKeeper

1. Introduction to Eureka:

Eureka is a service registration and discovery tool from Netflix. Spring Cloud integrates Eureka and provides out-of-the-box support for it. Eureka itself is split into two components: Eureka Server and Eureka Client.

[Figure: official Eureka architecture diagram, showing a cluster deployment]

1.1 Basic Principle

The figure above is the official Eureka architecture diagram, based on a cluster configuration:
– Eureka Servers on different nodes synchronize data with each other (Replicate)
– Application Service is the service provider
– Application Client is the service consumer
– Make Remote Call denotes a remote service invocation

After a service starts, it registers with a Eureka Server, which then synchronizes the registration information to the other Eureka Servers in the cluster. When a service consumer wants to call a provider, it obtains the provider’s address from the registry and caches it locally; subsequent calls read the address directly from the local cache.
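The register-then-cache flow described above can be sketched as a toy model in Python. Everything here (class names, methods) is illustrative and not the real Eureka API:

```python
class ToyRegistry:
    """A minimal stand-in for a Eureka Server's registry (illustrative only)."""
    def __init__(self):
        self._instances = {}  # service name -> list of provider addresses

    def register(self, service, address):
        self._instances.setdefault(service, []).append(address)

    def lookup(self, service):
        return list(self._instances.get(service, []))


class ToyClient:
    """A consumer that caches provider addresses locally, as Eureka clients do."""
    def __init__(self, registry):
        self._registry = registry
        self._cache = {}

    def resolve(self, service):
        # The first call hits the registry; later calls are served from the cache.
        if service not in self._cache:
            self._cache[service] = self._registry.lookup(service)
        return self._cache[service]


registry = ToyRegistry()
registry.register("order-service", "10.0.0.1:8080")  # provider registers on startup
client = ToyClient(registry)
print(client.resolve("order-service"))  # fetched from the registry
print(client.resolve("order-service"))  # served from the local cache
```

Note that the cache is what makes later calls cheap, and also what makes the data potentially stale — the trade-off discussed in section 3.2.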

When the Eureka Server detects that a service provider is unavailable because of a crash or a network problem, it marks the instance as DOWN in the registry and publishes the status change to subscribers; subscribed service consumers then update their local caches.

After the service provider starts, it periodically (every 30 seconds by default) sends a heartbeat to the Eureka Server to prove that it is still alive. If the Eureka Server does not receive a client’s heartbeat within a certain period (90 seconds by default), it considers the service down and deregisters the instance.
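The heartbeat and lease-expiry rule can be illustrated in a few lines of Python. The 30 s / 90 s figures are the defaults quoted above; the `Lease` class itself is purely hypothetical:

```python
RENEWAL_INTERVAL_S = 30  # client heartbeat period (Eureka default)
LEASE_EXPIRY_S = 90      # eviction threshold without a heartbeat (Eureka default)


class Lease:
    """Tracks the last heartbeat for one registered instance (sketch)."""
    def __init__(self, now):
        self.last_renewal = now

    def renew(self, now):
        # Called whenever a heartbeat arrives from the instance.
        self.last_renewal = now

    def is_expired(self, now):
        # The server evicts the instance once the lease has gone stale.
        return now - self.last_renewal > LEASE_EXPIRY_S


lease = Lease(now=0)
lease.renew(now=30)               # heartbeat arrives on schedule
print(lease.is_expired(now=60))   # False: last heartbeat only 30 s ago
print(lease.is_expired(now=121))  # True: 91 s of silence -> deregister
```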

2. Eureka’s self-preservation mechanism

In the default configuration, Eureka Server deregisters an instance if it receives no heartbeat from it within 90 seconds. But because microservices call each other across processes, network communication faces all kinds of problems: a microservice may be perfectly healthy while a network partition prevents its heartbeats from reaching the server. Deregistering instances in that situation is dangerous, because it can make most of the microservices appear unavailable even though nothing is actually wrong with them.

To solve this problem, Eureka has a self-preservation mechanism, which is controlled by the following Eureka Server property:

eureka.server.enable-self-preservation=true
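A few related properties are usually tuned together with this switch. The names and defaults below are from Spring Cloud Netflix, but verify them against the version you run:

```properties
# Fraction of expected heartbeats below which self-preservation kicks in
eureka.server.renewal-percent-threshold=0.85
# How often the server's eviction task runs
eureka.server.eviction-interval-timer-in-ms=60000
# Client side: heartbeat period and lease expiry (the 30 s / 90 s defaults)
eureka.instance.lease-renewal-interval-in-seconds=30
eureka.instance.lease-expiration-duration-in-seconds=90
```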

The principle is this: when a Eureka Server node loses too many client heartbeats in a short period of time (most likely because of a network failure), the node enters self-preservation mode and stops deregistering any microservice. When the network failure is resolved, the node automatically exits self-preservation mode.
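"Loses too many clients" has a concrete meaning: the server compares the heartbeats it actually received in a window against the number it expected. The sketch below is a simplification (the real server computes a per-minute expected renewal rate); the 0.85 threshold is Eureka's default `renewal-percent-threshold`:

```python
RENEWAL_PERCENT_THRESHOLD = 0.85  # Eureka's default renewal-percent-threshold


def self_preservation_active(expected_renewals, received_renewals):
    """Enter self-preservation when renewals fall below 85% of expectations."""
    return received_renewals < expected_renewals * RENEWAL_PERCENT_THRESHOLD


# 100 registered instances, but only 70 heartbeats arrived in the window:
print(self_preservation_active(100, 70))  # True: stop evicting instances
print(self_preservation_active(100, 95))  # False: normal churn, keep evicting
```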

The architectural philosophy behind self-preservation is: better to retain every instance, healthy or not, than to mistakenly deregister a single healthy one.

3. As a service registry, Eureka is better than ZooKeeper

The famous CAP theorem states that a distributed system cannot simultaneously satisfy C (consistency), A (availability), and P (partition tolerance). Since partition tolerance must be guaranteed in a distributed system, we can only trade off between A and C. ZooKeeper guarantees CP, while Eureka guarantees AP.

3.1 ZooKeeper guarantees CP

When querying the registry for a service list, we can tolerate it returning registration information from a few minutes ago, but we cannot accept the registry itself being down and unusable. In other words, the service registration function demands availability more than consistency. ZooKeeper, however, can fail in exactly this way: when the master node loses contact with the other nodes due to a network failure, the remaining nodes re-elect a leader. The problem is that leader election takes too long, 30 to 120 seconds, and the entire ZooKeeper cluster is unavailable during the election, which paralyzes the registration service for that time. In a cloud deployment environment, losing the master node to a network problem is a high-probability event. Although the cluster can eventually recover, the long unavailability caused by a lengthy election is intolerable for a registry.

3.2 Eureka guarantees AP

Eureka understood this and prioritized availability in its design. All Eureka nodes are equal: the failure of a few nodes does not affect the remaining ones, which can still serve registrations and queries. If a Eureka client fails to register with or connect to one node, it automatically switches to another. As long as a single Eureka node is up, the registration service remains available (availability is guaranteed), though the information returned may not be the latest (strong consistency is not guaranteed). In addition, Eureka has the self-preservation mechanism described above: if more than 85% of instances fail to send a normal heartbeat within 15 minutes, Eureka assumes there is a network failure between the clients and the registry, and the following happens:
1. Eureka no longer removes instances from the registry merely because their heartbeats have not been received for a long time.
2. Eureka still accepts new registrations and queries, but does not replicate them to other nodes (i.e., the current node remains available).
3. When the network stabilizes, the new registration information on the current node is synchronized to the other nodes.
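The client-side failover behaviour described above ("automatically switch to other nodes") can be sketched as a simple retry loop over peer URLs. The peer addresses and helper functions here are made up for illustration; a real Eureka client works through its configured `serviceUrl` list:

```python
# Hypothetical list of peer registry URLs; the client tries them in order.
PEERS = ["http://eureka-a:8761", "http://eureka-b:8761", "http://eureka-c:8761"]


def register_with_failover(peers, try_register):
    """Try each peer until one accepts the registration (sketch of AP behaviour)."""
    for peer in peers:
        try:
            try_register(peer)
            return peer        # registration succeeded on this node
        except ConnectionError:
            continue           # node unreachable: fall back to the next peer
    raise RuntimeError("no registry node reachable")


def fake_register(peer):
    # Simulate the first two nodes being partitioned away.
    if peer != "http://eureka-c:8761":
        raise ConnectionError(peer)


print(register_with_failover(PEERS, fake_register))  # -> http://eureka-c:8761
```

As long as one peer in the list answers, registration succeeds — which is exactly the availability guarantee the text describes.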

Therefore, Eureka copes well with network failures that cut off some nodes, without the entire registration service collapsing the way a ZooKeeper-based registry can.

4. Summary

As a pure service registry, Eureka is more “professional” than ZooKeeper, because availability matters more to a registry than consistency; a short period of inconsistency is acceptable. However, the current 1.x version of Eureka is a servlet-based Java web application, which caps its peak performance. The 2.x version, still under development, is expected to break free of the servlet container and become an independently deployable, executable service.

Reposted from: Zhou Hengxia https://www.cnblogs.com/snowjeblog/p/8821325.html
