Distributed systems roundup

What are dubbo's load balancing strategies and cluster fault tolerance strategies?

Load balancing strategies

  • Random

    By default, dubbo uses random load balancing. You can set different weights on different provider instances, and load is balanced according to those weights: the larger the weight, the more traffic an instance receives. The default is usually fine.

  • Round Robin

    There is also RoundRobinLoadBalance, which by default sends traffic evenly to every machine. If the machines differ in performance, the weaker machines can easily end up overloaded, so in that case you need to adjust the weights: give the poorly performing machines a smaller weight so they receive less traffic.

  • Least Active

    This one is self-adapting: the worse a machine's performance, the more requests remain active (in flight) on it, and the fewer new requests it receives; poorly performing machines are automatically sent less traffic.

  • Consistent Hash

    With the consistent hash algorithm, requests with the same parameters are always sent to the same provider. When that provider goes down, the remaining traffic is spread evenly across the other providers based on virtual nodes, so the jitter is not too large.
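A rough configuration sketch (the interface and the fooServiceImpl bean are placeholders; loadbalance accepts random, roundrobin, leastactive, or consistenthash, and weight is what the weighted strategies use):

<dubbo:reference id="fooService" interface="com.test.service.FooService" loadbalance="leastactive" />

<dubbo:service interface="com.test.service.FooService" ref="fooServiceImpl" weight="200" />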

Cluster fault tolerance strategies

  • Failover

    On failure, automatically switch to and retry another machine. This is the default, and it is commonly used for read operations.

  • Failfast

    If a call fails, fail immediately. Commonly used for write operations.

  • Failsafe

    Ignore the exception when one occurs. Often used for unimportant interface calls, such as logging.

  • Failback

    On failure, record the request in the background and resend it periodically. Suited to operations such as writing to a message queue.

  • Forking

    Call multiple providers in parallel and return as soon as one succeeds.

  • Broadcast

    Call all providers one by one.
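The cluster strategy and retry count are chosen per reference; a rough sketch with placeholder names (cluster accepts failover / failfast / failsafe / failback / forking / broadcast):

<dubbo:reference id="fooService" interface="com.test.service.FooService" cluster="failover" retries="2" />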

Dynamic proxy strategy?

By default, dubbo uses javassist dynamic bytecode generation to create proxy classes, but you can configure your own dynamic proxy strategy through the SPI extension mechanism.

What is dubbo’s spi idea?

In short, SPI stands for service provider interface. Plainly put: suppose you have an interface, and this interface has three implementation classes. Which implementation class should the system pick for this interface at runtime? That is what SPI is for: according to a specified configuration (or the default configuration), it finds the corresponding implementation class, loads it, and then uses an instance of that implementation class.

Interface A -> implementations A1, A2, A3

Configure it: interface A = A2

When the system actually runs, it loads your configuration, instantiates A2 for interface A, and uses that instance to provide the service.

For example, you want to provide an implementation for an interface via a jar package. You put a file named after the interface in the META-INF/services/ directory of your jar, and in it specify that the implementation of the interface is a certain class in your jar. When someone else uses that interface together with your jar, at runtime the file in your jar tells them which implementation class to use for the interface.

This is a feature provided by the JDK.

For example: you have a project A with an interface A, and interface A has no implementation class inside project A. How should the system choose an implementation class for interface A at runtime?

You can build a jar package yourself with a META-INF/services/ directory containing a file named after the interface (interface A), whose content maps interface A to com.xxx.service.A2. Make project A depend on your jar. When project A runs and needs interface A, it scans all the jars it depends on, looking in each for a META-INF/services folder; if one exists, it looks for a file named after interface A, and if that file exists, it reads which class in your jar is the specified implementation of interface A.
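A minimal sketch of the JDK mechanism (the interface and implementation names are hypothetical):

import java.util.ServiceLoader;

// Hypothetical interface; a provider jar ships an implementation class plus a
// META-INF/services file named after the fully qualified interface name,
// whose content lists that implementation class.
interface InterfaceA {
    void doSomething();
}

public class SpiDemo {
    public static void main(String[] args) {
        // ServiceLoader scans META-INF/services/<fully qualified interface name>
        // in every jar on the classpath and instantiates the listed classes
        ServiceLoader<InterfaceA> loader = ServiceLoader.load(InterfaceA.class);
        for (InterfaceA impl : loader) {
            impl.doSomething();
        }
    }
}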

Where is the SPI mechanism generally used? In plugin extension scenarios. For example, you develop an open source framework for others to use, and you want others to be able to write a plugin, plug it into your framework, and extend some piece of functionality.

The classic embodiment of this idea is JDBC.

Java defines a set of JDBC interfaces, but Java does not provide the JDBC implementation classes.

But when the project runs, which implementation classes of the JDBC interfaces should be used? Generally, it depends on the database you use: for MySQL you bring in mysql-jdbc-connector.jar; for Oracle you bring in oracle-jdbc-connector.jar.

When the system runs and encounters your use of the JDBC interfaces, it uses, underneath, the implementation classes provided in the jar you introduced.

dubbo also uses the SPI idea, but it does not use the JDK's SPI mechanism; it has an SPI mechanism implemented by itself.

Protocol protocol = ExtensionLoader.getExtensionLoader(Protocol.class).getAdaptiveExtension();

Take the Protocol interface: dubbo needs to decide, at runtime, which implementation class of the Protocol interface it should instantiate.

It finds the protocol you configured, loads the configured Protocol implementation class into the jvm, instantiates it, and from then on uses your Protocol implementation class.

Microkernel, pluggable, a large number of components: Protocol is responsible for the rpc-call side of things, and you can implement your own rpc-call component by implementing the Protocol interface and supplying your own implementation class.

This line of code is used extensively in dubbo: for many components there is one interface with multiple implementations, and the matching implementation class is found dynamically from configuration at runtime. If you configure nothing, the default implementation is used; no problem.

@SPI("dubbo")
public interface Protocol {

    int getDefaultPort();

    @Adaptive
    <T> Exporter<T> export(Invoker<T> invoker) throws RpcException;

    @Adaptive
    <T> Invoker<T> refer(Class<T> type, URL url) throws RpcException;

    void destroy();
}

In dubbo's own jar, in the file /META-INF/dubbo/internal/com.alibaba.dubbo.rpc.Protocol:

dubbo = com.alibaba.dubbo.rpc.protocol.dubbo.DubboProtocol
http = com.alibaba.dubbo.rpc.protocol.http.HttpProtocol
hessian = com.alibaba.dubbo.rpc.protocol.hessian.HessianProtocol

So we can see how dubbo's SPI mechanism works by default. Take the Protocol interface: @SPI("dubbo") says the implementation class is provided through the SPI mechanism, found in a configuration file with dubbo as the default key. The configuration file's name is the fully qualified name of the interface, and the key dubbo resolves to the default implementation, com.alibaba.dubbo.rpc.protocol.dubbo.DubboProtocol.

dubbo's default network communication protocol is the dubbo protocol, which uses DubboProtocol.

If you want to replace the default implementation class dynamically, you use the @Adaptive annotation. On the Protocol interface, two methods are annotated with @Adaptive, meaning those two methods will be implemented by a generated proxy.

That is, for this Protocol interface with two @Adaptive-annotated methods, dubbo generates a proxy class for Protocol. Those two methods of the proxy class contain proxy code that dynamically takes the protocol key from the url. The default key is dubbo, but you can also specify it yourself; if you specify another key, you get an instance of another implementation class.

In other words, through the parameters spliced into the url, you can dynamically control which implementation class of a component is used.

Well, let's talk about how to extend the components in dubbo yourself.

Write a project yourself, the kind that can be packaged into a jar. In its src/main/resources directory, create a META-INF/services directory and put a file in it named com.alibaba.dubbo.rpc.Protocol, containing the line my=com.xxx.MyProtocol. Then publish the jar to your nexus private repository yourself.

Then build a dubbo provider project yourself, have that project depend on the jar you built, and then configure it in the spring configuration file:

When the provider starts this time, it loads the my=com.xxx.MyProtocol line of configuration from our jar package, and then uses the MyProtocol you defined, according to your configuration. That is the brief explanation: you can replace a large number of dubbo's internal components by the method above; just ship a jar package of your own and then configure it.

dubbo provides a large number of extension points like the one above: if you want to extend something, just write a jar yourself, let your consumer or provider project depend on your jar, configure a file named after the interface in the designated directory of your jar, and write key=implementation class inside. A rough sketch follows.
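A hedged sketch of the whole chain (com.xxx.MyProtocol is the hypothetical implementation from above). Inside your jar, the file src/main/resources/META-INF/services/com.alibaba.dubbo.rpc.Protocol contains:

my=com.xxx.MyProtocol

and the provider's spring configuration selects the extension by its key (the port is a placeholder):

<dubbo:protocol name="my" port="20000" />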

How to perform service governance, service degradation, failure retry, and timeout retry based on dubbo?

Service governance

  1. Automatic generation of call links

    A large-scale distributed system, or, to use the popular term, a microservice architecture: a distributed system composed of a large number of services. How do these services call each other? What is the call chain? Honestly, almost nobody can keep track of it after a while, because there may be hundreds or even thousands of services.

    In a dubbo-based distributed system, the calls between services can be recorded automatically, and the dependencies and call links between services generated automatically and rendered as a diagram everyone can look at.

    Service A -> Service B -> Service C
                           -> Service E
              -> Service D
              -> Service F
              -> Service W

  2. Service access pressure and duration statistics

    You need to automatically count the number of calls and the access latency between each interface and service, at two granularities. One is interface granularity: how many times each interface of each service is called per day, and what the TP50, TP90, and TP99 request latencies are. The other is from the source entry point: for a complete request link that passes through dozens of services, how many times per day the full link is executed, and what the TP50, TP90, and TP99 latencies of the full-link request are.

    Once all of that is in place, you can see where the current system pressure mainly lies and how to scale and optimize.

  3. Service availability

    Service layering (avoiding cyclic dependencies), call-link failure monitoring and alerting, service authentication, and availability monitoring for each service (interface call success rate: how many nines? 99.99%, 99.9%, 99%).

Service degradation

For example, service A calls service B, and service B goes down. Service A retries calling service B a few times and it still fails; then it degrades: take a fallback logic path and return a response to the user.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:dubbo="http://code.alibabatech.com/schema/dubbo"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://code.alibabatech.com/schema/dubbo http://code.alibabatech.com/schema/dubbo/dubbo.xsd">

    <dubbo:application name="dubbo-consumer" />

    <dubbo:registry address="zookeeper://127.0.0.1:2181" />

    <dubbo:reference id="fooService" interface="com.test.service.FooService"
                     timeout="10000" check="false" mock="return null">
    </dubbo:reference>
</beans>

You can also set mock to true and then implement a Mock class on the same package path as the interface; the naming rule is the interface name plus the Mock suffix. Then implement your own degradation logic in the Mock class.

public class HelloServiceMock implements HelloService {
    public void sayHello() {
        // degradation logic
    }
}

Failure retry

So-called failure retry: if the consumer's call to the provider fails, for example an exception is thrown, the call should be retried; a call can also be retried when it times out.

<dubbo:reference id="xxxx" interface="xx" check="true" async="false" retries="3" timeout="2000"/>

Suppose some service's interface takes 5s; you can't wait that long here. After you configure the timeout, you only wait 2s: if nothing has come back by then, just give up; you can't keep waiting.

If the call times out, timeout determines the cutoff; if the call fails, it is automatically retried the specified number of times.

You can explain how you set these parameters from your company's concrete scenarios. timeout is generally set to 200ms: we consider it unacceptable if nothing has returned after 200ms.

retries is set to something like 3, usually for read requests. For example, when querying data you can set retries: if the first read errors, the call is retried the specified number of times, e.g. reading 2 more times.

How to design idempotence for the interfaces of distributed services (for example, a deduction must not run twice)?

There are three main points to guaranteeing idempotence (a minimal sketch follows the list):

  1. Each request must carry a unique identifier
  2. After each request is processed, there must be a record indicating that this request has been processed
  3. Each time a request is received, check whether it has been processed before, and only then run the business logic
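A minimal sketch of the three points, assuming a hypothetical processed_request deduplication table whose request_id column has a UNIQUE constraint (the DataSource, table, and column names are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class IdempotentDeduction {

    private final DataSource dataSource; // assumed to be configured elsewhere

    public IdempotentDeduction(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    /**
     * Deduct money idempotently: requestId is the unique identifier the caller
     * must send with every retry of the same logical request.
     */
    public void deduct(String requestId, long accountId, long amountCents) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);
            try (PreparedStatement insert = conn.prepareStatement(
                    "INSERT INTO processed_request (request_id) VALUES (?)")) {
                // The UNIQUE constraint makes a repeated request fail right here
                insert.setString(1, requestId);
                insert.executeUpdate();
            } catch (SQLException duplicate) {
                conn.rollback();
                return; // already processed: do nothing, stay idempotent
            }
            try (PreparedStatement update = conn.prepareStatement(
                    "UPDATE account SET balance = balance - ? WHERE id = ?")) {
                update.setLong(1, amountCents);
                update.setLong(2, accountId);
                update.executeUpdate();
            }
            conn.commit(); // dedup record and deduction commit atomically
        }
    }
}

Because the dedup record and the balance update commit in one transaction, a retried request either finds its record and does nothing, or the earlier attempt never happened at all.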

How to design a dubbo-like rpc framework?

  1. Registration center

    Use a storage tool that can hold key-value pairs, such as zk, redis, or a database, to bind ip:port:interface-name to the concrete implementation class of the service provider; then the server starts listening on a socket.

  2. Dynamic proxy on the service consumer side

    The consumer side imports the interface provided by the server side and proxies it dynamically. The proxy object uses the IP:PORT:interface-name provided by the registration center to open a long-lived socket connection to the server, writes the method name and arguments to call, invokes the method remotely, and the server writes the result back through the socket. (See the proxy sketch after this list.)

  3. Custom communication protocol

    During socket data transmission you can define the wire protocol: json, Java Serializable, hessian, protobuf, and so on. Service providers and consumers must both follow this protocol.

  4. Request load balancing

    If multiple service providers provide the same service, the consumer's remote call requests need to be distributed among them.
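A minimal consumer-side sketch of point 2 using the JDK's dynamic proxy (the interface, registry lookup, and transport here are hypothetical placeholders, not a full protocol):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class RpcConsumerProxy {

    // Hypothetical service interface that the provider side implements
    interface HelloService {
        String sayHello(String name);
    }

    @SuppressWarnings("unchecked")
    static <T> T refer(Class<T> iface, String host, int port) {
        InvocationHandler handler = (proxy, method, args) -> {
            // In a real framework: open a socket to host:port (looked up from the
            // registry by interface name), serialize method.getName() and args,
            // send them, and deserialize the provider's reply
            return invokeRemote(host, port, iface.getName(), method, args);
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[]{iface}, handler);
    }

    // Placeholder for the network call; a real implementation would use sockets
    private static Object invokeRemote(String host, int port, String service,
                                       Method method, Object[] args) {
        return "stubbed result of " + service + "." + method.getName();
    }

    public static void main(String[] args) {
        HelloService hello = refer(HelloService.class, "127.0.0.1", 20880);
        System.out.println(hello.sayHello("world")); // goes through the proxy
    }
}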

Distributed service framework

What are the usage scenarios of zk?

  1. Distributed coordination (zk node watch mechanism)

    This is actually a classic zk usage. Simply put, it's like this: your system A sends a request to mq, and the message, once consumed, is processed by system B. How does system A learn system B's processing result? With zk, coordination between distributed systems can be achieved: after system A sends the request, it registers a listener on the value of some node in zk. Once system B finishes processing, it modifies that zk node's value, and A immediately receives the notification. A neat solution.

  2. Distributed lock (creating ordered ephemeral nodes)

    Suppose two consecutive modification operations are issued against the same data, and two machines receive the requests at the same time, but only one machine may execute first and the other afterwards. You can use a zk distributed lock: after one machine receives its request, it first acquires the distributed lock on zk, i.e. it creates a znode, and then performs its operation. The other machine also tries to create the znode and finds it cannot, because someone else already created it, so it can only wait and execute after the first machine has finished.

  3. Configuration Information Management

    Register a listener for a configuration node with zk through the client. Once the configuration node on zk has new or modified configuration, the client receives a notification from zk.

  4. High availability

    Many big data systems such as hadoop, hdfs, and yarn build their HA high-availability mechanism on zk: an important process generally runs active plus standby, and when the active process goes down, the standby takes over immediately via zk-based failure detection.

  5. Distributed locks

    What are the common ways to implement a distributed lock?

    • redis stand-alone implementation (setNX)

      The most common implementation is to create a key in redis to represent the lock.

      SET my:lock <random value> NX PX 30000 does the job. NX means the key is set only if it does not already exist, and PX 30000 means the lock is released automatically after 30 seconds. When someone else tries to create the key and finds it already exists, they fail to acquire the lock.

      Releasing the lock means deleting the key, but generally you delete it with a lua script that only deletes if the value matches:

      if redis.call("get", KEYS[1]) == ARGV[1] then
          return redis.call("del", KEYS[1])
      else
          return 0
      end

       

      Why use a random value? Because if some client acquires the lock but blocks for so long before finishing that the lock has already been released automatically, another client may have acquired the lock in the meantime; if you then simply delete the key, you delete someone else's lock. That is why you release the lock with a random value plus the lua script above.
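      A rough sketch of acquire/release with the Jedis client (assuming Jedis 3.x; the key name and timeouts are illustrative):

      import java.util.Collections;
      import java.util.UUID;
      import redis.clients.jedis.Jedis;
      import redis.clients.jedis.params.SetParams;

      public class RedisLock {

          private static final String RELEASE_SCRIPT =
                  "if redis.call('get', KEYS[1]) == ARGV[1] then "
                + "return redis.call('del', KEYS[1]) else return 0 end";

          public static void main(String[] args) {
              try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
                  String lockValue = UUID.randomUUID().toString(); // random value identifies the owner
                  // NX: only set if absent; PX 30000: auto-release after 30s
                  String ok = jedis.set("my:lock", lockValue, SetParams.setParams().nx().px(30_000));
                  if ("OK".equals(ok)) {
                      try {
                          // ... do the work protected by the lock ...
                      } finally {
                          // release only if we still own the lock
                          jedis.eval(RELEASE_SCRIPT,
                                  Collections.singletonList("my:lock"),
                                  Collections.singletonList(lockValue));
                      }
                  }
              }
          }
      }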

    • redis distributed implementation (RedLock algorithm): not recommended

      In a distributed Redis environment, assume there are N Redis masters. These nodes are completely independent of each other; there is no master-slave replication or any other cluster coordination mechanism. We make sure locks are acquired and released on the N instances with the same method used for a single Redis instance. Now assume there are 5 Redis master nodes, running on 5 servers so they will not all go down at once.

      To acquire the lock, the client performs the following steps:

      • Get the current Unix time in milliseconds.

      • Try to acquire the lock on the 5 instances in turn, using the same key and a unique value (e.g. a UUID). When requesting the lock from a Redis instance, the client should set a connection and response timeout smaller than the lock's expiry time. For example, if the lock auto-expires after 10 seconds, the timeout should be somewhere in the 5-50 millisecond range. This keeps the client from waiting hopelessly on a Redis server that has already died: if the server does not respond in time, the client should move on to the next Redis instance as quickly as possible.

      • The client computes the time spent acquiring the lock by subtracting the start time (recorded in step 1) from the current time. The lock counts as acquired if and only if it was obtained on a majority of the Redis nodes (N/2+1, here 3 nodes) and the total time spent is less than the lock's expiry time.

      • If the lock was acquired, its real validity time equals the expiry time minus the time spent acquiring it (computed in step 3).

      • If acquisition fails for some reason (the lock was not obtained on at least N/2+1 instances, or acquisition took longer than the validity time), the client should unlock on all Redis instances (even on instances where it believes it did not acquire the lock, to guard against a node granting the lock while the client never received the response, which would otherwise keep the lock unacquirable for a while).

    • zookeeper implementation

      A zk distributed lock can be fairly simple: some node tries to create an ephemeral znode; if the creation succeeds, it holds the lock. Other clients that then try to create the lock fail, and can only register a watcher on it. Releasing the lock means deleting the znode; once released, clients are notified, and a waiting client can try to acquire the lock again.

    Can zk be used to design a distributed lock? Yes; for example:

    package com.zookeeper.java.distributed_lock;

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    import java.util.List;
    import java.util.Random;
    import java.util.SortedSet;
    import java.util.TreeSet;
    import java.util.concurrent.CountDownLatch;

    /**
     * Distributed lock implemented on zk.
     *
     * @author zhuliang
     * @date 2019/6/19 12:17
     */
    public class DistributedLock {

        private ZooKeeper zooKeeper;
        private String root = "/LOCKS";
        private String lockId;
        private int sessionTimeout;
        private byte[] data = {1, 2};
        private CountDownLatch latch = new CountDownLatch(1);

        public DistributedLock() throws Exception {
            this.zooKeeper = ZookeeperFactory.getInstance();
            this.sessionTimeout = ZookeeperFactory.getSessionTimeout();
        }

        public boolean lock() {
            try {
                // Create a new ephemeral sequential node; its auto-generated path is the lock id
                lockId = zooKeeper.create(root + "/", data, ZooDefs.Ids.OPEN_ACL_UNSAFE,
                        CreateMode.EPHEMERAL_SEQUENTIAL);
                System.out.println(Thread.currentThread().getName()
                        + "-> created lock node [" + lockId + "], competing for the lock");
                // Get all children of the root node
                List<String> children = zooKeeper.getChildren(root, true);
                // Sort them via a TreeSet, prefixing each element with the parent path
                SortedSet<String> sortedSet = new TreeSet<>();
                for (String s : children) {
                    sortedSet.add(root + "/" + s);
                }
                // If the smallest node equals lockId, the lock is acquired
                if (sortedSet.first().equals(lockId)) {
                    System.out.println(Thread.currentThread().getName()
                            + "-> acquired lock, node [" + lockId + "]");
                    return true;
                }
                // Otherwise find the node just before lockId: preLockId is the lock currently held
                SortedSet<String> lessThanLockId = sortedSet.headSet(lockId);
                if (!lessThanLockId.isEmpty()) {
                    String preLockId = lessThanLockId.last();
                    // Watch the lock in use; the watcher fires when that node is deleted
                    // (delete/get/set trigger the watch event), or the session times out
                    zooKeeper.exists(preLockId, new LockWatcher(latch));
                    // latch.await(sessionTimeout, TimeUnit.MILLISECONDS);
                    latch.await();
                    System.out.println(Thread.currentThread().getName()
                            + "-> acquired lock [" + lockId + "]");
                }
                return true;
            } catch (KeeperException e) {
                e.printStackTrace();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            return false;
        }

        public boolean unlock() {
            System.out.println(Thread.currentThread().getName()
                    + "-> releasing lock [" + lockId + "]");
            try {
                zooKeeper.delete(lockId, -1);
                System.out.println("node [" + lockId + "] deleted");
                return true;
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (KeeperException e) {
                e.printStackTrace();
            }
            return false;
        }

        public static void main(String[] args) {
            CountDownLatch latch = new CountDownLatch(10);
            Random random = new Random();
            for (int i = 0; i < 10; i++) {
                new Thread(() -> {
                    DistributedLock lock = null;
                    try {
                        lock = new DistributedLock();
                        latch.countDown();
                        latch.await();
                        lock.lock();
                        Thread.sleep(random.nextInt(3000));
                    } catch (Exception e) {
                        e.printStackTrace();
                    } finally {
                        if (lock != null) {
                            lock.unlock();
                        }
                    }
                }).start();
            }
        }
    }
    /*
     * Final result: acquire lock -> release lock happens strictly in order
     *
     Thread-3-> created lock node [/LOCKS/0000000090], competing for the lock
     Thread-2-> created lock node [/LOCKS/0000000091], competing for the lock
     Thread-7-> created lock node [/LOCKS/0000000093], competing for the lock
     Thread-6-> created lock node [/LOCKS/0000000094], competing for the lock
     Thread-0-> created lock node [/LOCKS/0000000092], competing for the lock
     Thread-4-> created lock node [/LOCKS/0000000095], competing for the lock
     Thread-1-> created lock node [/LOCKS/0000000096], competing for the lock
     Thread-8-> created lock node [/LOCKS/0000000097], competing for the lock
     Thread-9-> created lock node [/LOCKS/0000000098], competing for the lock
     Thread-5-> created lock node [/LOCKS/0000000099], competing for the lock
     Thread-3-> acquired lock, node [/LOCKS/0000000090]
     Thread-3-> releasing lock [/LOCKS/0000000090]
     node [/LOCKS/0000000090] deleted
     Thread-2-> acquired lock [/LOCKS/0000000091]
     Thread-2-> releasing lock [/LOCKS/0000000091]
     node [/LOCKS/0000000091] deleted
     Thread-0-> acquired lock [/LOCKS/0000000092]
     Thread-0-> releasing lock [/LOCKS/0000000092]
     node [/LOCKS/0000000092] deleted
     Thread-7-> acquired lock [/LOCKS/0000000093]
     Thread-7-> releasing lock [/LOCKS/0000000093]
     node [/LOCKS/0000000093] deleted
     Thread-6-> acquired lock [/LOCKS/0000000094]
     Thread-6-> releasing lock [/LOCKS/0000000094]
     node [/LOCKS/0000000094] deleted
     Thread-4-> acquired lock [/LOCKS/0000000095]
     Thread-4-> releasing lock [/LOCKS/0000000095]
     node [/LOCKS/0000000095] deleted
     Thread-1-> acquired lock [/LOCKS/0000000096]
     Thread-1-> releasing lock [/LOCKS/0000000096]
     node [/LOCKS/0000000096] deleted
     Thread-8-> acquired lock [/LOCKS/0000000097]
     Thread-8-> releasing lock [/LOCKS/0000000097]
     node [/LOCKS/0000000097] deleted
     Thread-9-> acquired lock [/LOCKS/0000000098]
     Thread-9-> releasing lock [/LOCKS/0000000098]
     node [/LOCKS/0000000098] deleted
     Thread-5-> acquired lock [/LOCKS/0000000099]
     Thread-5-> releasing lock [/LOCKS/0000000099]
     node [/LOCKS/0000000099] deleted
     */

     

    Which of these two distributed lock implementations is more efficient?

    With the redis distributed lock, you have to keep actively retrying to acquire the lock, which costs performance.

    With the zk distributed lock, if you can't get the lock you just register a watcher; there is no need to keep actively retrying, so the performance overhead is smaller.

    Another point: if the redis client holding the lock hits a bug or dies, the lock can only be released after the expiry time; with zk, because an ephemeral znode is created, the znode disappears as soon as the client dies, and the lock is released automatically.

    The redis distributed lock is rather cumbersome: iterating over instances to lock, computing times, and so on. The zk distributed lock has clear semantics and a simple implementation.

    So, without analyzing too much more, just on these two points: in my experience the zk distributed lock is more reliable than the redis one, and its model is simple and easy to use.

    Distributed transactions

    1. Two-phase commit (2PC)

      Two-phase commit (2PC) introduces a coordinator to coordinate the behavior of the participants and ultimately decide whether those participants really execute the transaction.

      1. How it runs

      1.1 Prepare phase: the coordinator asks each participant whether its part of the transaction executed successfully, and the participants send back the execution results.

      1.2 Commit phase: if the transaction executed successfully on every participant, the coordinator notifies the participants to commit; otherwise, the coordinator notifies them to roll back.

      Note that in the prepare phase the participants have executed the transaction but not yet committed; only after receiving the coordinator's notification in the commit phase do they commit or roll back.

      2. Problems

      2.1 Synchronous blocking: while waiting for the other participants to respond, all transaction participants are blocked and cannot perform other operations.

      2.2 Single point of failure: the coordinator plays a very large role in 2PC, and its failure has a big impact; in particular, if it fails in phase two, all participants keep waiting and cannot complete other operations.

      2.3 Data inconsistency: in phase two, if the coordinator has sent only some of the Commit messages when the network fails, only some participants receive Commit and thus only some participants commit, leaving the system's data inconsistent.

      2.4 Too conservative: the failure of any single node causes the whole transaction to fail; there is no solid fault-tolerance mechanism.
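      A toy sketch of the coordinator's two phases (the Participant interface is hypothetical; real 2PC also needs durable logging and timeout handling):

      import java.util.List;

      public class TwoPhaseCommitCoordinator {

          // Hypothetical participant: prepare() executes the transaction without committing
          interface Participant {
              boolean prepare();
              void commit();
              void rollback();
          }

          /** Phase 1: ask everyone to prepare. Phase 2: commit iff all prepared, else roll back. */
          static boolean run(List<Participant> participants) {
              boolean allPrepared = true;
              for (Participant p : participants) {
                  if (!p.prepare()) {   // any single failure vetoes the transaction (problem 2.4)
                      allPrepared = false;
                      break;
                  }
              }
              for (Participant p : participants) {
                  if (allPrepared) {
                      p.commit();       // a coordinator crash mid-loop causes inconsistency (problem 2.3)
                  } else {
                      p.rollback();
                  }
              }
              return allPrepared;
          }
      }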

       

    2. Compensating transactions (TCC)

      TCC uses a compensation mechanism. The core idea: for every operation, register a corresponding confirm operation and a compensating (undo) operation. It has three phases (a small interface sketch follows this section):

      • The Try phase mainly checks the business system and reserves resources.

      • The Confirm phase confirms and commits against the business system. When Try succeeded and Confirm begins, Confirm is assumed not to fail: as long as Try succeeds, Confirm must succeed.

      • The Cancel phase cancels the business and releases the reserved resources when execution errored and a rollback is needed.

      For example, suppose Bob wants to transfer money to Smith. The idea is roughly: we have one local method that calls, in order:

      1. First, in the Try phase, call the remote interface to freeze Smith's and Bob's money.

      2. In the Confirm phase, execute the remote transfer operation, and unfreeze after the transfer succeeds.

      3. If step 2 succeeds, the transfer succeeds; if step 2 fails, call the unfreeze method (Cancel) that corresponds to the remote freeze interface.

      Advantage: compared with 2PC, the implementation and flow are somewhat simpler, but data consistency is also somewhat worse than with 2PC.

      Disadvantage: the drawbacks are fairly obvious: steps 2 and 3 can both fail. TCC is an application-level compensation scheme, so programmers must write a lot of compensation code, and in some scenarios certain business flows are hard to define and handle with TCC.
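      A minimal sketch of the three operations as a Java interface for the transfer example above (all names are hypothetical):

      // Hypothetical TCC contract for the Bob -> Smith transfer example
      public interface TransferTccAction {

          /** Try: check balances and freeze the amount on both accounts (resource reservation). */
          boolean tryFreeze(String fromAccount, String toAccount, long amountCents);

          /** Confirm: actually move the frozen amount; assumed not to fail once Try succeeded. */
          void confirmTransfer(String fromAccount, String toAccount, long amountCents);

          /** Cancel: release the frozen amounts when execution failed and must roll back. */
          void cancelFreeze(String fromAccount, String toAccount, long amountCents);
      }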

       

    3. Local message table (asynchronous assurance)

      The local message table and the business data table live in the same database, so a local transaction can guarantee that the operations on the two tables together satisfy transaction properties, and a message queue is used to achieve eventual consistency (a sketch of step 1 follows this list).

      1. After one side of the distributed operation finishes writing its business data, it writes a message into the local message table; the local transaction guarantees that this message really lands in the local message table.

      2. The messages in the local message table are then forwarded to a message queue such as Kafka; if forwarding succeeds, the message is deleted from the local message table, otherwise forwarding keeps being retried.

      3. The other side of the distributed operation reads messages from the message queue and executes the operations they contain.

      Advantage: a very classic implementation that avoids distributed transactions and achieves eventual consistency.

      Disadvantage: the message table gets coupled into the business system; without a well-packaged solution there is a lot of messy work to handle.
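      A rough sketch of step 1, assuming hypothetical orders and local_message tables: the business row and the message row are written in one local transaction:

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.SQLException;
      import javax.sql.DataSource;

      public class LocalMessageTableWriter {

          private final DataSource dataSource; // assumed configured elsewhere

          public LocalMessageTableWriter(DataSource dataSource) {
              this.dataSource = dataSource;
          }

          public void createOrder(long orderId, String payload) throws SQLException {
              try (Connection conn = dataSource.getConnection()) {
                  conn.setAutoCommit(false);
                  try (PreparedStatement order = conn.prepareStatement(
                               "INSERT INTO orders (id, payload) VALUES (?, ?)");
                       PreparedStatement message = conn.prepareStatement(
                               "INSERT INTO local_message (order_id, status) VALUES (?, 'PENDING')")) {
                      order.setLong(1, orderId);
                      order.setString(2, payload);
                      order.executeUpdate();
                      message.setLong(1, orderId);
                      message.executeUpdate();
                      conn.commit(); // both rows or neither: the local transaction guarantees it
                  } catch (SQLException e) {
                      conn.rollback();
                      throw e;
                  }
              }
              // A separate relay job polls local_message for PENDING rows, forwards them
              // to Kafka, and deletes (or marks) them only after the send succeeds.
          }
      }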

       

    4. Reliable-message eventual consistency (MQ transactional messages)

      The idea here: skip the local message table entirely and implement the transaction directly on the MQ. Alibaba's RocketMQ, for example, supports transactional messages (a producer sketch follows).

      Roughly:

      1) System A first sends a prepared message to the mq; if sending the prepared message fails, simply cancel the operation and do nothing further.

      2) If the message is sent successfully, execute the local transaction: if it succeeds, tell the mq to send the confirm message; if it fails, tell the mq to roll the message back.

      3) If the confirm message was sent, system B receives it and then executes its own local transaction.

      4) The mq periodically polls all prepared messages and calls back into your interface, asking: did the local transaction for this message fail, since no confirm message was sent? Should it keep retrying or roll back? Generally you check the database here to see whether the earlier local transaction executed; if it rolled back, roll the message back too. This protects against the case where the local transaction succeeded but sending the confirm message failed.

      5) In this scheme, what if system B's transaction fails? Retry, automatically and repeatedly, until it succeeds. If it really cannot succeed, then for important money-related flows roll back, for example: after system B rolls back locally, find a way to notify system A to roll back as well, or send an alert and have the rollback and compensation done manually.

      This scheme is fairly suitable, and it is more or less how large Chinese internet companies do it: either use RocketMQ's support, or wrap a similar layer of logic yourself on top of something like ActiveMQ or RabbitMQ; either way, the idea is the same.

      Advantage: achieves eventual consistency without depending on the local database transaction for coordination.

      Disadvantage: hard to implement; most mainstream MQs do not support it, and part of RocketMQ's transactional message code was not open-sourced.
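      A rough producer sketch with the Apache RocketMQ client (assuming a 4.x client; the group, topic, address, and local-transaction logic are placeholders):

      import org.apache.rocketmq.client.producer.LocalTransactionState;
      import org.apache.rocketmq.client.producer.TransactionListener;
      import org.apache.rocketmq.client.producer.TransactionMQProducer;
      import org.apache.rocketmq.common.message.Message;
      import org.apache.rocketmq.common.message.MessageExt;

      public class TransactionalProducerSketch {

          public static void main(String[] args) throws Exception {
              TransactionMQProducer producer = new TransactionMQProducer("demo_tx_group");
              producer.setNamesrvAddr("127.0.0.1:9876"); // placeholder name server address
              producer.setTransactionListener(new TransactionListener() {
                  @Override
                  public LocalTransactionState executeLocalTransaction(Message msg, Object arg) {
                      // Called after the prepared (half) message is stored: run the local transaction
                      return doLocalTransaction()
                              ? LocalTransactionState.COMMIT_MESSAGE
                              : LocalTransactionState.ROLLBACK_MESSAGE;
                  }

                  @Override
                  public LocalTransactionState checkLocalTransaction(MessageExt msg) {
                      // The broker's periodic check-back from step 4: consult the DB to decide
                      return localTransactionCommitted(msg)
                              ? LocalTransactionState.COMMIT_MESSAGE
                              : LocalTransactionState.ROLLBACK_MESSAGE;
                  }
              });
              producer.start();
              producer.sendMessageInTransaction(
                      new Message("demo_topic", "order created".getBytes()), null);
          }

          private static boolean doLocalTransaction() { return true; }                      // placeholder
          private static boolean localTransactionCommitted(MessageExt msg) { return true; } // placeholder
      }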

       

    5. Best-effort notification

      The rough idea of this scheme:

      1) After system A's local transaction finishes, it sends a message to the MQ.

      2) A dedicated best-effort notification service consumes the MQ, records the message in a database (or puts it in an in-memory queue), and then calls system B's interface.

      3) If system B executes successfully, fine. If system B fails, the best-effort notification service periodically retries calling system B, up to N times, and finally gives up if it still fails.

       

    6. Alibaba GTS (Global Transaction Service)

      See the official site for details.

    Summary on distributed transactions

    Using any one of these distributed transaction schemes makes that piece of code roughly ten times more complex. In many cases, when system A calls systems B, C, and D, we may not do a distributed transaction at all; if a call errors, we print the exception log.

    Each month there are only a handful of bugs, many of them functional or user-experience bugs; bugs that genuinely touch the data layer number maybe two or three a month. If, to make the system automatically guarantee the data can never be wrong, you add dozens of distributed transactions, the code becomes far too complex, performance far too poor, and system throughput drops sharply.

    For 99% of distributed interface calls, don't do distributed transactions: just monitor (send email, send SMS), log (complete logs once something goes wrong), locate and investigate quickly afterwards, work out a fix, and repair the data.

    Every month, or every few months, a small amount of data that went wrong due to code bugs gets repaired manually: write a quick one-off program, maybe backfill some data, delete some data, or fix some field values.

    That costs tens to hundreds of times less than building 50 distributed transactions.

    It is a trade-off. Choosing a distributed transaction always has a cost: the code gets complex, development takes a long time, performance and throughput drop, and the system becomes more complex, more fragile, and therefore actually more bug-prone. The benefit: done well, TCC or the reliable-message eventual consistency scheme really can guarantee 100% that that piece of data never goes wrong.

    For the 1%, 0.1%, 0.01% of the business that involves funds, trades, and orders, we use a distributed transaction scheme to guarantee correctness; for member points, coupons, and product information, don't bother.

    Distributed sessions

    How are distributed sessions implemented?

    1. Tomcat + Redis

      The code that uses the session stays exactly as before, based on tomcat's native session support; then you use something called Tomcat RedisSessionManager so that all the tomcats we deploy store their session data in redis.

      In tomcat's configuration file, configure:

      <Valve className="com.orangefunction.tomcat.redissessions.RedisSessionHandlerValve" />
      <Manager className="com.orangefunction.tomcat.redissessions.RedisSessionManager"
               host="{redis.host}"
               port="{redis.port}"
               database="{redis.dbnum}"
               maxInactiveInterval="60"/>

       

      A configuration like the one above is all you need: you use RedisSessionManager and specify the redis host and port, and that's it.

      <Valve className="com.orangefunction.tomcat.redissessions.RedisSessionHandlerValve" />
      <Manager className="com.orangefunction.tomcat.redissessions.RedisSessionManager"
               sentinelMaster="mymaster"
               sentinels=":26379,:26379,:26379"
               maxInactiveInterval="60"/>

       

      You can also use the form above to store session data in a redis high-availability cluster backed by redis sentinel; that works fine too.

    2. Spring Session + Redis

      The approach above couples distributed session handling tightly into tomcat. If you migrate the web container to jetty, do you really want to reconfigure all of jetty?

      The tomcat + redis way works, but it depends heavily on the web container and the code does not move easily to another container, especially if you change your technology stack, say to spring cloud or spring boot. Worth thinking over.

      So the better option nowadays is the one-stop Java solution: spring. Spring already covers most of the frameworks we need (spring cloud for microservices, spring boot for scaffolding), so using spring session is a good choice.

      pom.xml

      <dependency>
          <groupId>org.springframework.session</groupId>
          <artifactId>spring-session-data-redis</artifactId>
          <version>1.2.1.RELEASE</version>
      </dependency>

      <dependency>
          <groupId>redis.clients</groupId>
          <artifactId>jedis</artifactId>
          <version>2.8.1</version>
      </dependency>

       

      In the spring configuration file:

      <bean id="redisHttpSessionConfiguration"
            class="org.springframework.session.data.redis.config.annotation.web.http.RedisHttpSessionConfiguration">
          <property name="maxInactiveIntervalInSeconds" value="600"/>
      </bean>

      <bean id="jedisPoolConfig" class="redis.clients.jedis.JedisPoolConfig">
          <property name="maxTotal" value="100" />
          <property name="maxIdle" value="10" />
      </bean>

      <bean id="jedisConnectionFactory"
            class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory" destroy-method="destroy">
          <property name="hostName" value="${redis_hostname}"/>
          <property name="port" value="${redis_port}"/>
          <property name="password" value="${redis_pwd}" />
          <property name="timeout" value="3000"/>
          <property name="usePool" value="true"/>
          <property name="poolConfig" ref="jedisPoolConfig"/>
      </bean>

       

      web.xml

      <filter>
          <filter-name>springSessionRepositoryFilter</filter-name>
          <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
      </filter>

      <filter-mapping>
          <filter-name>springSessionRepositoryFilter</filter-name>
          <url-pattern>/*</url-pattern>
      </filter-mapping>

       

      Sample code:

      @Controller
      @RequestMapping("/test")
      public class TestController {

          @RequestMapping("/putIntoSession")
          @ResponseBody
          public String putIntoSession(HttpServletRequest request, String username) {
              request.getSession().setAttribute("name", "leo");
              return "ok";
          }

          @RequestMapping("/getFromSession")
          @ResponseBody
          public String getFromSession(HttpServletRequest request, Model model) {
              String name = (String) request.getSession().getAttribute("name");
              return name;
          }
      }

       

      The code above works: spring session is configured to store session data in redis, and a spring session filter is configured, so all session-related operations go through spring session. In the code you then use the native session API, and underneath you are actually reading data from redis via spring session.

      There are many, many ways to implement distributed sessions; these are just the two most common. tomcat + redis was common in earlier years; in recent years, instead of coupling heavily into tomcat, it is done through spring session.

       

    3. No session at all (use JWT)

      Use a JWT (JSON Web Token) to carry the user's identity, and fetch any other information from the database or a cache. Then it does not matter which server a request is routed to.
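      A rough sketch with the jjwt library (assuming jjwt 0.9.x; the secret and claims are placeholders):

      import io.jsonwebtoken.Jwts;
      import io.jsonwebtoken.SignatureAlgorithm;

      public class JwtSketch {

          // Placeholder signing key; in practice use a long random secret kept out of source
          private static final byte[] SECRET = "a-long-demo-secret-key-please-change".getBytes();

          public static void main(String[] args) {
              // Issue a token carrying the user identity; no server-side session is kept
              String token = Jwts.builder()
                      .setSubject("user-42")
                      .claim("role", "member")
                      .signWith(SignatureAlgorithm.HS256, SECRET)
                      .compact();

              // Any server holding the secret can verify the signature and recover the identity
              String subject = Jwts.parser()
                      .setSigningKey(SECRET)
                      .parseClaimsJws(token)
                      .getBody()
                      .getSubject();
              System.out.println(subject); // prints user-42
          }
      }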

    How to design a high-concurrency system?

    So-called high concurrency: to understand this question you have to start from its root. Why does high concurrency exist at all?

    Put plainly, it's simple: in the beginning, systems all just connect to the database, but the database is basically at its limit once it has to support two or three thousand concurrent requests per second. Hence the story of many companies that start out with fairly basic technology: the business grows too fast, and at some point the system can't take the pressure and goes down.

    Of course it goes down; why wouldn't it? If the database suddenly has to carry 5000, 8000, or even tens of thousands of concurrent requests per second, it will crash, because mysql simply cannot withstand that much concurrency.

    So why is high concurrency such a big deal? Because more and more people are on the internet, and many apps, websites, and systems carry high-concurrency requests; a peak of several thousand concurrent requests per second is quite normal, and during something like Singles' Day, tens or hundreds of thousands per second are possible.

    High-concurrency architecture:

    (1) System splitting: split one system into multiple subsystems, wired together with dubbo. Then each subsystem gets its own database: where there used to be one database there are now several, which also helps absorb high concurrency.

    (2) Caching: you must use a cache. Most high-concurrency scenarios are read-heavy and write-light, so you can write data to both the database and the cache, and serve the bulk of the reads from the cache; redis easily handles tens of thousands of concurrent requests per second on a single machine, no problem. So look at the read scenarios in your project that carry the main request volume and think about how to use a cache to absorb the high concurrency.

    (3) MQ: you must use an MQ. There will still be high-concurrency write scenarios, for example a business operation that hits the database dozens of times (insert, update, delete, over and over); under high concurrency that will definitely kill your system. You cannot use redis to carry the writes: it is a cache, data can be evicted by LRU at any moment, the data model is very simple, and there is no transaction support, so where mysql is needed, mysql must stay. So what do you do? Use an MQ: pour the mass of write requests into the MQ, let them queue up, and have the downstream system consume and write them slowly, keeping the writes within what mysql can bear. So think about how, in the scenarios in your project with complex write logic, you can use an MQ to write asynchronously and improve concurrency; a single MQ machine carrying tens of thousands of concurrent requests is also fine, as mentioned before.

    (4) Splitting databases and tables: in the end the database layer may still have to face high concurrency. Fine: split one database into multiple databases to carry more concurrency, and split one table into multiple tables, keeping each table's data volume small to make the sql run faster.

    (5) Read/write separation: most of the time the database is also read-heavy and write-light, and there is no need for all requests to hit one instance. Set up a master-slave architecture: the master takes the writes, the slaves serve the reads, i.e. read/write separation. When read traffic is too high, add more slaves.

    (6) Elasticsearch: consider using es. es is distributed and can scale out at will; being distributed, it naturally supports high concurrency, since you can always add machines to carry more load. Simpler query and statistics workloads can be moved onto es, and full-text search workloads can be carried by es as well.

    The six points above are essentially the things a high-concurrency system is bound to do. Think them over carefully against the material covered earlier, so that when the time comes you can lay this topic out systematically and discuss the issues to watch out for in each part, showing that you have some real accumulation here.

    Frankly, what really distinguishes someone is not understanding a few technologies or roughly knowing what a high-concurrency system should look like. In a genuinely complex business system, doing high concurrency is tens to a hundred times more complex than this sketch: which data needs sharding and which does not, how to join sharded and unsharded tables, which data to put into the cache so the high-concurrency requests can be absorbed. You have to analyze the complex business system first and then work the high-concurrency architectural changes in step by step. That process is enormously complex, and once you have been through it, and done it well, you are very sought after in this market.

    In fact, what most companies really value is not mastery of some basic high-concurrency architecture knowledge or a few techniques from such an architecture; that alone makes you second-tier talent here. Someone who has, on a complex distributed system of hundreds of thousands of lines of code, step by step architected, designed, and put into practice a high-concurrency architecture: that experience is invaluable.
