Original address: Liang Guizhao’s blog
Blog address: http://blog.720ui.com
Welcome to follow the official account "Server Thinking": a group of like-minded people, growing together, striving together, and breaking through the limits of our understanding.
I haven't written much for a while, so today I am writing down my thoughts on API design. Why this topic? First, I benefited a great deal from the article "Ali Researcher Gu Pu: Reflections on API Design Best Practices". I reposted it two days ago and it drew interest from readers, so I felt I should organize my own thoughts into an article to share and exchange ideas with everyone. Second, I figured I could finish this topic within half an hour and still turn off the lights and get to bed before 1 o'clock, haha.
Now, let's discuss API design together. I will raise a few points; discussion is welcome.
1. A well-defined specification is more than half the battle
Usually, a specification is a standard agreed upon by everyone; if everyone abides by it, the cost of communication naturally drops dramatically. For example, many teams borrow Alibaba's specification and define several domain models in their own business: VO, BO, DO, and DTO. A DO (Data Object) corresponds one-to-one with a database table and carries data-source objects upward through the DAO layer. A DTO (Data Transfer Object) is a remote-call object, the domain model exposed by RPC services. A BO (Business Object) is the object that encapsulates business logic in the business-logic layer; it is usually a composite object that aggregates multiple data sources. Finally, a VO (View Object) is the object passed around the request-handling layer; after conversion by the Spring framework, it is often a JSON object.
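As a minimal sketch of these four layers (the class and field names here are hypothetical, invented for illustration), a "user" domain might look like:

```java
// DO: maps one-to-one to the user table; produced by the DAO layer
class UserDO {
    Long id;
    String name;
    Integer status; // raw status code as stored in the database
}

// DTO: the remote-call object exposed by an RPC service
class UserDTO {
    Long id;
    String name;
}

// BO: aggregates multiple data sources and carries business logic
class UserBO {
    UserDO user;
    java.util.List<Long> orderIds; // e.g. joined in from an order service

    boolean isActive() {
        return user != null && user.status != null && user.status == 1;
    }
}

// VO: the object handed to the request layer, typically serialized to JSON
class UserVO {
    Long id;
    String displayName;

    static UserVO from(UserBO bo) {
        UserVO vo = new UserVO();
        vo.id = bo.user.id;
        vo.displayName = bo.user.name;
        return vo;
    }
}
```

The point is the direction of flow: DO stays near the database, BO composes, and only VO/DTO cross the service boundary.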
2. A discussion of API compatibility
APIs evolve constantly, so we need to accommodate change to some degree. A RESTful API should stay compatible with previous versions as far as possible. In real business development, however, requirements keep iterating, and sometimes the existing API simply cannot be adapted to keep serving the old version. If the server forcibly upgrades its API at that point, existing client features break. The web side is deployed on the server, so it can easily be upgraded to match the new server API; but clients such as Android, iOS, and PC applications run on users' machines, so the shipped product cannot adapt to the new server API, features fail, and users must upgrade to the latest version to use the product normally. A practical way to solve this incompatibility in RESTful API design is to use version numbers: normally we keep the version number in the URL and support multiple versions at the same time.
[GET] /v1/users/{user_id} // v1 API for querying users
[GET] /v2/users/{user_id} // v2 API for querying users
Now, without changing the v1 API for querying users, we can add a v2 API to meet the new business requirements; new client features simply request the new server API address. Although the server can support multiple versions at once, maintaining too many of them is a heavy burden, because the server has to maintain multiple sets of code. The common practice is therefore not to maintain every compatible version, but only the most recent ones, for example the latest three. After a period of time, when the vast majority of users have upgraded, the rarely used old API versions are retired, and users on very old versions of the product are forced to upgrade. Note that "the v1 API does not change" mainly means it appears unchanged to the client caller; if the business has changed substantially, the server-side developer may need to use the adapter pattern to map old-version requests onto the new API.
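The adapter idea for keeping v1 stable can be sketched as follows; the service names and the field mapping are invented for illustration, not taken from a real system:

```java
// Hypothetical DTOs for the two API versions
class UserV1 {
    String name; // v1 exposed a single combined name field
}

class UserV2 {
    String firstName;
    String lastName; // v2 split the name into two fields
}

// The current (v2) service contract
interface UserServiceV2 {
    UserV2 getUser(long userId);
}

// Adapter: preserves the v1 contract while delegating to the v2 implementation
class UserServiceV1Adapter {
    private final UserServiceV2 v2;

    UserServiceV1Adapter(UserServiceV2 v2) {
        this.v2 = v2;
    }

    UserV1 getUser(long userId) {
        UserV2 u = v2.getUser(userId);
        UserV1 legacy = new UserV1();
        // Map the new fields back into the old shape for v1 callers
        legacy.name = u.firstName + " " + u.lastName;
        return legacy;
    }
}
```

Old clients keep calling the v1 shape while only the v2 code path is actively maintained.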
Interestingly, GraphQL offers a different approach. To solve the problem of API explosion, and to aggregate multiple HTTP requests into one, GraphQL exposes only a single endpoint and allows multiple queries in a single request. With a GraphQL API the front end can call the service much more flexibly, for example selecting and loading only the fields a given page needs to render, so the client fetches exactly the fields the server provides, on demand. GraphQL can add new features by adding new types and new fields on those types, without causing compatibility problems.
3. Provide a clear mental model
By a mental model I mean an abstraction of the problem domain: a shared understanding of what the domain model does, a mapping from the model to reality, and clearly drawn model boundaries. One of the values of a domain model is precisely to unify thinking and clarify boundaries. If the team does not share a clear mental model, there is no unified understanding of the API, and the kind of real-world problems shown in the figure below are very likely to occur.
4. Shield the business implementation through abstraction
I think a good API is abstract, so it should shield the business implementation as much as possible. So how should we understand abstraction? Consider the design of java.sql.Driver: java.sql.Driver is a standardized interface, and com.mysql.jdbc.Driver is mysql-connector-java-xxx.jar's implementation of that specification. Because callers depend only on the standard interface, the cost of switching to Oracle is very low.
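To make the standard-interface idea concrete, here is a toy analogue of the java.sql.Driver / DriverManager split. These are not the real JDBC classes, just a miniature of the same pattern:

```java
// A standardized interface, analogous to java.sql.Driver
interface Driver {
    boolean acceptsURL(String url);

    String connect(String url);
}

// Vendor implementations, analogous to com.mysql.jdbc.Driver etc.
class MySqlDriver implements Driver {
    public boolean acceptsURL(String url) {
        return url.startsWith("jdbc:mysql:");
    }

    public String connect(String url) {
        return "mysql-connection";
    }
}

class OracleDriver implements Driver {
    public boolean acceptsURL(String url) {
        return url.startsWith("jdbc:oracle:");
    }

    public String connect(String url) {
        return "oracle-connection";
    }
}

// Analogous to DriverManager: the caller depends only on the interface,
// so switching vendors is a URL (configuration) change, not a code change.
class DriverManagerLite {
    static final java.util.List<Driver> DRIVERS =
            java.util.List.of(new MySqlDriver(), new OracleDriver());

    static String connect(String url) {
        for (Driver d : DRIVERS) {
            if (d.acceptsURL(url)) {
                return d.connect(url);
            }
        }
        throw new IllegalArgumentException("No suitable driver for " + url);
    }
}
```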
Under normal circumstances, we provide services externally through an API whose logic is fixed, in other words generic. But where do we go when we meet scenarios whose business logic is similar, that is, the core backbone is the same but the details differ slightly? In many cases we end up providing multiple API interfaces for different business parties. In fact, we can do this more elegantly with SPI extension points. What is SPI? SPI stands for Service Provider Interface, a dynamic discovery mechanism that can find the implementation class of an extension point while the program is running; when the API is called, it dynamically loads and invokes the concrete SPI implementation.
Does this remind you of the template method pattern? Its core idea is to define the skeleton and defer the implementation: a process framework is fixed, while the concrete implementation of certain steps is delayed to subclasses. This way of thinking also gives us a solid theoretical basis when putting microservices into practice.
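A minimal template-method sketch, using a hypothetical refund review flow as the fixed skeleton (the class names and rules are invented for illustration):

```java
// Template method: the skeleton is final, one step is deferred to subclasses.
abstract class RefundProcess {
    // The fixed skeleton: validate, apply the platform-specific rule, then settle
    final String process(String reason) {
        if (reason == null || reason.isEmpty()) {
            return "REJECTED"; // invalid input never reaches the business rule
        }
        if (!isReasonAccepted(reason)) {
            return "REVIEW"; // unrecognized reasons go to manual review
        }
        return "REFUNDED";
    }

    // The step whose implementation is delayed to subclasses
    protected abstract boolean isReasonAccepted(String reason);
}

class LenientPlatform extends RefundProcess {
    protected boolean isReasonAccepted(String reason) {
        return true; // this platform auto-approves any stated reason
    }
}

class StrictPlatform extends RefundProcess {
    protected boolean isReasonAccepted(String reason) {
        return reason.equals("damaged"); // only damaged goods are auto-approved
    }
}
```

Each platform changes one step, while the process skeleton stays in one place.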
Now let's look at a case: refund without return in e-commerce. This situation is very common: after placing and paying for an order, a user may apply for a refund for all sorts of reasons. Since no goods need to be returned, the user only has to apply for a refund and fill in the reason, and the seller then reviews it. Because the available refund reasons may differ from platform to platform, we can consider implementing them through SPI extension points.
public class TaskManager {
    private static final TaskManager instance = new TaskManager();
    private final Map<Integer, Task> taskMap = new HashMap<>();

    private TaskManager() {
        // Single-chat message task
        taskMap.put(EventEnum.CHAT_REQ.getValue(), new ChatTask());
        // Group-chat message task
        taskMap.put(EventEnum.GROUP_CHAT_REQ.getValue(), new GroupChatTask());
        // Heartbeat task
        taskMap.put(EventEnum.HEART_BEAT_REQ.getValue(), new HeatBeatTask());
    }
}
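In standard Java, SPI discovery is typically done with java.util.ServiceLoader. A sketch for the refund-reason scenario might look like this; the provider interface is hypothetical, and each platform's jar would register its implementation class in a META-INF/services file, which is not shown here:

```java
import java.util.List;
import java.util.ServiceLoader;

// SPI extension point: each platform ships its own implementation jar.
interface RefundReasonProvider {
    String platform();

    List<String> refundReasons();
}

class RefundReasonService {
    // Discover implementations at runtime. Each implementing jar must list
    // its class in META-INF/services/RefundReasonProvider (omitted here).
    static List<String> reasonsFor(String platform) {
        for (RefundReasonProvider p : ServiceLoader.load(RefundReasonProvider.class)) {
            if (p.platform().equals(platform)) {
                return p.refundReasons();
            }
        }
        return List.of(); // fall back to an empty list when no provider matches
    }
}
```

The API layer stays fixed; adding a platform means shipping a new provider jar, not changing the caller-facing interface.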
Another design for shielding internal complexity is the facade interface, which encapsulates and integrates the interfaces of multiple services and exposes one simple calling interface for clients. The advantage is that the client no longer needs to know about so many service interfaces and only calls the facade. The drawbacks are equally obvious: it adds business complexity on the server side, and the interface's performance and reusability are not high. We should therefore adapt to the situation, keep responsibilities as single as possible, and do "Lego-style" assembly on the client side. For products that need SEO, that is, to be indexed by search engines such as Baidu, first-screen pages can be rendered on the server to generate HTML so they can be indexed, while pages beyond the first screen can be rendered by the client calling the server's RESTful API.
In addition, as microservices have become popular we have more and more services, many of them small, with far more cross-service calls, so the microservice architecture makes this problem more common. To address it we can introduce an "aggregation service", a composite service that combines data from multiple microservices: some information is integrated by the aggregation service and then returned to the caller. Note that an aggregation service may have its own cache and database. The idea appears everywhere; in a Serverless architecture, for example, we can use AWS Lambda as the compute engine behind the service. AWS Lambda is a Function as a Service (FaaS) offering: we write functions that run in the cloud, and such a function can assemble existing capabilities to perform service aggregation.
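The facade/aggregation idea can be sketched like this, with three hypothetical fine-grained services hidden behind one coarse call (all names are invented for illustration):

```java
// Facade / aggregation sketch: one coarse interface in front of three services.
class UserProfileFacade {
    interface UserService {
        String name(long id);
    }

    interface OrderService {
        int orderCount(long id);
    }

    interface PointService {
        int points(long id);
    }

    private final UserService users;
    private final OrderService orders;
    private final PointService points;

    UserProfileFacade(UserService u, OrderService o, PointService p) {
        this.users = u;
        this.orders = o;
        this.points = p;
    }

    // The client makes one call instead of talking to three services
    String profileSummary(long id) {
        return users.name(id)
                + " | orders=" + orders.orderCount(id)
                + " | points=" + points.points(id);
    }
}
```

In a real system the three dependencies would be remote clients, and the facade (or aggregation service) could add its own cache in front of them.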
5. The performance considerations behind the API
We need to consider how various combinations of parameter fields can lead to database performance problems. Sometimes we expose too many fields for external use, so queries arrive for which the database has no corresponding index and a full table scan results. This is especially common in query scenarios. We can therefore expose only indexed field combinations for external calls, or, as in the following case, require the caller to supply both taskId and caseId, so that our database uses its indexes properly and the service provider's performance is protected.
Result<Void> agree(Long taskId, Long caseId, Configger configger);
At the same time, for APIs such as report generation, batch operations, and cold-data queries, we should consider providing asynchronous capabilities.
In addition, although GraphQL aggregates multiple HTTP requests into one, the schema is resolved recursively, layer by layer, to fetch all the data. For example, a paged query that also counts the total number of entries can turn what used to be a single database query into N+1 queries. Written carelessly, this causes serious performance problems, so we need to pay special attention during design.
6. Exception responses and error mechanisms
There has been plenty of controversy in the industry over whether an RPC API should throw exceptions or return error codes. The "Alibaba Java Development Manual" recommends that cross-application RPC calls prefer a Result object with an isSuccess() method, an "error code", and a "short error message". The reasons for using a Result as the return value of an RPC method are: 1) with the throw-exception style, if the caller does not catch, a runtime error occurs; 2) without stack information, a new custom exception carrying only the thrower's own understanding of the error does not help the caller much, and with stack information, the cost of serializing and transmitting it becomes a problem when call errors are frequent. I support this practice as well.
public Result<XxxDTO> getXxx(String param) {
    try {
        // ...
        return Result.create(xxxDTO);
    } catch (BizException e) {
        log.error("...", e);
        return Result.createErrorResult(e.getErrorCode(), e.getErrorInfo(), true);
    }
}
In Web API design we use @ControllerAdvice to wrap error messages uniformly. In the complex call chains of microservices, tracking down and locating problems is harder than in a monolith, so this deserves special attention at design time. A good solution is for the RESTful API to return a globally structured error body whenever it responds with a non-2xx HTTP status code. The code field indicates the error code for a class of errors; in microservices it should carry a "{biz_name}/" prefix so the business system where the error occurred can be located. Consider a case: suppose an interface of the "User Center" fails because it lacks permission to access a resource. The business system can respond with "UC/AUTH_DENIED", and the details of the error can then be retrieved from the log system via the request_id field, an automatically generated UUID value.
HTTP/1.1 400 Bad Request
Content-Type: application/json

{
  "code": "INVALID_ARGUMENT",
  "message": "{error message}",
  "cause": "{cause message}",
  "request_id": "01234567-89ab-cdef-0123-456789abcdef",
  "host_id": "{server identity}",
  "server_time": "2014-01-01T12:00:00Z"
}
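A sketch of how such an error body might be assembled. In a Spring application this logic would sit inside an @ControllerAdvice class with an @ExceptionHandler method; the framework wiring is omitted here so the example stays self-contained:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.UUID;

// Builds a globally structured error body like the response shown above.
class ApiError {
    static Map<String, String> body(String bizName, String code, String message) {
        Map<String, String> b = new LinkedHashMap<>();
        // "{biz_name}/" prefix locates the business system, e.g. "UC/AUTH_DENIED"
        b.put("code", bizName + "/" + code);
        b.put("message", message);
        // Auto-generated UUID used to find the error details in the log system
        b.put("request_id", UUID.randomUUID().toString());
        return b;
    }
}
```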
7. Thoughts on API idempotence
The core of an idempotence mechanism is to guarantee resource uniqueness: repeated submissions by the client, or multiple retries by the server, produce only one result. Payment and refund scenarios, and any transaction involving money, must never deduct funds more than once. A query interface merely reads resources without changing them, so no matter how many times it is called the resources stay the same; it is naturally idempotent. A creation interface is not: calling it multiple times changes resources, so we must handle repeated submissions idempotently.

So how do we guarantee idempotence? There are many options. One is a unique index: create a unique index in the database on the resource fields we need to constrain, which prevents duplicate rows from being inserted. With sharded databases and tables, however, a unique index is not so easy to use; in that case we can first query the database, check whether the constrained fields already exist, and insert only if they do not. Note that to handle concurrency we can guarantee data uniqueness with locking mechanisms such as pessimistic and optimistic locks; a distributed lock, usually a pessimistic-lock implementation, is a frequently used solution. However, many people treat pessimistic locks, optimistic locks, and distributed locks as idempotence mechanisms in themselves, which is incorrect. We can also introduce a state machine and use state constraints and state transitions to ensure that the same business flow executes in order, thereby achieving idempotence of the data.

In fact, not every interface must be idempotent. Whether an idempotence mechanism is needed can be decided by asking whether resource uniqueness must be guaranteed; behavior logs, for example, may not need it. Alternatively, the interface itself can ignore idempotence and leave the guarantee to the business layer, for example allowing multiple copies of the data to exist but processing only the latest version.
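A minimal sketch of idempotent submission keyed on a business identifier. The class and method names are hypothetical, and in production the uniqueness guarantee would come from a database unique index on the business key rather than this in-memory map:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Idempotent refund submission: resubmitting the same order id
// returns the refund created by the first call instead of a new one.
class RefundService {
    private final Map<String, Long> refundsByOrder = new ConcurrentHashMap<>();
    private long nextId = 1;

    // The order id plays the role of the unique business key
    synchronized long submitRefund(String orderId) {
        return refundsByOrder.computeIfAbsent(orderId, k -> nextId++);
    }
}
```

However many times a client retries submitRefund for the same order, exactly one refund record is produced, which is the whole point of the mechanism.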
(The end. Please credit the author and source when reprinting.)