Introduction to Nginx

The origin of Nginx

Have you heard of Nginx? If not, you must at least have heard of its “peer” Apache! Nginx, like Apache, is a web server. Built around the REST architectural style, it uses Uniform Resource Identifiers (URIs) or Uniform Resource Locators (URLs) as the basis of communication and provides various network services over the HTTP protocol.

However, these servers were constrained by the environment in which they were originally designed, such as the user scale, network bandwidth, and product features of the time, and their positioning and development paths diverged. That is why each web server has its own distinct characteristics.

Apache has a long history and is without question the most widely used web server in the world. It has many advantages: it is stable, open source, and cross-platform. But it has also been around for a long time, and the Internet of the era in which it appeared was nothing like today’s. As a result it was designed as a heavyweight server that does not handle high concurrency well. Running tens of thousands of concurrent connections on Apache makes the server consume a great deal of memory, and the operating system’s switching between processes or threads eats up a lot of CPU, which drives down the average response speed of HTTP requests.

All of this means Apache cannot be a high-performance web server, and the lightweight, high-concurrency server Nginx came into being to fill that gap.

Igor Sysoev, an engineer from Russia, developed Nginx in C while working for Rambler Media. As a web server, Nginx provided excellent, stable service for Rambler Media.

Later, Igor Sysoev open-sourced the Nginx code and released it under a free software license.

Why did it catch on? Because:

  • Nginx uses an event-driven architecture, which enables it to support millions of TCP connections
  • Its highly modular design and free software license mean third-party modules keep appearing (this is the open source era, after all~)
  • Nginx is a cross-platform server that can run on Linux, Windows, FreeBSD, Solaris, AIX, Mac OS and other operating systems
  • These excellent design choices bring great stability

So, Nginx is on fire!

Where does Nginx come in? Nginx can be used as an HTTP server for publishing a website, and it can be used as a reverse proxy for load balancing.

About proxies

Speaking of proxies, we first need to clarify a concept: a proxy is a representative, a channel between two parties.

There are two roles here: the proxy role and the target role. The process in which one party reaches the target role through the proxy and completes some task is the proxying process. It is just like a brand store in everyday life: a customer buys a pair of shoes at an adidas store. The store is the proxy, the party being proxied is the adidas manufacturer, and the target role is the user.

Forward proxy

Before we talk about the reverse proxy, let’s take a look at the forward proxy, which is the proxy model people encounter most often. We will explain what a forward proxy is from two angles: the software side and everyday life.

In today’s network environment, if we need to visit certain foreign websites for technical reasons, you will find that they cannot be reached directly from a browser. At this point people may resort to a technique called FQ to get access. The main idea of FQ is to find a proxy server that can reach the foreign website: we send our request to the proxy server, the proxy server visits the foreign website, and then passes the retrieved data back to us!

The proxy mode described above is called a forward proxy. The biggest characteristic of a forward proxy is that the client knows exactly which server it wants to reach; the server only knows which proxy server the request came from, not which specific client. The forward proxy mode shields or hides the real client’s information. Let’s look at a schematic diagram (I have drawn the client and the forward proxy in the same box because they belong to the same environment; I will come back to this later):

[Figure: forward proxy schematic diagram]

The client must explicitly configure the forward proxy server; the prerequisite, of course, is knowing the forward proxy server’s IP address and the port the proxy program listens on, as shown in the figure.

[Figure: client configured with the forward proxy’s IP address and port]

In summary: a forward proxy “is a proxy for the client; it makes requests on behalf of the client.” It is a server that sits between the client and the origin server. To obtain content from the origin server, the client sends a request to the proxy and names the target (the origin server); the proxy then forwards the request to the origin server and returns the content it obtains to the client. The client must apply some special settings to use a forward proxy.

The purposes of a forward proxy:
(1) Accessing resources that were previously unreachable, such as Google
(2) Caching, to speed up access to resources
(3) Authorizing client access and authenticating users going online
(4) Recording user access (online behavior management) and hiding user information from the outside
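
As a rough illustration (this sketch is mine, not taken from any official guide), Nginx itself can act as a very simple forward proxy for plain HTTP traffic; proxying HTTPS would require a third-party CONNECT module. The listening port and DNS resolver below are assumed placeholder values:

    # Minimal forward-proxy sketch, plain HTTP only (placeholder values); goes inside the http { } block
    server {
        listen 8080;                              # port that clients point their proxy setting at
        resolver 8.8.8.8;                         # a DNS resolver so Nginx can look up arbitrary hosts

        location / {
            # forward the request to whatever host the client originally asked for
            proxy_pass http://$http_host$request_uri;
        }
    }

A client would then be configured with this machine’s IP address and port 8080 as its HTTP proxy, which is exactly the “IP address and port” setup described above.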

Reverse proxy

Now that we understand what a forward proxy is, let’s look at how a reverse proxy is handled. Take a certain well-known e-commerce site of our great nation (think Taobao): the number of visitors connecting to the site at the same time every day has exploded, and a single server is nowhere near enough to satisfy people’s growing desire to shop. This is where a familiar term comes in: distributed deployment, that is, deploying multiple servers to solve the problem of limited capacity. Most of the functionality of such a site is delivered directly through Nginx acting as a reverse proxy; Nginx wrapped together with other components even carries a grander name, Tengine. Interested readers can visit Tengine’s official website for details: http://tengine.taobao.org/. So how does a reverse proxy implement distributed cluster operation? Let’s first look at a schematic diagram (I have drawn the servers and the reverse proxy together because they belong to the same environment; I will come back to this later):

[Figure: reverse proxy schematic diagram]

The diagram above makes it clear: after the Nginx server receives requests sent by multiple clients, it distributes them to the back-end business servers for processing according to certain rules. Here the source of the request, the client, is known, but it is not clear which server actually handles the request. Nginx plays the role of a reverse proxy.

The client is unaware of the proxy’s existence; the reverse proxy is transparent to the outside, and visitors do not know they are talking to a proxy, because the client needs no configuration at all to access it.

A reverse proxy “is a proxy for the server; it receives requests on behalf of the server.” It is mainly used when a cluster of servers is deployed in a distributed fashion, and it hides the servers’ information.

The roles of a reverse proxy:
(1) Protecting the internal network: the reverse proxy is usually the public-facing address, while the web servers sit on the internal network
(2) Load balancing: optimizing the site’s load through the reverse proxy server
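
To make this concrete, here is a minimal sketch of what a reverse-proxy server block might look like; it is not from the original article, and the domain name and back-end address are hypothetical placeholders:

    # Minimal reverse-proxy sketch (hypothetical names and addresses)
    server {
        listen 80;
        server_name www.example.com;                  # hypothetical public address of the site

        location / {
            proxy_pass http://192.168.1.10:8080;      # hypothetical internal business server
            proxy_set_header Host $host;              # pass the original Host header to the back end
            proxy_set_header X-Real-IP $remote_addr;  # let the back end see the real client IP
        }
    }

Visitors only ever request www.example.com; the internal server’s address never leaves the LAN, which is exactly the “hiding the server” behavior described above.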

Project scenario

Usually, when we operate a real project, the forward proxy and the reverse proxy are likely to appear in the same application scenario: the forward proxy proxies the clients’ requests to the target server, that target server is itself a reverse proxy server, and the reverse proxy in turn proxies multiple real business servers. The specific topology is as follows:

[Figure: topology combining a forward proxy and a reverse proxy]

The difference between the two

I have clipped a picture to illustrate the difference between the forward proxy and the reverse proxy, as shown in the figure.

[Figure: forward proxy vs. reverse proxy]

Illustration:

In a forward proxy, the Proxy and the Client belong to the same LAN (the box in the figure), and the client’s information is hidden;

In a reverse proxy, the Proxy and the Server belong to the same LAN (the box in the figure), and the server’s information is hidden;

In fact, what the Proxy does in both cases is send and receive requests and responses on behalf of one side; structurally the two are simply mirror images of each other, which is why the pattern that appeared later is called a reverse proxy.

Load Balancing

Now that we have clarified the concept of a proxy server, the next question is: when Nginx plays the role of a reverse proxy server, by what rules does it distribute requests? And can those distribution rules be controlled for different project scenarios?

The number of requests sent by clients and received by the Nginx reverse proxy server is what we call the load.

The rules by which those requests are distributed to different servers for processing are the balancing rules.

So ~ the process of distributing the requests the server receives according to certain rules is called load balancing.

In real projects, load balancing comes in hardware and software varieties. Hardware load balancing, also called hard load (for example F5 load balancers), is relatively expensive, but data stability, security and so on are very well guaranteed; companies such as China Mobile and China Unicom choose hard load. For cost reasons, more companies choose software load balancing, which is a message queue distribution mechanism implemented with existing technology in combination with the host’s hardware.

[Figure: load balancing diagram]

The load-balancing scheduling algorithms supported by Nginx are as follows:

  1. Weighted round robin (the default, commonly used): incoming requests are distributed to the different back-end servers according to their weights; even if a back-end server goes down during use, Nginx automatically removes it from the queue and request handling is not affected in any way. In this way a weight value (weight) can be set for each back-end server to adjust the rate at which requests are allocated to it; the larger the weight, the higher the probability of being assigned a request. The weight is mainly tuned to the differing hardware configurations of the back-end servers in the real working environment (see the configuration sketch after this list).
  2. ip_hash (commonly used): each request is matched according to the hash of the initiating client’s IP. Under this algorithm, a client with a fixed IP address always reaches the same back-end server, which to some extent also solves the session-sharing problem in a clustered deployment.
  3. fair: a smart scheduling algorithm that dynamically allocates requests based on each back-end server’s time from receiving a request to responding. Servers with short response times and high efficiency have a higher probability of receiving requests, while slow, inefficient servers receive fewer; it combines the advantages of the previous two. Note, however, that Nginx does not support the fair algorithm by default; to use it, install the upstream_fair module.
  4. url_hash: requests are allocated according to the hash of the visited URL, so each requested URL always points to a fixed back-end server, which improves cache efficiency when Nginx is used as a static server. Note again that Nginx does not support this scheduling algorithm by default; to use it, you need to install the Nginx hash package.
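
To make the first two algorithms concrete, here is a minimal upstream sketch; it is not from the article, and the pool name, back-end addresses and weights are placeholder values:

    # Weighted round robin (the default): weight=3 receives roughly three times as many requests as weight=1
    upstream backend_pool {                      # hypothetical pool name
        server 192.168.1.11:8080 weight=3;       # hypothetical back-end servers
        server 192.168.1.12:8080 weight=1;
        # ip_hash;                               # uncomment to pin each client IP to one back end instead
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend_pool;      # requests are distributed across the pool
        }
    }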

Comparison of several commonly used web servers

Comparison item | Apache | Nginx | Lighttpd
Proxy | Very good | Very good | General
Rewriter | Good | Very good | General
Fcgi | Not good | Good | Very good
Hot deployment | Not supported | Supported | Not supported
System pressure | Very large | Small | Relatively small
Stability | Good | Very good | Not good
Security | Good | General | General
Static file processing | General | Very good | Good
Reverse proxy | General | Very good | General

That’s all for now. When reprinting, please indicate the source.

