Nginx performance optimization

Improve concurrent connections

Adjust the number of worker processes

Binding worker processes to specific CPU cores improves CPU affinity, reduces context switching between processes, and avoids uneven use of CPU resources.

  1. worker_processes auto; #Set the number of nginx worker processes; auto sets it to the number of CPU cores
  2. worker_cpu_affinity 0001 0010 0100 1000; #Bind nginx worker processes to CPUs; 0001, 0010, 0100, and 1000 represent CPU cores 1, 2, 3, and 4 respectively
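To decide how many worker processes (and how many affinity masks) to configure, first check the machine's core count; a quick sketch:

```shell
# Number of CPU cores available to nginx workers
nproc
# Equivalent: count processor entries in /proc/cpuinfo
grep -c ^processor /proc/cpuinfo
```

On nginx 1.9.10+, `worker_cpu_affinity auto;` can also compute the masks for you.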

Modify the file handle

Modify the maximum number of handles opened by the user

vim /etc/security/limits.conf 
* soft nofile 65535
* hard nofile 131072
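The limits.conf change takes effect for new login sessions; you can verify the current shell's file-handle limits like this:

```shell
ulimit -Sn   # soft limit on open file descriptors for this session
ulimit -Hn   # hard limit
```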

Modify the maximum number of handles opened by nginx

worker_rlimit_nofile 65535; 

Modify the event processing model

  1. use epoll; #Use the epoll event processing model
  2. worker_connections 1024; #The maximum number of connections allowed per worker process
  3. multi_accept on; # When enabled, a worker accepts all pending new connections at once instead of one at a time; the default is off. The related accept_mutex directive controls how workers are woken: when it is on, workers accept connections serially — only one worker is woken per new connection while the others stay dormant; when it is off, a new connection wakes all workers, the connection is given to one of them, and the rest go back to sleep. Serial wake-up reduces load somewhat when the server has few connections; when server throughput is large, turning the mutex off can be more efficient.
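Putting the directives above together, the process- and event-related settings sit at the top level of nginx.conf, roughly like this (values are the examples from this section, not recommendations for every workload):

```nginx
worker_processes     auto;
worker_cpu_affinity  auto;    # or explicit masks such as 0001 0010 0100 1000
worker_rlimit_nofile 65535;

events {
    use epoll;
    worker_connections 1024;
    multi_accept on;
}
```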

Enable efficient transfer mode

  1. sendfile on; #sendfile() copies data between two file descriptors inside the kernel, enabling efficient file transfer. This directive controls whether nginx calls sendfile() to output files. Set it to on for normal applications; for download services and other applications with heavy disk I/O load, set it to off to balance disk and network I/O processing speed and reduce system load.

    Note: if images do not display properly, try setting this to off.

  2. tcp_nopush on; #Only takes effect when sendfile is on. Prevents network and disk I/O congestion by sending the response header together with the beginning of the body in one packet, instead of one after another, actively reducing the number of network segments.
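These transfer-mode directives belong in the http (or server) context; a minimal sketch:

```nginx
http {
    sendfile    on;
    tcp_nopush  on;   # only takes effect when sendfile is on
    tcp_nodelay on;   # only affects keep-alive connections
}
```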

Connection timeout

The main purpose is to protect server resources (CPU, memory) and control the number of connections, because establishing a connection also consumes resources.

keepalive_timeout 60; #How long the server keeps an idle client connection open; beyond this time, the server closes the connection
tcp_nodelay on; #Also helps prevent network congestion; only effective for keep-alive connections
client_header_buffer_size 4k; #Buffer size for the client request header. This can be set according to your system's page size. A request header is normally under 1k, but since the system page size is generally at least 1k, the page size is used here. The page size can be obtained with the command getconf PAGESIZE.
open_file_cache max=102400 inactive=20s; #Enables the cache for open file descriptors; it is disabled by default. max sets the number of cached entries (recommended to match the number of open files); an entry is removed from the cache if it has not been requested within the inactive period.
open_file_cache_valid 30s; #How often to check the validity of cached entries.
open_file_cache_min_uses 1; #The minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to stay open in the cache. As in the example above, if a file is not used even once within the inactive time, it is removed.
client_header_timeout 15; #Timeout for reading the request header. This can be set lower; if no data is sent within this time, nginx returns a request time out error.
client_body_timeout 15; #Timeout for reading the request body. This can also be set lower; if no data is sent within this time, the same error as above is returned.
reset_timedout_connection on; #Tells nginx to reset timed-out client connections, releasing the memory that client occupied.
send_timeout 15; #Timeout for transmitting a response to the client, measured between two successive write operations; if the client receives nothing within this time, nginx closes the connection.
server_tokens off; #Does not make nginx run faster, but hides the nginx version number in error pages, which is good for security.
client_max_body_size 10m; #Limit on the size of the client request body (e.g. uploads)
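The page size referenced by client_header_buffer_size can be read directly:

```shell
getconf PAGESIZE   # typically 4096 on x86_64 Linux
```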

fastcgi tuning

fastcgi_connect_timeout 600; #Specify the timeout period for connecting to the backend FastCGI. 
fastcgi_send_timeout 600; #The timeout period for sending requests to FastCGI.
fastcgi_read_timeout 600; #Specify the timeout period for receiving FastCGI responses.
fastcgi_buffer_size 64k; #Buffer size used to read the first part of the FastCGI response. The default equals the size of one block in the fastcgi_buffers directive; this value can be set smaller.
fastcgi_buffers 4 64k; #How many buffers of what size are used locally to buffer the FastCGI response. If a php script generates a 256KB page, four 64KB buffers are allocated for caching; if the page is larger than 256KB, the part beyond 256KB is written to the path specified by fastcgi_temp_path — not ideal, because memory is faster than disk. Generally this value should be around the median page size produced by the site's php scripts: if most pages are about 256KB, this can be set to "8 32k", "4 64k", and so on.
fastcgi_busy_buffers_size 128k; #Buffers that may be busy sending data to the client; recommended to be twice fastcgi_buffers.
fastcgi_temp_file_write_size 128k; #Size of the data blocks used when writing to fastcgi_temp_path; the default is twice fastcgi_buffers. If this value is set too small, 502 Bad Gateway errors may appear when load rises.
fastcgi_temp_path /usr/local/nginx/nginx_tmp; #Temporary directory for the cache
fastcgi_intercept_errors on; #Whether 4xx and 5xx responses are passed to the client unchanged, or nginx handles them with error_page.
Note: if a static file does not exist, a 404 page is returned, but a missing php page returns a blank page!
fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g; #Directory for the FastCGI cache. levels sets the directory hierarchy (1:2 generates 16*256 subdirectories); cache_fastcgi is the name of the cache zone; 128m is the amount of shared memory used (so nginx can serve hot content directly from memory, improving access speed); inactive is the default expiration time — cached data not accessed within it is deleted; max_size is the maximum amount of disk space used.
fastcgi_cache cache_fastcgi; #Enables the FastCGI cache and names the zone to use. Caching is very useful: it effectively reduces CPU load and helps prevent 502 errors. cache_fastcgi is the name of the cache zone created by the fastcgi_cache_path directive.
fastcgi_cache_valid 200 302 1h; #Cache time per response code; here 200 and 302 responses are cached for one hour. Used together with fastcgi_cache.
fastcgi_cache_valid 301 1d; #Cache 301 responses for one day
fastcgi_cache_valid any 1m; #Cache other responses for 1 minute
fastcgi_cache_min_uses 1; #How many times the same URL must be requested before it is cached.
fastcgi_cache_key http://$host$request_uri; #Sets the key for the web cache; nginx stores entries hashed by the MD5 of this key. The key is generally composed of variables such as $host (domain name) and $request_uri (request path).
fastcgi_pass; #Specifies the address and port the FastCGI server listens on, which can be local or remote
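As a sketch, these directives are typically combined in a php location block; the fastcgi_pass address below is an assumption — adjust it to your php-fpm setup:

```nginx
location ~ \.php$ {
    fastcgi_pass  127.0.0.1:9000;       # or unix:/run/php-fpm.sock
    fastcgi_index index.php;
    include       fastcgi.conf;

    fastcgi_cache       cache_fastcgi;
    fastcgi_cache_valid 200 302 1h;
    fastcgi_cache_valid 301 1d;
    fastcgi_cache_valid any 1m;
    fastcgi_cache_key   http://$host$request_uri;
}
```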

gzip tuning

Using gzip compression saves bandwidth, speeds up transmission for a better user experience, and saves costs, so this is a key point.
nginx enables compression through the ngx_http_gzip_module module (Apache uses mod_deflate). Generally the content worth compressing is text, js, html, and css; do not compress images, video, flash, and the like. Also note that gzip consumes CPU.

gzip_proxied any;
gzip on; #Enable the compression function
gzip_min_length 2k; #Minimum number of bytes a response must have to be compressed; the size is taken from the Content-Length header. The default is 0, which compresses pages regardless of size. Setting it above 1K is recommended: compressing responses smaller than 1K may actually make them bigger.
gzip_buffers 4 32k; #Compression buffer size: four 32K units of memory are used for the compressed result stream. The default is to request memory equal to the original data size to store the gzip result.
gzip_http_version 1.1; #Minimum HTTP protocol version required for compression; the default is 1.1, and most browsers already support gzip decompression, so use the default.
gzip_comp_level 6; #Compression level: 1 gives the smallest compression ratio and fastest processing; 9 gives the largest ratio and faster transmission, but processing is slow and consumes more CPU.
gzip_types text/css text/xml application/javascript; #Specifies the types to compress; text/html is always compressed.
Default value: gzip_types text/html (js/css files are not compressed by default)
#Compression type, matching MIME graphics for compression
#Cannot use wildcard text/*
#(Regardless of whether you specify it or not) text/html has been compressed by default
#For setting which type of compressed text file, please refer to conf/mime.types
gzip_vary on; #Vary header support; this option lets front-end cache servers cache gzip-compressed pages, e.g. Squid caching data compressed by nginx
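Collected into one place, a gzip block following the settings above might look like this:

```nginx
gzip              on;
gzip_min_length   2k;
gzip_buffers      4 32k;
gzip_http_version 1.1;
gzip_comp_level   6;
gzip_types        text/css text/xml application/javascript;
gzip_vary         on;
gzip_proxied      any;
```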

expires cache tuning

Caching mainly targets elements with little chance of changing, such as images, css, and js. Images in particular consume a lot of bandwidth, so we can have the browser cache them locally for 365 days, while css, js, and html can be cached for 10 days. The first visit loads slowly, but subsequent visits are fast! When caching, list the extensions that need to be cached. The expires cache is configured in the server block.

location ~* \.(ico|jpg|gif|png|bmp|swf|flv)$ {
expires 30d;
# log_not_found off;
access_log off;
}
location ~* \.(js|css)$ {
expires 7d;
log_not_found off;
access_log off;
}

Note: log_not_found off; controls whether errors for files that do not exist are recorded in the error_log. The default is on.

Summary:

Expire feature advantages:

  1. Expires can reduce the bandwidth the website must purchase, saving costs
  2. It also improves the user's access experience
  3. It reduces server load and saves server cost — very important functions of a web service

Expire function shortcomings:

When a cached page or file is updated, users may still see the old content, which hurts the user experience. Solutions: first, shorten the cache time, e.g. to 1 day — this is not a complete fix unless the update interval is longer than 1 day; second, rename the changed object (cache busting).

What the website does not want to be cached:

1) Website traffic statistics tool
2) Frequently updated files (e.g. the Google logo)

Kernel Optimization

  1. fs.file-max=999999 #The maximum number of file handles the whole system can have open simultaneously. This parameter linearly limits the maximum number of concurrent connections and should be configured according to the actual situation.
  2. net.ipv4.tcp_max_tw_buckets=6000#This parameter indicates the maximum number of TIME_WAIT sockets allowed by the operating system. If this number is exceeded, the TIME_WAIT socket will be cleared immediately and a warning message will be printed. This parameter defaults to 180000. Too many TIME_WAIT sockets will slow down the Web server.

    Note: The server that actively closes the connection will generate a connection in the TIME_WAIT state

  3. net.ipv4.ip_local_port_range=1024 65000 #The range of local ports the system is allowed to open.
  4. net.ipv4.tcp_tw_recycle=1 #Enable fast recycling of TIME_WAIT sockets. (Note: this option is unsafe for clients behind NAT and was removed in Linux 4.12.)
  5. net.ipv4.tcp_tw_reuse=1 #Enable reuse. Allow TIME-WAIT sockets to be reused for new TCP connections.
  6. This makes sense for the server, because there will always be a large number of TIME-WAIT connections on the server.
  7. net.ipv4.tcp_keepalive_time=30 #How often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours; a smaller value cleans up dead connections faster.
  8. net.ipv4.tcp_syncookies =1 #Enable SYN Cookies. When the SYN waiting queue overflows, enable cookies to process.
  9. net.core.somaxconn=40960 #The backlog passed to listen() in web applications is capped by this kernel parameter, which defaults to 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so it is necessary to raise this value.
  10. Note: For a TCP connection, Server and client need to establish a network connection through a three-way handshake. When the three-way handshake is successful, we can see that the status of the port changes from LISTEN to ESTABLISHED, and then data can be transmitted on this link. Each port in the Listen state has its own listening queue. The length of the listening queue is related to the somaxconn parameter and the listen() function in the program that uses the port.
  11. The somaxconn parameter: defines the maximum listen queue length for each port in the system. This is a global parameter with a default value of 128. For a high-load web environment that frequently handles new connections, the default 128 is too small; it is recommended to increase this value to 1024 or more in most environments. A large listen queue also helps resist denial-of-service (DoS) attacks.
  12. net.core.netdev_max_backlog=262144 #The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.
  13. net.ipv4.tcp_max_syn_backlog=262144 #The maximum length of the queue of SYN requests accepted during the TCP three-way handshake. The default is 1024; setting it larger means that when nginx is busy and slow to accept new connections, Linux will not drop connection requests initiated by clients.
  14. net.ipv4.tcp_rmem=10240 87380 12582912 #Defines the minimum, default, and maximum size of the TCP receive buffer (used for the TCP receive sliding window).
  15. net.ipv4.tcp_wmem=10240 87380 12582912 #Defines the minimum, default, and maximum size of the TCP send buffer (used for the TCP send sliding window).
  16. net.core.rmem_default=6291456 #The default size of the kernel socket receive buffer.
  17. net.core.wmem_default=6291456 #This parameter indicates the default size of the kernel socket sending buffer.
  18. net.core.rmem_max=12582912 #This parameter indicates the maximum size of the kernel socket receiving buffer.
  19. net.core.wmem_max=12582912 #This parameter indicates the maximum size of the kernel socket sending buffer.
  20. net.ipv4.tcp_syncookies=1 #Unrelated to performance; used to defend against TCP SYN flood attacks.
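Collected together, these parameters go into /etc/sysctl.conf (or a file under /etc/sysctl.d/) and are applied with sysctl -p; a condensed fragment, omitting tcp_tw_recycle since it was removed in Linux 4.12:

```conf
# /etc/sysctl.d/90-nginx.conf — apply with: sysctl -p /etc/sysctl.d/90-nginx.conf
fs.file-max = 999999
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.tcp_syncookies = 1
net.core.somaxconn = 40960
net.core.netdev_max_backlog = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.core.rmem_default = 6291456
net.core.wmem_default = 6291456
net.core.rmem_max = 12582912
net.core.wmem_max = 12582912
```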
