Optimization reference for Nginx in high concurrency scenarios


Under high concurrency, Nginx can run into performance bottlenecks. Below are some optimization settings for reference, focused mainly on tuning the nginx configuration itself.

These values are only a starting point and need continual adjustment in practice.

Nginx.conf configuration


The number of nginx worker processes. It is recommended to set this according to the number of CPUs, generally equal to the number of CPU cores or a multiple of it.
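A sketch of the corresponding directive, assuming an 8-core server:

```nginx
# one worker per CPU core (value assumes 8 cores)
worker_processes 8;
```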


Bind each worker process to a CPU. For example, eight worker processes can be pinned to eight CPUs one-to-one; you can also bind one worker to multiple CPUs.
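For example, pinning 8 workers to 8 CPUs one-to-one (the bitmasks are illustrative):

```nginx
# each mask binds one worker process to one CPU core
worker_cpu_affinity 00000001 00000010 00000100 00001000
                    00010000 00100000 01000000 10000000;
```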


The following directive sets the maximum number of file descriptors an nginx process may open. The theoretical value is the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep the value consistent with ulimit -n.
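For instance, if `ulimit -n` reports 65535:

```nginx
# keep in line with the system's open-file limit (ulimit -n)
worker_rlimit_nofile 65535;
```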


Use the epoll I/O event model to handle asynchronous events efficiently.


Set the maximum number of connections allowed per worker process. In theory, the maximum number of connections an nginx server can handle is worker_processes * worker_connections.
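Both event settings live in the events block; the connection count here is an illustrative value:

```nginx
events {
    use epoll;                 # efficient event model on Linux
    worker_connections 65535;  # max connections per worker process
}
```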


Adjust the HTTP connection timeout as needed; the default is 60s. Keep-alive lets a client-to-server connection persist for a set period, so that subsequent requests to the server can reuse it rather than establishing a new connection each time. Remember not to set this parameter too large, or many idle HTTP connections will occupy nginx's connection slots and may eventually overwhelm nginx.
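For example, keeping the default:

```nginx
# keep-alive timeout in seconds; avoid overly large values
keepalive_timeout 60;
```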


The buffer size for the client request header. This can be set according to your system's page size. A request header is generally under 1k, but since the system page size is usually at least 1k, it is set to the page size. The page size can be obtained with the command getconf PAGESIZE.
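Assuming a 4k page size (as reported by `getconf PAGESIZE`):

```nginx
client_header_buffer_size 4k;
```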


The following parameter enables a cache for open file descriptors; it is disabled by default. max sets the number of cache entries, and it is recommended to match the number of open files. inactive sets how long after a file was last requested its cache entry is removed.
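A sketch with illustrative values:

```nginx
# cache up to 65535 descriptors; drop entries unused for 60s
open_file_cache max=65535 inactive=60s;
```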


The following sets how often the cached entries are checked for validity.
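For example (the interval is an illustrative value):

```nginx
# revalidate cached descriptors every 80s
open_file_cache_valid 80s;
```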


The open_file_cache_min_uses directive specifies the minimum number of times a file must be used within the inactive period of the open_file_cache directive; if this threshold is met, the file descriptor stays open in the cache. Conversely, if a file is not used that many times within the inactive period, it is removed from the cache.
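For example:

```nginx
# a file must be used at least once within the inactive period
open_file_cache_min_uses 1;
```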


Hiding the operating system and web server (Nginx) version information in the response header improves the security of the web server.
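This is controlled by:

```nginx
server_tokens off;
```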


You can enable sendfile(). sendfile() copies data between disk and a TCP socket (or any two file descriptors) inside the kernel. Before sendfile(), transferring data meant allocating a buffer in user space, using read() to copy the data from the file into that buffer, and then write() to send the buffer to the network. sendfile() instead reads the data from disk straight into the OS cache; because the copy happens in the kernel, sendfile() is more efficient than combining read() and write() and managing the intermediate buffer.
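Enabled with:

```nginx
sendfile on;
```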


Tells nginx to send all header data in one packet rather than one piece after another. That is, packets are not transmitted immediately; they are sent once a packet is full, which helps reduce network congestion.
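Enabled with (tcp_nopush only takes effect together with sendfile):

```nginx
tcp_nopush on;
```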


Tells nginx not to buffer data but to send it out immediately. Set this when the application needs data delivered promptly; otherwise, a small piece of data may be held back and the client cannot get the response right away.
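Enabled with:

```nginx
tcp_nodelay on;
```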

For example: client_header_buffer_size 4k;

The buffer size for the client request header can be set according to the system page size. A request header is generally under 1k, but since the system page size is usually at least 1k, it is set to the page size.

The page size can be obtained with the command getconf PAGESIZE.

There may be cases where client_header_buffer_size needs to exceed 4k, but the value must always be an integer multiple of the system page size.



A complete nginx.conf example for reference:
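The original example was not preserved here; the following sketch assembles the directives discussed above into one illustrative nginx.conf (user, paths, and values are placeholders to adapt):

```nginx
user  www www;
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000
                    00010000 00100000 01000000 10000000;
worker_rlimit_nofile 65535;
error_log  /var/log/nginx/error.log crit;
pid        /var/run/nginx.pid;

events {
    use epoll;
    worker_connections 65535;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    server_tokens off;
    sendfile      on;
    tcp_nopush    on;
    tcp_nodelay   on;

    keepalive_timeout         60;
    client_header_buffer_size 4k;

    open_file_cache          max=65535 inactive=60s;
    open_file_cache_valid    80s;
    open_file_cache_min_uses 1;

    server {
        listen      80;
        server_name localhost;

        location / {
            root  html;
            index index.html index.htm;
        }
    }
}
```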

FastCGI directives


This directive specifies a path for the FastCGI cache, the directory hierarchy levels, the name and size of the key zone, and the inactive deletion time.
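A sketch (the path and the zone name `TEST` are placeholders):

```nginx
fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2
                   keys_zone=TEST:10m inactive=5m;
```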


Specifies the timeout for connecting to the backend FastCGI.


The timeout for transmitting a request to FastCGI; this is the time allowed for sending a request to FastCGI after the connection handshake has completed.


The timeout for receiving a FastCGI response; this is the time allowed for receiving the response after the connection handshake has completed.
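The three timeouts above might be set like this (values are illustrative, in seconds):

```nginx
fastcgi_connect_timeout 300;
fastcgi_send_timeout    300;
fastcgi_read_timeout    300;
```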


Specifies the size of the buffer used to read the first part of the FastCGI response. This can be set to the buffer size specified by the fastcgi_buffers directive; with a 16k setting, a 16k buffer is used to read the first part of the response, which is the response header. The response header is generally small (no more than 1k), but if fastcgi_buffers specifies a buffer size, a buffer of that size will be allocated here as well.
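Matching the 16k discussed above:

```nginx
fastcgi_buffer_size 16k;
```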


Specifies how many buffers of what size are used to buffer the FastCGI response. For example, if a PHP script produces a 256k page, 16 buffers of 16k each will be allocated to cache it; anything beyond 256k is written to the path specified by fastcgi_temp_path. This is not ideal for server load, since data is processed faster in memory than on disk, so the value should be chosen around the typical page size your site's PHP scripts generate. If most pages are about 256k, you could set 16 16k, 4 64k, or 64 4k buffers, but the latter two are clearly poorer choices: for a 32k page, 4 64k buffers would allocate one 64k buffer, and 64 4k buffers would allocate eight 4k buffers, whereas 16 16k buffers would allocate just two 16k buffers to cache the page, which is more reasonable.
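The setting described above:

```nginx
fastcgi_buffers 16 16k;
```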


fastcgi_busy_buffers_size: the default value is twice fastcgi_buffer_size.

fastcgi_temp_file_write_size: how large the data blocks are when writing to fastcgi_temp_path; the default value is twice fastcgi_buffers.
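A sketch using twice the 16k buffer size assumed above:

```nginx
fastcgi_busy_buffers_size    32k;
fastcgi_temp_file_write_size 32k;
```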


Enable the FastCGI cache and give it a name. Personally, I find enabling the cache very useful: it effectively reduces CPU load and helps prevent 502 errors. But because it caches dynamic pages, it can also cause many problems; whether to use it depends on your own needs.
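For example, assuming a cache zone named `TEST` was declared with fastcgi_cache_path:

```nginx
fastcgi_cache TEST;
```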


Specify the cache time for particular response codes. For example: cache 200 and 302 responses for one hour, 301 responses for one day, and everything else for one minute.
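The caching times just described translate to:

```nginx
fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301     1d;
fastcgi_cache_valid any     1m;
```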


The minimum number of times a response must be requested within the inactive period of the fastcgi_cache_path directive for it to stay cached. For example, with inactive=5m, a cached file that is not used at least once in 5 minutes will be removed.
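For example:

```nginx
fastcgi_cache_min_uses 1;
```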


Define in which cases a stale (expired) cached response may be served:

fastcgi_cache_use_stale error | timeout | invalid_header | updating | http_500 | http_503 | http_403 | http_404 | off ...;
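For example, serving stale content on backend errors and timeouts:

```nginx
fastcgi_cache_use_stale error timeout invalid_header http_500;
```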

Separate optimization of the FastCGI configuration

FastCGI itself also has some configuration that needs to be optimized. If you use php-fpm to manage FastCGI, you can modify the following values in the configuration file:

The number of concurrent requests handled simultaneously; php-fpm will spawn up to 60 child processes to handle concurrent connections.

The maximum number of open files.

The maximum number of requests that each process can execute before resetting.
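In php-fpm's configuration file (php-fpm.conf or a pool file), the three settings above correspond to the following directives; the values are illustrative:

```ini
; maximum number of child processes handling requests concurrently
pm.max_children = 60

; maximum number of open file descriptors
rlimit_files = 65535

; number of requests each child handles before being respawned
pm.max_requests = 500
```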

This article was first published by V on 2018-10-11 and may be reprinted with permission, but please be sure to credit the original link: http://www.nginxer.com/records/optimization-reference-for-nginx-in-high-concurrency-scenarios/
