Nginx is a popular web server and reverse proxy. One of its built-in features is request rate limiting, which is configured in Nginx’s configuration files rather than with command-line flags.


Rate limiting controls how many requests users can make to your site. This is usually put in place to stop abusive bots, limit login attempts, and control API usage, which can prevent your server from slowing down under load.

Rate limiting can’t always save you from massive traffic spikes, so if your server really needs protection it’s good practice to set up a full-site CDN in front, or at least set up HAProxy load balancing to split the load across multiple servers.

How to Enable Rate Limiting in Nginx

First, we must define a rate limiting “zone.” You can have multiple zones set up and assign different location blocks to each one. For now, let’s create a basic zone by adding a single line to your http context block.
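A minimal example, using the zone name, memory size, and rate discussed in the rest of this section:

    # zone "foo": track clients by IP, 10 MB of storage, 5 requests per second
    limit_req_zone $binary_remote_addr zone=foo:10m rate=5r/s;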

The limit_req_zone directive defines a zone using $binary_remote_addr as the identifier. This is the client’s IP address, but you could also use something like $server_name to limit it per server.

The zone parameter names the zone (in this case, “foo”) and allocates a memory block for it. Nginx needs to store IP addresses to check against, so each zone needs memory. Here, 10m allocates 10 megabytes, enough to track roughly 160,000 IP addresses (far more unique clients than a single server is likely to see at once).

The final parameter is the rate, which defines how many requests each client is allowed. Here it is set to 5 requests per second, though you can set it slower by formatting it as 30r/m (for 30 requests per minute).

Once the zone is configured, it’s time to make use of it.
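For example, you could apply the zone to a location block inside your server configuration. The path here is just a placeholder; the burst value of 10 matches the behavior described below:

    location / {
        # apply zone "foo", allow bursts of up to 10 requests, no queue delay
        limit_req zone=foo burst=10 nodelay;
    }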

The limit_req directive does the heavy lifting, and assigns a location block to a limiting zone. The burst parameter gives the client some wiggle room, and allows them to make extra requests so long as they don’t exceed the rate on average.

Under the hood, burst requests are added to a “queue” that drains at your configured rate (one slot every 200 ms at 5 requests per second). Waiting in this queue can make your site appear slow, so the nodelay parameter removes the delay. With the current config, if you made 10 requests all at once, the nodelay parameter would allow all 10, then rate limit the following requests at 5 requests per second. If you made 6 more requests over the next second, 5 would be allowed as slots free up, and the 6th would be rejected as over the limit. Once the client stops making requests, the queue drains at a speed that depends on your rate.

Two-Stage Rate Limiting

By manually setting the delay parameter, it’s possible to let a few requests through with no delay while the rest wait in the queue. This forms a two-stage rate limit: the initial requests are very fast, follow-up requests are slowed a bit, and only then does the hard rate limit kick in.

This is done by assigning a delay value to the limit_req directive.
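For instance, keeping the zone and burst from before (note that the delay parameter requires nginx 1.15.7 or newer):

    location / {
        # first 5 excess requests pass immediately, the rest are delayed up to the burst
        limit_req zone=foo burst=10 delay=5;
    }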

Here, the first 5 requests come through instantly. The next requests are delayed so they conform to the rate (one every 200 ms at 5 requests per second) until the burst of 10 fills up, after which further requests are rejected.

Rate Limiting Bandwidth

Limiting requests will block most abusive traffic, but you might also want to limit download speed so that users don’t slow your server down by downloading a lot of large files.

You can do this with the limit_rate directive, which doesn’t need a limiting zone configured for it.
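A sketch of what this might look like, assuming downloads are served from a /download/ location (the path is just an example):

    location /download/ {
        # start throttling after the first megabyte, then cap at 100 KB per second
        limit_rate_after 1m;
        limit_rate 100k;
    }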

This sets the maximum download speed to 100 KB per second (limit_rate is measured in bytes, not bits) after the first megabyte has been downloaded. However, this is measured per connection, and users can open multiple connections. To solve this, you’ll need to add a connection limiting zone next to the request limiting zone.
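Placed in the http block alongside the limit_req_zone line, and using the name and size described next, the definition would look like this:

    # zone "bar": track connections per client IP, 10 MB of storage
    limit_conn_zone $binary_remote_addr zone=bar:10m;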

This makes a 10 megabyte zone called “bar” that tracks based on IP address. You can use this alongside a limit_conn directive to enable connection limiting.

Because most browsers open up multiple connections when doing normal browsing, we’ll want to set the global connection limit higher, then set the limit to 1 connection for downloading.
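Putting it together, a sketch might look like the following; the global limit of 16 connections is just an example value, and the /download/ path is again a placeholder:

    # allow normal browsing to use multiple connections
    limit_conn bar 16;

    location /download/ {
        # downloads get a single connection, throttled after the first megabyte
        limit_conn bar 1;
        limit_rate_after 1m;
        limit_rate 100k;
    }

Because a limit_conn directive inside a location replaces the value inherited from the server level, downloads are capped at one connection each while the rest of the site still allows the higher global limit.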