What Nginx Can and Cannot Stop

Nginx operates at Layer 7 (HTTP/HTTPS) and Layer 4 (TCP connection level). Its DDoS mitigation capabilities are genuine and powerful for application-layer attacks. But it has hard limits that matter for your architecture decisions.

Nginx CAN stop:

  • HTTP request floods (too many requests per second from one IP)
  • Slowloris attacks (too many slow connections held open)
  • HTTP floods from distributed source IPs (partially, via per-IP rate zones, global connection limits, and aggressive timeouts)
  • Specific bad user agents, request patterns, or headers
  • Connection flooding from single or small groups of IPs

Nginx CANNOT stop:

  • UDP floods (Nginx does not handle UDP)
  • SYN floods at the TCP level (handled by the kernel, not Nginx)
  • Attacks that saturate your network uplink before packets reach Nginx
  • Very large volumetric floods that exhaust your server's CPU or memory handling the connection overhead

For UDP floods and volumetric attacks, you need iptables/nftables rules and upstream scrubbing. For HTTP floods and connection abuse, Nginx's built-in modules are the right tool.
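As a sketch of that kernel-level side, the rules below cover the two attack classes Nginx never sees. The thresholds and the assumption of a pure-HTTP host (no UDP services) are illustrative; tune both to your environment before deploying.

```shell
# Drop unsolicited inbound UDP. Conntrack marks replies to your own outbound
# DNS/NTP queries as ESTABLISHED, so only NEW inbound UDP is dropped here.
iptables -A INPUT -p udp -m conntrack --ctstate NEW -j DROP

# Cap new TCP connections per source IP to blunt single-source SYN floods
# (50/second is an assumed threshold, not a recommendation):
iptables -A INPUT -p tcp --syn -m hashlimit \
    --hashlimit-name synflood --hashlimit-mode srcip \
    --hashlimit-above 50/second -j DROP
```

These rules complement, rather than replace, SYN cookies and the Nginx-level limits covered below.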

Rate Limiting with ngx_http_limit_req_module

The limit_req module limits the request rate per IP address using a leaky bucket algorithm. This is the primary tool for HTTP flood mitigation.

# In the http {} block, define a rate limiting zone
# $binary_remote_addr is more memory-efficient than $remote_addr
# Zone "api_limit" stores state for unique IPs, 10m = 10MB (~160K IPs)
# Rate: 30 requests per second per IP
http {
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=30r/s;
    limit_req_zone $binary_remote_addr zone=login_limit:10m rate=5r/m;

    server {
        location /api/ {
            # Apply rate limit
            # burst=50: allow up to 50 queued requests above the rate
            # nodelay: process bursts immediately rather than delaying
            limit_req zone=api_limit burst=50 nodelay;
            limit_req_status 429;

            # Pass to upstream after rate check
            proxy_pass http://backend;
        }

        location /auth/login {
            # Stricter limit for login endpoint (5/minute)
            limit_req zone=login_limit burst=3 nodelay;
            limit_req_status 429;
            proxy_pass http://auth_backend;
        }
    }
}

nodelay is important for DDoS scenarios. Without it, Nginx delays requests that exceed the rate but fit within the burst, pacing them out at the configured rate; those queued requests consume memory and worker attention, which can itself become a denial-of-service vector. With nodelay, burst requests are processed immediately, and anything beyond the burst is rejected at once with a 429.
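Since nginx 1.15.7 there is also a middle ground: two-stage rate limiting with the delay= parameter, which serves an initial slice of the burst immediately, paces the rest, and rejects everything beyond the burst. A sketch, reusing the api_limit zone defined above:

```nginx
# Hypothetical tuning: the first 25 excess requests are served immediately,
# requests 26-50 are delayed to match the configured rate, and anything
# beyond the 50-request burst is rejected with a 429.
location /api/ {
    limit_req zone=api_limit burst=50 delay=25;
    limit_req_status 429;
    proxy_pass http://backend;
}
```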

Rate limiting with whitelisting (no rate limit for trusted IPs)

http {
    geo $limit {
        default 1;            # Apply rate limiting by default
        10.0.0.0/8 0;         # Internal network: no limit
        192.168.1.100 0;      # Specific trusted IP: no limit
    }

    map $limit $limit_key {
        0 "";                             # Empty key = no rate limiting
        1 $binary_remote_addr;            # Use IP as key for rate limiting
    }

    limit_req_zone $limit_key zone=protected:10m rate=20r/s;

    server {
        location / {
            limit_req zone=protected burst=30 nodelay;
        }
    }
}

Connection Limiting with ngx_http_limit_conn_module

Connection limits restrict how many simultaneous connections a single IP can hold open. This directly counters slowloris attacks, which work by holding many connections open simultaneously with incomplete HTTP requests.

http {
    # Track connections per IP
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    # Track connections per server (total concurrent connections)
    limit_conn_zone $server_name zone=perserver:10m;

    server {
        # Max 20 concurrent connections per IP
        limit_conn perip 20;

        # Max 2000 total concurrent connections to this server
        limit_conn perserver 2000;

        # Return 503 when limit is exceeded
        limit_conn_status 503;

        # Timeout settings that close slow/hanging connections
        # Critical for slowloris mitigation
        client_body_timeout 10s;
        client_header_timeout 10s;
        keepalive_timeout 15s;
        send_timeout 10s;
    }
}

Slowloris-specific mitigation

server {
    # Reduce how long Nginx waits for a complete request header
    # Default is 60s - this gives attackers 60s to hold a connection
    client_header_timeout 5s;

    # How long to wait for a complete request body
    client_body_timeout 5s;

    # Set a maximum number of keepalive requests per connection
    keepalive_requests 100;

    # Reduce keepalive timeout significantly (default 75s)
    keepalive_timeout 5s;

    # Limit request size to prevent large-body slowloris variants
    client_max_body_size 10m;
    client_body_buffer_size 128k;
}
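To see whether connection limits are needed, or working, count established connections per remote IP. The pipeline below runs against a canned `ss -tn` sample so it can be shown standalone; on a live server, pipe `ss -tn state established` (skipping its header line) into the same awk.

```shell
# Canned sample of `ss -tn` output (header omitted) so the pipeline runs standalone.
ss_sample='ESTAB 0 0 10.0.0.5:443 203.0.113.7:51000
ESTAB 0 0 10.0.0.5:443 203.0.113.7:51001
ESTAB 0 0 10.0.0.5:443 198.51.100.9:40000'

# Column 5 is the peer address; strip the port, then count connections per IP.
printf '%s\n' "$ss_sample" \
    | awk '{ sub(/:[0-9]+$/, "", $5); print $5 }' \
    | sort | uniq -c | sort -rn
```

An IP holding dozens of concurrent connections with little request traffic is the classic slowloris signature.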

Blocking by Geographic Region or IP Range

# Using the GeoIP2 module (requires libmaxminddb and ngx_http_geoip2_module)
# Load in main context:
# load_module modules/ngx_http_geoip2_module.so;

http {
    geoip2 /usr/share/GeoIP/GeoLite2-Country.mmdb {
        auto_reload 5m;
        $geoip2_country_code country iso_code;
    }

    # Block specific countries
    map $geoip2_country_code $block_country {
        default  0;
        CN       1;
        RU       1;
        KP       1;
    }

    server {
        if ($block_country) {
            return 444;  # 444 = close connection without response
        }
    }
}

# Simpler approach: deny specific IP ranges in nginx
server {
    # Block known malicious networks or attack source ranges
    deny 203.0.113.0/24;
    deny 198.51.100.0/24;
    allow all;
}
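During an incident it helps to keep deny rules in a separate file, so new ranges can be appended and reloaded without editing the main config. A sketch, with /etc/nginx/blocklist.conf as an assumed path:

```nginx
# /etc/nginx/blocklist.conf (assumed path) holds one deny per line, e.g.:
#   deny 203.0.113.0/24;
# Append entries during an attack, then apply with `nginx -s reload`.
server {
    include /etc/nginx/blocklist.conf;
    allow all;
}
```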

User Agent and Header Filtering

server {
    # Block requests with empty User-Agent (common in simple flood tools)
    if ($http_user_agent = "") {
        return 444;
    }

    # Block known bad user agents used by flood tools
    if ($http_user_agent ~* "masscan|nmap|zgrab|dirbuster|nikto|sqlmap") {
        return 444;
    }

    # Block requests without Host header (invalid HTTP/1.1)
    if ($http_host = "") {
        return 444;
    }

    # Block requests without Accept header (common in basic HTTP flood tools)
    if ($http_accept = "") {
        return 400;
    }
}
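The User-Agent checks above can be collapsed into a single map, which keeps the blocklist in one place and avoids stacking if blocks:

```nginx
# One map replaces the two User-Agent `if` checks; the Host and Accept
# checks stay separate since they test different variables.
map $http_user_agent $deny_request {
    default                                        0;
    ""                                             1;  # empty User-Agent
    ~*(masscan|nmap|zgrab|dirbuster|nikto|sqlmap)  1;  # known scan/flood tools
}

server {
    if ($deny_request) {
        return 444;
    }
}
```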

Real-Time Monitoring During an Attack

During an active HTTP flood, you need to see what is happening in real time. These commands give you immediate visibility:

# Top source IPs in recent traffic (note: `tail -f | sort` would block
# forever waiting for EOF, so sample a fixed window instead)
tail -n 5000 /var/log/nginx/access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head -20

# Top IPs over the last 2000 requests, refreshed every 2 seconds
watch -n 2 "tail -n 2000 /var/log/nginx/access.log | awk '{print \$1}' | sort | uniq -c | sort -rn | head -20"

# Check current 4xx/5xx error rate (indicator of rate limiting working)
tail -n 10000 /var/log/nginx/access.log | awk '{print $9}' | sort | uniq -c | sort -rn

# Check nginx connection status (requires stub_status module)
curl -s http://127.0.0.1/nginx_status

# Block the top attacking IP immediately (substitute the IP from the output above)
iptables -I INPUT -s <attacker-ip> -j DROP
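The nginx_status endpoint queried above is not enabled by default. A minimal sketch, restricted to localhost (the path and listen address are assumptions):

```nginx
# Expose basic connection counters at http://127.0.0.1/nginx_status
# (requires nginx built with --with-http_stub_status_module).
server {
    listen 127.0.0.1:80;

    location /nginx_status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}
```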

The Two-Layer Approach: Nginx + Network Detection

Nginx's rate limiting handles Layer 7 HTTP floods. Network-level DDoS attacks (SYN floods, UDP floods, ICMP floods, volumetric attacks) operate below the HTTP layer and are completely invisible to Nginx. A sophisticated attacker may combine both: a small UDP flood to stress your network stack while an HTTP flood exhausts your application tier.

Flowtriq monitors your server's network interface at the packet level, detecting network-layer attacks independently of Nginx. When Flowtriq detects a UDP flood, it applies iptables rules automatically within 1-2 seconds, before the flood can impact Nginx's ability to process HTTP connections. Nginx's rate limiting then handles the HTTP layer independently.

The combined configuration:

  • Nginx handles: HTTP request rates, connection limits, slowloris, bad user agents, geographic blocking, suspicious headers
  • Flowtriq handles: UDP floods, SYN floods, ICMP floods, volumetric attacks, multi-vector attacks, automatic iptables rule injection, BGP webhook triggers for upstream mitigation
  • sysctl handles: SYN cookie protection, kernel-level TCP hardening, buffer sizing

Each layer defends against attacks the other layers cannot see. An HTTP flood that generates valid TCP connections and valid HTTP headers bypasses iptables rules and sysctl settings but hits Nginx's rate limiting. A UDP flood that never generates an HTTP connection is invisible to Nginx but caught by Flowtriq within 1-2 seconds.

Frequently Asked Questions

What is the best rate limit for Nginx to stop DDoS?

There is no universal answer: the right rate limit depends on your application's normal traffic patterns. A login endpoint that handles 10 genuine requests per minute from any given user should be limited to 20-30 requests per minute with a small burst. An API endpoint that legitimate users hit 100 times per second might need a limit of 200-300r/s to accommodate bursts without blocking legitimate traffic. The key principle: measure your normal traffic first, set limits at 2-3x the 99th percentile of legitimate traffic, and set limit_req_status 429 so you can watch your logs for 429 responses and see how often limits trigger before an attack occurs. Limits set too aggressively will block legitimate users; limits set too loosely will not stop determined attackers.
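To ground those numbers, per-IP-per-second request counts can be pulled straight from the access log. Below, a canned three-line log fragment stands in for /var/log/nginx/access.log so the pipeline runs standalone:

```shell
# Canned combined-format fragment (only the IP and timestamp fields matter);
# point the same pipeline at /var/log/nginx/access.log in practice.
log_sample='203.0.113.7 - - [12/Jan/2025:10:00:01 +0000] "GET /api/ HTTP/1.1" 200 512
203.0.113.7 - - [12/Jan/2025:10:00:01 +0000] "GET /api/ HTTP/1.1" 200 512
198.51.100.9 - - [12/Jan/2025:10:00:01 +0000] "GET / HTTP/1.1" 200 1024'

# Count requests per (IP, second); the top line approximates your per-IP peak,
# which the configured rate limit should comfortably exceed.
printf '%s\n' "$log_sample" \
    | awk '{print $1, $4}' | sort | uniq -c | sort -rn | head -5
```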

Does Nginx stop slowloris attacks?

Yes, when properly configured. Slowloris works by holding many TCP connections open with incomplete HTTP request headers, exhausting Nginx's connection capacity. The key settings are: client_header_timeout (how long to wait for a complete header, default 60s - reduce to 5-10s), limit_conn (maximum connections per IP - set to 20-50 for most applications), and keepalive_timeout (how long to maintain idle keepalive connections - reduce from the default 75s to 5-15s). With these settings, a slowloris attack's connections will time out before they can accumulate enough to exhaust capacity.

Should I use return 444 or return 429 for rate-limited requests?

Use return 429 (Too Many Requests) for standard rate limiting: it is the correct HTTP status code and tells clients to back off. Use return 444 (connection close without response) for clearly malicious clients: empty User-Agents, known attack tools, explicitly blocked IPs, or requests that should never come from a legitimate browser. The 444 response sends no HTTP response body and closes the TCP connection immediately, which is slightly more efficient and gives malicious clients less information about your server. For flood scenarios where you want to minimize server work, 444 is preferred. For API rate limiting where you want clients to retry with backoff, 429 with a Retry-After header is appropriate.
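One way to attach a Retry-After header to rate-limited responses is to route the 429 through a named location. A sketch: the 30-second value is an arbitrary assumption, and the pattern relies on recursive_error_pages being off (the default):

```nginx
server {
    location /api/ {
        limit_req zone=api_limit burst=50 nodelay;
        limit_req_status 429;
        error_page 429 = @ratelimited;
        proxy_pass http://backend;
    }

    location @ratelimited {
        add_header Retry-After 30 always;
        return 429;
    }
}
```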

Can Nginx stop a DDoS attack by itself?

For HTTP-layer attacks with manageable request rates, yes. Nginx's rate limiting, connection limits, and timeout controls effectively mitigate slowloris attacks, HTTP floods, and application-layer abuse. For network-layer attacks (SYN floods, UDP floods, volumetric attacks) or HTTP floods that generate millions of connections per second, Nginx cannot handle the volume alone: it will exhaust its worker processes or run out of file descriptors before rate limiting can help. Nginx is the right L7 tool but needs to be paired with kernel-level protections (sysctl, iptables/nftables) and per-server detection for full coverage.

Network-layer detection for what Nginx cannot see

Flowtriq monitors your server at the packet level, catching UDP floods, SYN floods, and volumetric attacks before they impact your Nginx processes. $9.99/node/month with a 7-day free trial.
