Nginx (engine x) is a powerful, high-performance web server and reverse proxy server that has gained immense popularity due to its speed, scalability, and flexibility. It is widely used for serving static content, load balancing, and handling high traffic websites.
In this guide and cheatsheet, I will cover everything you need to know to get started with Nginx, from installing the latest version to configuration, security, and performance optimization.
At its core, Nginx uses an event-driven, asynchronous architecture that can handle thousands of concurrent connections efficiently. Here's a simplified view of how Nginx fits into a typical web architecture:
Nginx can serve static files directly, proxy requests to application servers, load balance across multiple backends, and much more.
Before we begin, make sure you have:
- A Linux server running Ubuntu 20.04+ or Debian 11+ (other distros work but commands may differ)
- Root or sudo access to the server
- A domain name (optional, but required for SSL certificate sections)
- Basic familiarity with the command line
Let's start by installing Nginx on our server. The installation process varies by operating system; I will cover Ubuntu and Debian-based systems.
For other operating systems, please refer to the official Nginx installation guide.
The version of Nginx available in the default package repositories of Ubuntu and Debian may not be the latest. To install the latest stable version, we can use the official Nginx repository.
For that, we need to install some prerequisite packages first by running the following command:
```shell
# On Ubuntu:
sudo apt install curl gnupg2 ca-certificates lsb-release ubuntu-keyring

# On Debian:
sudo apt install curl gnupg2 ca-certificates lsb-release debian-archive-keyring
```
Next, we need to import the Nginx signing key by running this command:
```shell
curl -fsSL https://nginx.org/keys/nginx_signing.key | sudo gpg --dearmor -o /usr/share/keyrings/nginx-archive-keyring.gpg
```
Now we can add the Nginx repository to our system's package sources by running this command chain:
```shell
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/$(lsb_release -is | tr '[:upper:]' '[:lower:]') $(lsb_release -cs) nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
```
For security, we should verify the fingerprint of the Nginx signing key before installing Nginx from this source. Run the following command:
```shell
gpg --dry-run --quiet --no-keyring --import --import-options import-show /usr/share/keyrings/nginx-archive-keyring.gpg
```
The output should show a fingerprint that looks like this:
```
pub   rsa2048 2011-08-19 [SC] [expires: 2027-05-24]
      573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
uid   nginx signing key <signing-key@nginx.com>
```
Match the fingerprint 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 with the one provided on the Nginx official website.
If the fingerprints match, it is safe to install Nginx from this repository.
Now, let's update our package list to include the Nginx repository:
```shell
sudo apt update
```
And finally, we can install Nginx by running:
```shell
sudo apt install nginx
```
Now we should have the latest stable version of Nginx installed on our system.
We can verify the installation by checking the Nginx version with:
```shell
nginx -v
```
The output should show the installed version of Nginx, for example:
```
nginx version: nginx/1.29.1
```
To see the full version information including the modules compiled with Nginx, we can run:
```shell
nginx -V
```
And this should give us a detailed output similar to this:
```
nginx version: nginx/1.29.1
built by gcc 13.3.0 (Ubuntu 13.3.0-6ubuntu2~24.04)
built with OpenSSL 3.0.13 30 Jan 2024
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/run/nginx.pid --lock-path=/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_v3_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -ffile-prefix-map=/home/builder/debuild/nginx-1.29.1/debian/debuild-base/nginx-1.29.1=. -flto=auto -ffat-lto-objects -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection -fdebug-prefix-map=/home/builder/debuild/nginx-1.29.1/debian/debuild-base/nginx-1.29.1=/usr/src/nginx-1.29.1-1~noble -fPIC' --with-ld-opt='-Wl,-Bsymbolic-functions -flto=auto -ffat-lto-objects -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'
```
Here, we can see that the HTTP/3 module (--with-http_v3_module) is enabled, which we will use later in this guide.
Now that we have Nginx installed, we need to start the Nginx service and enable it to start on boot.
```shell
sudo systemctl start nginx
sudo systemctl enable nginx
```
We can check the status of the Nginx service by running:
```shell
sudo systemctl status nginx
```
The output should show that the Nginx service is active and running.
Hit Q to exit the status output.
For security, we should use a firewall to restrict unnecessary access to our server.
Let's configure UFW (Uncomplicated Firewall) to allow HTTP, HTTPS, and HTTP/3 traffic to our server.
```shell
sudo ufw enable
sudo ufw allow 'Nginx Full'
```
Now for HTTP/3 (QUIC), we need to allow UDP traffic on port 443, as HTTP/3 uses UDP instead of TCP.
```shell
sudo ufw allow 443/udp
```
We can check the status of UFW to ensure the rules are applied correctly by running:
```shell
sudo ufw status
```
The output should show that the firewall is active and the rules for Nginx are applied.
```
Status: active

To                         Action      From
--                         ------      ----
Nginx Full                 ALLOW       Anywhere
Nginx Full (v6)            ALLOW       Anywhere (v6)
443/udp                    ALLOW       Anywhere
443/udp (v6)               ALLOW       Anywhere (v6)
... other rules ...
```
Great! Now our server is ready to serve web traffic securely! You can test it by opening your server's IP address in a web browser. You should see the default Nginx welcome page.
Before we dive deeper into Nginx configurations, here are some useful commands that we will frequently use with Nginx.
| Command | Description |
| --- | --- |
| sudo systemctl start nginx | Start the Nginx service |
| sudo systemctl stop nginx | Stop the Nginx service |
| sudo systemctl restart nginx | Restart the Nginx service |
| sudo systemctl reload nginx | Reload Nginx configuration without downtime |
| sudo systemctl status nginx | Check the status of the Nginx service |
| sudo nginx -t | Test Nginx configuration for syntax errors |
| sudo nginx -s reload | Reload Nginx configuration |
| sudo nginx -s stop | Stop Nginx immediately (fast shutdown) |
| sudo nginx -s quit | Stop Nginx gracefully (finish serving current requests) |
| sudo nginx -v | Display Nginx version |
| sudo nginx -V | Display Nginx version and compile options |
| sudo tail -f /var/log/nginx/access.log | View Nginx access log in real-time |
| sudo tail -f /var/log/nginx/error.log | View Nginx error log in real-time |
If you don't have systemctl, you can use the service command instead. For example: sudo service nginx start.
Here are some important Nginx related files and directories that we should be aware of:
| File/Directory | Description |
| --- | --- |
| /etc/nginx/nginx.conf | Main Nginx configuration file |
| /etc/nginx/conf.d/ | Directory for additional configuration files |
| /etc/nginx/sites-available/ | Directory for available server block files |
| /etc/nginx/sites-enabled/ | Directory for enabled server block files |
| /var/www/html/ | Default web root directory |
| /var/log/nginx/access.log | Nginx access log file |
| /var/log/nginx/error.log | Nginx error log file |
These files and directories are crucial for managing and configuring Nginx effectively.
Now that we have Nginx installed and running, and we are familiar with some useful commands and important files, let's dive into configuring Nginx.
When we install Nginx from the official repository, it comes with a very basic default configuration file located at /etc/nginx/nginx.conf .
Now we will replace it with a more optimized and secure one.
Let's first create the directories we need:
```shell
sudo mkdir -p /etc/nginx/sites-available
sudo mkdir -p /etc/nginx/sites-enabled
```
Next, we will create a backup of the original nginx.conf file and then create a new one:
```shell
sudo mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
```
```shell
sudo nano /etc/nginx/nginx.conf
```
Now, let's add the following configuration to the new nginx.conf file:
Note We use www-data as the user because it's the standard web server user on Ubuntu/Debian. If you installed Nginx from the official repository, it may default to the nginx user. You can check which user exists on your system with id www-data or id nginx and use the appropriate one.
```nginx
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
worker_rlimit_nofile 65536;  # Limit the number of open files

pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 1024;  # Increase based on available resources
    multi_accept on;
    use epoll;
    accept_mutex off;
}

http {
    # Basic Settings
    sendfile on;
    sendfile_max_chunk 1m;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 60s;
    keepalive_requests 1000;
    types_hash_max_size 2048;
    server_tokens off;

    # Server Names
    server_names_hash_bucket_size 128;
    server_names_hash_max_size 8192;

    # MIME Types
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Buffer Settings
    client_body_buffer_size 128k;
    client_max_body_size 10m;
    client_header_buffer_size 1k;
    client_header_timeout 30s;
    large_client_header_buffers 4 4k;
    output_buffers 1 32k;
    postpone_output 1460;

    # SSL/TLS Settings (Optimized for modern browsers)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;
    ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_ecdh_curve X25519:prime256v1:secp384r1;
    ssl_session_timeout 1d;
    ssl_session_cache shared:TLS:10m;  # Increase based on available resources
    ssl_session_tickets on;
    # Generate this file with: openssl rand 80 > /etc/nginx/ssl_session_ticket.key
    ssl_session_ticket_key /etc/nginx/ssl_session_ticket.key;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_early_data on;
    ssl_buffer_size 4k;

    # DNS Resolver for OCSP Stapling
    resolver 1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    # Security Headers (Global defaults)
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Content-Type-Options nosniff always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

    # Logging (Optimized with HTTP/3 support)
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" rt=$request_time '
                    'ups_rt=$upstream_response_time ups_addr=$upstream_addr '
                    '"$http_x_forwarded_for"';
    log_format quic '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http3"';
    access_log /var/log/nginx/access.log main buffer=32k flush=5s;
    error_log /var/log/nginx/error.log warn;

    # Compression (Enhanced)
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_min_length 1000;
    gzip_types text/plain text/css text/xml text/javascript application/json application/javascript application/xml+rss application/atom+xml image/svg+xml font/truetype font/opentype application/vnd.ms-fontobject application/x-font-ttf application/x-font-opentype application/x-font-truetype image/x-icon application/vnd.font-fontforge-sfd;

    # Brotli Compression (uncomment if module available)
    # brotli on;
    # brotli_comp_level 6;
    # brotli_static on;
    # brotli_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    # File Cache (Enhanced)
    open_file_cache max=2000 inactive=30s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Rate Limiting Zones
    limit_req_zone $binary_remote_addr zone=api:1m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=general:1m rate=5r/s;
    limit_req_status 429;

    # Connection Limiting
    limit_conn_zone $binary_remote_addr zone=addr:1m;
    limit_conn_status 429;

    # Map for WebSocket upgrade
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    # Cache Configurations
    proxy_cache_path /var/cache/nginx/api levels=1:2 keys_zone=api_cache:2m inactive=60m use_temp_path=off max_size=100m;
    proxy_cache_path /var/cache/nginx/static levels=1:2 keys_zone=static_cache:10m inactive=1y use_temp_path=off max_size=1g;

    # Include Configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
```
This configuration includes several optimizations and security enhancements, such as:
- Improved SSL/TLS settings for better security and performance.
- Enhanced logging formats to include more detailed request information.
- Compression settings for faster content delivery.
- File caching to reduce disk I/O and improve response times.
- Rate limiting to protect against abuse and ensure fair usage.
- Connection limiting to prevent resource exhaustion.
- WebSocket support for real-time applications.
- Customizable cache paths for different content types.
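To see how these global definitions are consumed, here is a minimal sketch of a site-level server block (hypothetical domain and paths) that references the shared rate-limit and connection-limit zones by name:

```nginx
# Hypothetical site config, e.g. /etc/nginx/sites-available/demo.example.com
server {
    listen 80;
    server_name demo.example.com;

    location / {
        # "general" and "addr" are the shared zones declared in nginx.conf above
        limit_req zone=general burst=10 nodelay;  # absorb short bursts; sustained abuse gets 429
        limit_conn addr 20;                       # at most 20 concurrent connections per client IP
        root /var/www/demo.example.com/html;
    }
}
```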
After adding the configuration, save and exit the file (in nano: press CTRL+X, then Y, then ENTER).
Now let's generate the SSL session ticket key file:
```shell
sudo openssl rand -out /etc/nginx/ssl_session_ticket.key 80
sudo chmod 600 /etc/nginx/ssl_session_ticket.key
```
This key is used for encrypting session tickets, which helps improve the performance of SSL/TLS connections.
For enhanced security, rotate the SSL session ticket key periodically (every 48-72 hours). You can set up a cron job to automate this; note that cron's hour field only accepts values 0-23, so a 48-hour interval is expressed in the day-of-month field:

```
0 0 */2 * * openssl rand -out /etc/nginx/ssl_session_ticket.key 80 && systemctl reload nginx
```
Now, let's test the Nginx configuration for any syntax errors by running:
```shell
sudo nginx -t
```
If the output shows syntax is ok and test is successful , we can proceed to reload Nginx to apply the new configuration:
```shell
sudo systemctl reload nginx
```
Now Nginx is configured with a more optimized and secure setup.
This configuration is optimized for low-powered (1 CPU, 1-2 GB RAM) VMs. If you have a more powerful system, you can increase some of the values to get the most out of your machine.
By default, Nginx does not come with the Brotli compression module enabled. If we want to use Brotli compression, we need to compile Nginx from source with the Brotli module or use a pre-built package that includes it.
To verify that gzip compression is working correctly, we can use curl with the Accept-Encoding header:
```shell
curl -H "Accept-Encoding: gzip" -I https://example.com
```
If gzip is enabled, we should see Content-Encoding: gzip in the response headers:
```
HTTP/2 200
content-type: text/html; charset=utf-8
content-encoding: gzip
...
```
We can also compare the compressed vs uncompressed size:
```shell
# Compressed response
curl -H "Accept-Encoding: gzip" -so /dev/null -w '%{size_download}\n' https://example.com

# Uncompressed response
curl -so /dev/null -w '%{size_download}\n' https://example.com
```
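If you are curious why the main config above settles on gzip_comp_level 6, you can compare compression levels locally without touching a server. This is a rough sketch using a repetitive CSS-like sample; the exact byte counts will vary by system:

```shell
# Build a repetitive ~15 KB sample and compress it at three gzip levels.
sample=$(yes "body { margin: 0; padding: 0; }" | head -500)
s1=$(printf '%s\n' "$sample" | gzip -1 | wc -c)
s6=$(printf '%s\n' "$sample" | gzip -6 | wc -c)
s9=$(printf '%s\n' "$sample" | gzip -9 | wc -c)
# Higher levels trade CPU time for (usually slightly) smaller output;
# level 6 is a common middle ground for on-the-fly compression.
echo "level1=${s1} level6=${s6} level9=${s9} bytes"
```

Beyond level 6 the size savings are typically marginal, while CPU cost per request keeps climbing.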
Now that we have Nginx installed and configured, let's explore how to use it effectively for different use cases.
One of the most common use cases for Nginx is hosting static websites. Let's create a simple server block to serve a static website.
First, let's create a directory for our website files:
```shell
sudo mkdir -p /var/www/example.com/html
sudo chown -R $USER:$USER /var/www/example.com/html
```
Now, let's create a simple index.html file to test our setup:
```shell
nano /var/www/example.com/html/index.html
```
Add some basic HTML content:
```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Welcome to Example.com</title>
</head>
<body>
    <h1>Success! Nginx is serving your website.</h1>
</body>
</html>
```
Save and exit the file.
Next, let's create a server block configuration for our website:
```shell
sudo nano /etc/nginx/sites-available/example.com
```
Add the following server block configuration:
```nginx
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    root /var/www/example.com/html;
    index index.html index.htm;

    # Logging
    access_log /var/log/nginx/example.com.access.log main;
    error_log /var/log/nginx/example.com.error.log warn;

    # Security headers (inherited from main config, but can override here)
    add_header X-Frame-Options "SAMEORIGIN" always;

    # Static file caching
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff|woff2|ttf|svg|eot)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Main location block
    location / {
        try_files $uri $uri/ =404;
    }

    # Deny access to hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }
}
```
Now, let's enable the site by creating a symbolic link to the sites-enabled directory:
```shell
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
```
Test the Nginx configuration and reload:
```shell
sudo nginx -t
sudo systemctl reload nginx
```
If everything is set up correctly, we should now be able to access our website at http://example.com (assuming DNS is configured properly).
Replace example.com with your actual domain name throughout this guide.
For production websites, we should always use HTTPS to secure our traffic. Let's Encrypt provides free SSL/TLS certificates that are easy to obtain and renew automatically.
First, let's install Certbot and the Nginx plugin:
```shell
sudo apt install certbot python3-certbot-nginx
```
Now we can obtain an SSL certificate for our domain:
```shell
sudo certbot --nginx -d example.com -d www.example.com
```
Certbot will prompt us to enter an email address for renewal notifications and agree to their terms of service. It will then automatically configure Nginx to use the new certificate.
After Certbot finishes, our server block will be automatically updated to include SSL configuration. However, let's enhance it with a more optimized configuration that includes HTTP/2 and HTTP/3 support.
Let's update our server block:
```shell
sudo nano /etc/nginx/sites-available/example.com
```
Replace the content with this enhanced configuration:
```nginx
# Redirect HTTP to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

# HTTPS server block
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    # HTTP/2 support
    http2 on;

    # HTTP/3 (QUIC) support
    listen 443 quic reuseport;
    listen [::]:443 quic reuseport;

    server_name example.com www.example.com;

    root /var/www/example.com/html;
    index index.html index.htm;

    # SSL certificates (managed by Certbot)
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;

    # HTTP/3 header
    add_header Alt-Svc 'h3=":443"; ma=86400' always;

    # Logging with HTTP/3 format
    access_log /var/log/nginx/example.com.access.log quic;
    error_log /var/log/nginx/example.com.error.log warn;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Content-Type-Options "nosniff" always;

    # Static file caching
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff|woff2|ttf|svg|eot)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Main location block
    location / {
        try_files $uri $uri/ =404;
    }

    # Deny access to hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }
}
```
The reuseport directive for QUIC should only be used once across all server blocks listening on the same port. If we have multiple server blocks, only add reuseport to the first one.
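For example, with two HTTPS sites on the same server, only the first QUIC listener carries reuseport (the domains here are placeholders):

```nginx
server {
    listen 443 ssl;
    listen 443 quic reuseport;   # the one and only reuseport for port 443
    server_name example.com;
    # ...
}

server {
    listen 443 ssl;
    listen 443 quic;             # no reuseport on subsequent blocks
    server_name app.example.com;
    # ...
}
```

Duplicating reuseport on the same port causes nginx -t to fail with a duplicate listen options error, so it is caught before reload.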
Test and reload Nginx:
```shell
sudo nginx -t
sudo systemctl reload nginx
```
Certbot automatically sets up a cron job or systemd timer to renew certificates before they expire. We can test the renewal process with:
```shell
sudo certbot renew --dry-run
```
If the dry run is successful, our certificates will be renewed automatically.
Nginx excels as a reverse proxy, forwarding client requests to backend servers. This is useful for running Node.js, Python, or other application servers behind Nginx.
Let's create a reverse proxy configuration for an application running on port 3000:
```shell
sudo nano /etc/nginx/sites-available/app.example.com
```
Add the following configuration:
```nginx
# Upstream configuration
upstream app_backend {
    server 127.0.0.1:3000;
    keepalive 32;
}

# Redirect HTTP to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name app.example.com;
    return 301 https://$host$request_uri;
}

# HTTPS server block
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;
    listen 443 quic;
    listen [::]:443 quic;

    server_name app.example.com;

    # SSL certificates
    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/app.example.com/chain.pem;

    # HTTP/3 header
    add_header Alt-Svc 'h3=":443"; ma=86400' always;

    # Logging
    access_log /var/log/nginx/app.example.com.access.log main;
    error_log /var/log/nginx/app.example.com.error.log warn;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;

    # Proxy settings
    location / {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;

        # Headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;

        # WebSocket support
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Buffering
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
        proxy_busy_buffers_size 8k;
    }

    # API rate limiting example
    location /api/ {
        limit_req zone=api burst=20 nodelay;
        limit_conn addr 10;

        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Static files served directly by Nginx
    location /static/ {
        alias /var/www/app.example.com/static/;
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }
}
```
Enable the site and reload Nginx:
```shell
sudo ln -s /etc/nginx/sites-available/app.example.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
```
This configuration includes:
- Upstream definition with keepalive connections for better performance
- Full proxy header forwarding for proper application awareness
- WebSocket support for real-time features
- Rate limiting on API endpoints
- Direct static file serving to offload the application server
Nginx can distribute traffic across multiple backend servers for high availability and improved performance.
Here's an example load balancing configuration:
```nginx
# Load balanced upstream with health checks
upstream app_cluster {
    # Load balancing methods:
    # - round_robin (default): Distributes requests evenly
    # - least_conn: Sends to server with fewest active connections
    # - ip_hash: Routes based on client IP (sticky sessions)
    # - hash: Routes based on a custom key
    least_conn;

    server 192.168.1.10:3000 weight=3;
    server 192.168.1.11:3000 weight=2;
    server 192.168.1.12:3000 weight=1 backup;

    keepalive 64;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;

    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        proxy_pass http://app_cluster;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Connection "";

        # Health check failures
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_next_upstream_timeout 10s;
        proxy_next_upstream_tries 3;
    }
}
```
The configuration above uses:
- least_conn: Routes traffic to the server with the fewest active connections
- weight: Servers with higher weights receive proportionally more traffic
- backup: Server is only used when primary servers are unavailable
- proxy_next_upstream: Automatically retries failed requests on other servers
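Open-source Nginx performs passive health checking: after repeated failures, a server is temporarily taken out of rotation. The thresholds can be tuned per server with max_fails and fail_timeout; the values below are illustrative, not prescriptions:

```nginx
upstream app_cluster_tuned {
    least_conn;
    # After 3 failed attempts within the 30s window, stop sending traffic
    # to that server for 30s, then probe it again with live requests.
    server 192.168.1.10:3000 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:3000 max_fails=3 fail_timeout=30s;
    keepalive 64;
}
```

Active health checks (periodic probe requests) are a feature of the commercial NGINX Plus; with open-source Nginx, failures are only detected when real client requests hit a bad backend.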
Nginx can cache responses from upstream servers to improve performance and reduce backend load.
Here's how to set up proxy caching:
```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;

    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    # API caching
    location /api/ {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;

        # Enable caching
        proxy_cache api_cache;
        proxy_cache_valid 200 10m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_background_update on;
        proxy_cache_lock on;

        # Add cache status header for debugging
        add_header X-Cache-Status $upstream_cache_status;

        # Don't cache requests with these headers
        proxy_cache_bypass $http_cache_control $http_pragma;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Static file caching
    location /static/ {
        proxy_pass http://app_backend;
        proxy_cache static_cache;
        proxy_cache_valid 200 1y;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```
The cache zones ( api_cache and static_cache ) were already defined in our main nginx.conf file earlier.
The X-Cache-Status header helps us debug caching behavior. Values include HIT , MISS , BYPASS , EXPIRED , and STALE .
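A common refinement, sketched here under the assumption that your application sets a session cookie named session_id (adjust the name to whatever your app actually uses), is to skip the cache entirely for logged-in users so they never receive another user's cached response:

```nginx
location /api/ {
    proxy_pass http://app_backend;
    proxy_cache api_cache;
    proxy_cache_valid 200 10m;

    # Hypothetical cookie name. Requests carrying it neither read
    # from the cache (bypass) nor get stored in it (no_cache).
    proxy_cache_bypass $cookie_session_id;
    proxy_no_cache     $cookie_session_id;

    add_header X-Cache-Status $upstream_cache_status;
}
```

With this in place, X-Cache-Status shows BYPASS for authenticated requests and HIT/MISS for anonymous ones.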
If we're running PHP applications like WordPress or Laravel, we need to configure Nginx to work with PHP-FPM.
First, let's install PHP-FPM:
```shell
sudo apt install php-fpm php-mysql php-curl php-gd php-mbstring php-xml php-zip
```
Check which PHP-FPM socket is available:
```shell
ls /run/php/
```
The output should show something like php8.3-fpm.sock (version may vary).
Now let's create a server block for a PHP application:
```shell
sudo nano /etc/nginx/sites-available/php-app.example.com
```
Add the following configuration:
```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;

    server_name php-app.example.com;

    root /var/www/php-app.example.com/public;
    index index.php index.html;

    # SSL certificates
    ssl_certificate /etc/letsencrypt/live/php-app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/php-app.example.com/privkey.pem;

    # Logging
    access_log /var/log/nginx/php-app.example.com.access.log main;
    error_log /var/log/nginx/php-app.example.com.error.log warn;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;

    # Main location
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    # PHP processing
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;

        # PHP-FPM optimizations
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_read_timeout 300;

        # Hide PHP version
        fastcgi_hide_header X-Powered-By;
    }

    # Deny access to .htaccess
    location ~ /\.ht {
        deny all;
    }

    # Static file caching
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff|woff2|ttf|svg|eot)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }
}
```
Remember to replace php8.3-fpm.sock with your actual PHP-FPM socket version.
For WordPress sites specifically, we can add these additional location blocks for better security and performance:
```nginx
# WordPress specific rules
location = /wp-login.php {
    limit_req zone=general burst=5 nodelay;
    fastcgi_pass unix:/run/php/php8.3-fpm.sock;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

# Disable PHP execution in uploads
location ~* /wp-content/uploads/.*\.php$ {
    deny all;
}

# Protect wp-config.php
location = /wp-config.php {
    deny all;
}

# Block access to sensitive files
location ~* /(wp-config\.php|readme\.html|license\.txt) {
    deny all;
}
```
URL redirects are essential for SEO, domain consolidation, and maintaining link integrity. Here are some common redirect patterns.
```nginx
# Option 1: Redirect www to non-www
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    return 301 https://example.com$request_uri;
}

# Option 2: Redirect non-www to www
# (use one option or the other, not both, or you create a redirect loop)
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    return 301 https://www.example.com$request_uri;
}
```
```nginx
# Remove trailing slashes (except for directories)
location ~ ^(.+)/$ {
    return 301 $scheme://$host$1;
}

# Add trailing slashes
rewrite ^([^.]*[^/])$ $1/ permanent;
```
```nginx
# Single URL redirect
location = /old-page {
    return 301 /new-page;
}

# Redirect with regex
location ~ ^/blog/(\d{4})/(\d{2})/(.*)$ {
    return 301 /posts/$1-$2-$3;
}

# Redirect entire directory
location /old-section/ {
    return 301 /new-section/;
}

# Map-based redirects for multiple URLs
# (map blocks must live in the http context, e.g. in nginx.conf)
map $uri $new_uri {
    /old-url-1 /new-url-1;
    /old-url-2 /new-url-2;
    /old-url-3 /new-url-3;
    default    "";
}

server {
    # ... other config ...
    if ($new_uri != "") {
        return 301 $new_uri;
    }
}
```
```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}
```
Use return 301 for permanent redirects (cached by browsers) and return 302 for temporary redirects.
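As a quick illustration of the difference (the paths here are hypothetical), a maintenance redirect should be temporary so browsers keep re-checking the original URL, while a renamed page should redirect permanently:

```nginx
# Temporary: browsers will not cache this, and will retry /shop later
location = /shop {
    return 302 /maintenance.html;
}

# Permanent: browsers may cache this mapping for a long time
location = /old-shop {
    return 301 /shop;
}
```

Because browsers cache 301s aggressively, test a new redirect with 302 first and switch to 301 once you are sure it is correct.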
After enabling HTTP/3 (QUIC), we should verify that it's working correctly.
Modern versions of curl support HTTP/3. We can test with:
```shell
curl -I --http3 https://example.com
```
If HTTP/3 is working, we should see HTTP/3 200 in the response.
If curl doesn't support --http3 , we may need to install a newer version or build it with HTTP/3 support.
1. Open the website in Chrome or Firefox
2. Open DevTools (F12 or right-click → Inspect)
3. Go to the Network tab
4. Reload the page
5. Right-click on the column headers and enable Protocol
6. Look for h3 or http/3 in the Protocol column
We can verify the Alt-Svc header is being sent correctly:
```bash
curl -I https://example.com | grep -i alt-svc
```
The output should include something like:
```
alt-svc: h3=":443"; ma=86400
```
This tells browsers that HTTP/3 is available on port 443.
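As a small sketch, the `ma` value in that header (max-age, in seconds) tells the browser how long to remember the HTTP/3 advertisement; it can be pulled out with standard tools. The header string below is the example from above:

```bash
# Parse the max-age (ma) value out of an Alt-Svc header value
header='h3=":443"; ma=86400'

# Extract the number after "ma=" — 86400 seconds = 24 hours,
# i.e. how long the browser may remember that HTTP/3 is available
max_age=$(echo "$header" | sed -n 's/.*ma=\([0-9]*\).*/\1/p')
echo "HTTP/3 advertised for ${max_age}s"
# → HTTP/3 advertised for 86400s
```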
We can also use online HTTP/3 checker tools to verify that a site supports HTTP/3.
Keep in mind that some networks and firewalls block UDP traffic, which can prevent HTTP/3 from working for certain users. HTTP/2 will be used as a fallback.
Browser Compatibility: HTTP/3 is supported in Chrome 87+, Firefox 88+, Edge 87+, and Safari 14+. Older browsers will automatically fall back to HTTP/2 or HTTP/1.1, so there's no downside to enabling HTTP/3.
Security is paramount when exposing services to the internet. Here are some additional security measures we can implement.
Add these location blocks to protect against common attack patterns:
```nginx
# Block access to sensitive files
location ~* \.(git|env|htaccess|htpasswd|ini|log|sh|sql|bak|swp)$ {
    deny all;
    return 404;
}

# Block PHP execution in uploads directory
location ~* /uploads/.*\.php$ {
    deny all;
    return 404;
}

# Block access to WordPress xmlrpc.php (if applicable)
location = /xmlrpc.php {
    deny all;
    return 404;
}

# Block common vulnerability scanners
# (444 is Nginx-specific: close the connection without sending a response)
location ~* (eval\(|base64_|php://input) {
    deny all;
    return 444;
}
```
Restrict access to admin areas or sensitive endpoints:
```nginx
location /admin/ {
    # Allow only specific IPs
    allow 192.168.1.0/24;
    allow 10.0.0.0/8;
    deny all;

    proxy_pass http://app_backend;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```
Add password protection to sensitive areas:
```bash
sudo apt install apache2-utils
sudo htpasswd -c /etc/nginx/.htpasswd admin
```
Then add the authentication to the server block:
```nginx
location /admin/ {
    auth_basic "Restricted Area";
    auth_basic_user_file /etc/nginx/.htpasswd;

    proxy_pass http://app_backend;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```
Combine rate limiting with connection limiting for basic DDoS protection:
```nginx
server {
    # ... other config ...

    # Limit connections per IP
    limit_conn addr 100;

    # Limit request rate
    limit_req zone=general burst=50 nodelay;

    location /api/ {
        # Stricter limits for API endpoints
        limit_req zone=api burst=20 nodelay;
        limit_conn addr 20;
        proxy_pass http://app_backend;
    }
}
```
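Note that the zone names used here (`addr`, `general`, `api`) have to be defined once at the `http` level before they can be referenced in server blocks. A sketch — the zone sizes and rates below are assumptions to adjust for your traffic:

```nginx
# In the http context (e.g. /etc/nginx/nginx.conf); sizes and rates are assumptions
http {
    # Connection-limit zone keyed by client IP
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    # Request-rate zones keyed by client IP
    limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=api:10m rate=5r/s;

    # ... server blocks ...
}
```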
For serious DDoS protection, consider using a CDN like Cloudflare or a dedicated WAF (Web Application Firewall).
Monitoring is essential for understanding server performance and identifying issues before they become critical.
Nginx comes with a built-in stub_status module that provides basic metrics. Add a status endpoint to your server configuration:
```nginx
# Add this inside a server block (or create a dedicated internal server)
location /nginx_status {
    stub_status on;

    # Restrict access to localhost and trusted IPs only
    allow 127.0.0.1;
    allow ::1;
    allow 10.0.0.0/8;
    deny all;
}
```
Access the endpoint to see metrics:
```bash
curl http://localhost/nginx_status
```
The output shows:
```
Active connections: 42
server accepts handled requests
 12345 12345 67890
Reading: 0 Writing: 3 Waiting: 39
```
| Metric | Description |
| --- | --- |
| Active connections | Current active client connections |
| accepts | Total accepted connections |
| handled | Total handled connections (should equal accepts) |
| requests | Total client requests |
| Reading | Connections reading request header |
| Writing | Connections sending response |
| Waiting | Keep-alive connections waiting for requests |
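Since handled should equal accepts, the difference between the two is the number of dropped connections — a useful health signal. A small sketch that computes it from stub_status output (the sample numbers below are made up; in practice, pipe in `curl -s http://localhost/nginx_status` instead):

```bash
# Sample stub_status output (hypothetical values; note accepts != handled here)
status='Active connections: 42
server accepts handled requests
 12345 12340 67890
Reading: 0 Writing: 3 Waiting: 39'

# Line 3 holds: accepts handled requests — dropped = accepts - handled
echo "$status" | awk 'NR == 3 { printf "dropped=%d\n", $1 - $2 }'
# → dropped=5
```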
For more advanced monitoring, consider using Prometheus with the nginx-prometheus-exporter or integrating with tools like Grafana, Datadog, or New Relic.
Regular log analysis helps identify issues and optimize performance.
```bash
# Count requests by status code
awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn

# Find top 10 requested URLs
awk '{print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -10

# Find top 10 client IPs
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -10

# Find slow requests (request time > 1s; assumes $request_time is the last field
# of your log format)
awk '$NF > 1 {print $0}' /var/log/nginx/access.log

# Count requests per hour
awk '{print $4}' /var/log/nginx/access.log | cut -d: -f1,2 | uniq -c
```
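The field numbers above assume the default "combined" log format. A quick way to confirm which awk field is which, using a hypothetical sample line:

```bash
# A sample access-log line in the default "combined" format (values are made up)
line='203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 1024 "-" "curl/8.5.0"'

# $1 = client IP, $4 = timestamp, $7 = request path, $9 = status code
echo "$line" | awk '{ print "ip=" $1, "time=" $4, "url=" $7, "status=" $9 }'
# → ip=203.0.113.7 time=[10/Oct/2024:13:55:36 url=/index.html status=200
```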
Nginx logs can grow quickly. Ensure logrotate is configured:
```bash
cat /etc/logrotate.d/nginx
```
A typical configuration looks like:
```
/var/log/nginx/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    postrotate
        [ -f /run/nginx.pid ] && kill -USR1 $(cat /run/nginx.pid)
    endscript
}
```
Here are some common Nginx issues and how to resolve them.
Always test configuration before reloading:
```bash
sudo nginx -t
```
If there's an error, the output will show the file and line number where the issue was found.
403 Forbidden errors are often caused by file permissions. Check that Nginx can read the files:
```bash
# Check file permissions
ls -la /var/www/example.com/html/

# Fix ownership if needed
sudo chown -R www-data:www-data /var/www/example.com/html/

# Fix permissions
sudo chmod -R 755 /var/www/example.com/html/
```
A 502 Bad Gateway usually means the upstream server is not responding:
```bash
# Check if the backend is running
sudo systemctl status your-app-service

# Check if the port is listening
sudo ss -tlnp | grep 3000

# Check Nginx error logs
sudo tail -f /var/log/nginx/error.log
```
A 504 Gateway Timeout usually means the backend is too slow to respond; increase the timeout values for slow backends:
```nginx
location / {
    proxy_pass http://app_backend;
    proxy_connect_timeout 300s;
    proxy_send_timeout 300s;
    proxy_read_timeout 300s;
}
```
Logs are invaluable for troubleshooting:
```bash
# View access logs in real-time
sudo tail -f /var/log/nginx/access.log

# View error logs in real-time
sudo tail -f /var/log/nginx/error.log

# Search for specific errors
sudo grep "error" /var/log/nginx/error.log | tail -50
```
Here's a handy reference for common Nginx configurations.
| Directive | Description | Example |
| --- | --- | --- |
| `listen` | Port to listen on | `listen 80;` |
| `server_name` | Domain names | `server_name example.com;` |
| `root` | Document root | `root /var/www/html;` |
| `index` | Default index files | `index index.html;` |
| `location` | URL matching | `location /api/ { }` |
| `proxy_pass` | Proxy to backend | `proxy_pass http://localhost:3000;` |
| `try_files` | File fallback | `try_files $uri $uri/ =404;` |
| `return` | Return response | `return 301 https://$host$request_uri;` |
| `rewrite` | URL rewriting | `rewrite ^/old$ /new permanent;` |
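As a sketch, most of these directives come together in a minimal static-site server block (the domain, paths, and backend port are placeholders):

```nginx
server {
    listen 80;
    server_name example.com;

    root /var/www/html;
    index index.html;

    # Serve files directly, fall back to 404
    location / {
        try_files $uri $uri/ =404;
    }

    # Proxy API traffic to a local backend
    location /api/ {
        proxy_pass http://localhost:3000;
    }
}
```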
| Modifier | Description | Priority |
| --- | --- | --- |
| `=` | Exact match | Highest |
| `^~` | Preferential prefix | High |
| `~` | Case-sensitive regex | Medium |
| `~*` | Case-insensitive regex | Medium |
| (none) | Prefix match | Lowest |
Example priority order:
```nginx
location = /exact { }        # Matches only /exact
location ^~ /static/ { }     # Prefix, stops regex search
location ~ \.php$ { }        # Case-sensitive regex
location ~* \.(jpg|png)$ { } # Case-insensitive regex
location /prefix/ { }        # Standard prefix match
location / { }               # Default fallback
```
| Variable | Description |
| --- | --- |
| `$host` | Request host header |
| `$uri` | Request URI without query string |
| `$request_uri` | Full original request URI |
| `$args` | Query string |
| `$remote_addr` | Client IP address |
| `$scheme` | Request scheme (http/https) |
| `$server_name` | Server name |
| `$server_port` | Server port |
| `$http_*` | Any HTTP header |
| `$upstream_cache_status` | Cache hit/miss status |
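Several of these variables can be combined into a custom log format. A sketch — the format name `timing` is an assumption, and `$status`/`$request_time` are additional built-in Nginx variables not listed above:

```nginx
# Custom log format using the variables above (format name is a placeholder)
log_format timing '$remote_addr - $host "$request_uri" '
                  '$scheme status=$status '
                  'cache=$upstream_cache_status '
                  'rt=$request_time';

access_log /var/log/nginx/timing.log timing;
```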
```nginx
# Modern SSL configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;

# Session caching
ssl_session_timeout 1d;
ssl_session_cache shared:TLS:10m;
ssl_session_tickets on;

# OCSP Stapling
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 8.8.8.8 valid=300s;
```
| Tip | Configuration |
| --- | --- |
| Enable Gzip | `gzip on;` |
| Enable caching | `proxy_cache zone_name;` |
| Use keepalive | `keepalive_timeout 60s;` |
| Buffer responses | `proxy_buffering on;` |
| Use sendfile | `sendfile on;` |
| Enable HTTP/2 | `http2 on;` |
| Enable HTTP/3 | `listen 443 quic;` |
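Taken together, these tips map onto an `http`-block sketch like the following (the cache path, zone name, and backend are placeholders, and the SSL certificate directives required by the `ssl`/`quic` listeners are omitted for brevity):

```nginx
http {
    sendfile on;
    keepalive_timeout 60s;

    gzip on;
    gzip_types text/css application/javascript application/json;

    # Placeholder cache path and zone; adjust for your setup
    proxy_cache_path /var/cache/nginx keys_zone=zone_name:10m max_size=1g;

    server {
        listen 443 ssl;
        listen 443 quic;   # HTTP/3 (requires Nginx 1.25+)
        http2 on;          # HTTP/2

        location / {
            proxy_cache zone_name;
            proxy_buffering on;
            proxy_pass http://app_backend;
        }
    }
}
```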
Nginx is an incredibly versatile tool that can serve static files, act as a reverse proxy, load balance traffic, and much more. With the configurations and examples in this guide, we should have a solid foundation for deploying and managing Nginx in production environments.
TL;DR - Key Takeaways

- Install Nginx from the official repository for the latest features (including HTTP/3)
- Use the optimized nginx.conf provided for security and performance
- Always enable HTTPS with Let's Encrypt for production sites
- Configure rate limiting and security headers to protect your server
- Enable HTTP/2 and HTTP/3 for modern performance
- Set up monitoring with stub_status and log analysis
- Always run `nginx -t` before reloading configuration
Remember to always:
- Test configurations before reloading (`nginx -t`)
- Use HTTPS for all production websites
- Keep Nginx and SSL certificates up to date
- Monitor logs for issues and security threats
- Implement rate limiting and other security measures
I hope this guide helps you get the most out of Nginx. Happy serving!