Nginx
High-performance web server, reverse proxy, and load balancer
Overview
Nginx (pronounced "engine-x") is a high-performance HTTP server and reverse proxy. Created by Igor Sysoev (development began in 2002, first public release in October 2004) to solve the C10K problem (handling 10,000+ concurrent connections), it now powers a significant share of the internet's busiest sites.
Event-driven architecture
Nginx uses an event-driven, asynchronous, non-blocking architecture. Instead of spawning a thread or process per connection (like Apache's prefork model), Nginx runs a small number of worker processes, each handling thousands of connections in a single event loop.
Core Master Process
Reads configuration, binds to ports, and manages worker processes. Runs as root (needed to bind port 80/443), then spawns workers that drop privileges.
Core Worker Processes
Each worker runs an event loop using epoll (Linux) or kqueue (BSD/macOS). A single worker can handle thousands of simultaneous connections without blocking. Set worker_processes auto; to match CPU core count.
Strength Why it's fast
No thread-per-connection overhead. No context switching. Connections are multiplexed within the event loop. Static file serving uses sendfile() for zero-copy I/O. Memory usage stays flat under load.
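The process and connection knobs mentioned above live at the top of nginx.conf. A minimal sketch — the connection count is illustrative, not a recommendation:

```nginx
# nginx.conf -- main (top-level) context
worker_processes auto;            # one worker per CPU core

events {
    worker_connections 4096;      # max simultaneous connections per worker
    # Nginx selects epoll (Linux) or kqueue (BSD/macOS) automatically
}

http {
    sendfile on;                  # zero-copy static file serving
    tcp_nopush on;                # send response headers and file start together
}
```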
Comparison Nginx vs Apache
Apache's default prefork model uses one process per connection — fine for low traffic but memory-hungry at scale. Apache's event MPM is closer to Nginx's model, but Nginx was built event-driven from the ground up. Nginx excels at static files and reverse proxying; Apache has richer .htaccess and module support.
Static Site Hosting
Nginx is one of the fastest static file servers available. A minimal server block is all you need to serve an HTML site, a single-page application, or a file download directory.
Basic server block
server {
listen 80;
server_name example.com www.example.com;
root /var/www/example.com/public;
index index.html index.htm;
# Try the exact URI, then as a directory, then fall back to 404
location / {
try_files $uri $uri/ =404;
}
# For single-page apps (React, Vue, etc.) -- route all paths to index.html
# location / {
# try_files $uri $uri/ /index.html;
# }
# Custom error pages
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
}
Gzip compression
Enable gzip to compress text-based responses. This typically reduces transfer size by 60-80% for HTML, CSS, and JavaScript.
# In the http {} block or server {} block
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_min_length 1024;
gzip_types
text/plain
text/css
text/javascript
application/javascript
application/json
application/xml
image/svg+xml;
# Note: Do not include font/woff2 -- WOFF2 is already compressed and gzip adds no benefit
Caching headers
Set cache headers so browsers and CDNs cache static assets. Use long cache times for fingerprinted assets (e.g., app.abc123.js) and short times for HTML.
# Cache static assets aggressively (1 year)
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
# Don't cache HTML (always fetch fresh)
location ~* \.html$ {
expires -1;
add_header Cache-Control "no-cache, no-store, must-revalidate";
}
Always use try_files instead of if statements for routing. Nginx's if directive is notoriously tricky — it does not behave like an if statement in a general-purpose language, and it can cause unexpected behavior inside location blocks. The Nginx wiki famously calls it "If is Evil."
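The difference can be sketched side by side; the rewrite-based variant is exactly the kind of pattern the wiki warns against:

```nginx
# Anti-pattern: routing with "if" inside a location
# location / {
#     if (!-e $request_filename) {
#         rewrite ^ /index.html last;   # fragile; interacts badly with location processing
#     }
# }

# Preferred: try_files expresses the same fallback declaratively and safely
location / {
    try_files $uri $uri/ /index.html;
}
```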
Reverse Proxying
The most common production use of Nginx is as a reverse proxy — sitting in front of application servers (Node.js, Python, Go, Java, etc.) and forwarding client requests to them. This offloads TLS termination, static file serving, caching, and load balancing from the application.
Basic proxy_pass
server {
listen 80;
server_name app.example.com;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Upstream blocks and load balancing
Use an upstream block to define a pool of backend servers. Nginx distributes requests across them.
upstream app_backends {
# Round-robin (default) -- each request goes to the next server
server 10.0.1.10:3000;
server 10.0.1.11:3000;
server 10.0.1.12:3000;
}
# Least connections -- send to the server with fewest active connections
upstream app_least {
least_conn;
server 10.0.1.10:3000;
server 10.0.1.11:3000;
}
# IP hash -- same client IP always goes to same backend (sticky sessions)
upstream app_sticky {
ip_hash;
server 10.0.1.10:3000;
server 10.0.1.11:3000;
}
server {
listen 80;
server_name app.example.com;
location / {
proxy_pass http://app_backends;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Proxy headers
| Header | Purpose |
|---|---|
| X-Real-IP | The actual client IP address. Without this, your app sees Nginx's IP (127.0.0.1) as the client. |
| X-Forwarded-For | Chain of all proxies the request passed through. Use $proxy_add_x_forwarded_for to append rather than replace. |
| X-Forwarded-Proto | Whether the original request was HTTP or HTTPS. Needed for apps that generate URLs or enforce HTTPS redirects. |
| Host | The original Host header from the client. Without this, the backend sees the upstream address instead of the domain name. |
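The flip side: when Nginx itself sits behind another trusted proxy (a CDN or load balancer), the realip module can restore the original client IP from these headers. A sketch, assuming the upstream proxy's address is 10.0.0.1:

```nginx
# Trust X-Forwarded-For only when the connection comes from the known proxy
set_real_ip_from 10.0.0.1;        # hypothetical CDN/load balancer address
real_ip_header X-Forwarded-For;
real_ip_recursive on;             # walk past trusted hops to the real client

# $remote_addr (and thus logs and X-Real-IP) now reflects the actual client
```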
WebSocket proxying
WebSocket connections require an HTTP upgrade handshake. Nginx must be explicitly configured to pass the Upgrade and Connection headers; otherwise it strips them and the WebSocket connection fails.
location /ws/ {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_read_timeout 86400s; # Keep WebSocket alive for 24h
proxy_send_timeout 86400s;
}
The proxy_http_version 1.1 directive is critical for WebSockets. HTTP/1.0 does not support the Upgrade mechanism. If you see WebSocket connections failing silently behind Nginx, this is almost always the cause.
For locations that handle both regular HTTP and WebSocket traffic, the official Nginx docs recommend using a map block instead of hardcoding Connection "upgrade":
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# Then in your location block:
# proxy_set_header Connection $connection_upgrade;
This sends Connection: upgrade for WebSocket requests and Connection: close for regular HTTP requests, which is safer for mixed-traffic locations like location /.
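With that map in the http {} block, a mixed-traffic location might look like this (backend address is illustrative):

```nginx
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    # $connection_upgrade comes from the map: "upgrade" for WebSocket
    # requests, "close" for plain HTTP requests
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
}
```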
SSL/TLS Termination
Nginx handles TLS encryption so your backend applications don't have to. Clients connect to Nginx over HTTPS; Nginx decrypts the traffic and forwards plain HTTP to the backend. This centralizes certificate management and offloads cryptographic work.
Complete SSL server block
# Redirect all HTTP to HTTPS
server {
listen 80;
server_name example.com www.example.com;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
http2 on; # Nginx 1.25.1+ syntax; older versions use "listen 443 ssl http2;"
server_name example.com www.example.com;
# Certificate and key
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
# Protocols -- TLS 1.2 and 1.3 only (no SSLv3, TLS 1.0, TLS 1.1)
ssl_protocols TLSv1.2 TLSv1.3;
# Ciphers -- strong, modern ciphers only
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers off;
# Session caching for performance
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;
# OCSP stapling -- serve certificate status inline, faster TLS handshake
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
resolver 1.1.1.1 8.8.8.8 valid=300s;
resolver_timeout 5s;
# HSTS -- tell browsers to always use HTTPS (2 years)
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
# Proxy to backend
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Obtaining certificates
Recommended certbot (Let's Encrypt)
The official ACME client from the EFF. Has an Nginx plugin that auto-configures the server block.
# Install and obtain cert
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com
# Auto-renewal (certbot installs a systemd timer)
sudo certbot renew --dry-run
Alternative acme.sh
Lightweight shell-based ACME client. No root required. Supports DNS-01 challenges for wildcard certs with dozens of DNS providers. Note: since v3.0 (August 2021), acme.sh defaults to ZeroSSL as its CA. To use Let's Encrypt instead, add --server letsencrypt or run acme.sh --set-default-ca --server letsencrypt.
# Install
curl https://get.acme.sh | sh
# Issue cert via webroot
acme.sh --issue -d example.com -w /var/www/example.com
# Issue wildcard via DNS (Cloudflare example)
export CF_Token="your-api-token"
acme.sh --issue -d example.com -d '*.example.com' --dns dns_cf
# Install cert to Nginx paths
acme.sh --install-cert -d example.com \
--key-file /etc/nginx/ssl/example.com.key \
--fullchain-file /etc/nginx/ssl/example.com.pem \
--reloadcmd "systemctl reload nginx"
Be careful with HSTS preload. Once your domain is on the HSTS preload list, browsers will never allow HTTP connections to it until you are removed. Removal is technically possible via hstspreload.org/removal/, but the process takes months and requires waiting for browser release cycles. The includeSubDomains requirement means all subdomains are affected. Only enable preload when you are certain every subdomain supports HTTPS.
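A staged rollout avoids locking yourself in prematurely. One cautious approach, sketched below — the max-age values are conventional choices, not requirements:

```nginx
# Stage 1: short max-age, no subdomains, no preload -- easy to back out
add_header Strict-Transport-Security "max-age=300" always;

# Stage 2 (after verifying every subdomain serves HTTPS): full policy
# add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
```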
Proxying Gitea/GitHub
Putting Nginx in front of a Git hosting service like Gitea, Forgejo, or GitLab is straightforward for the web UI and HTTPS git operations. The complication is SSH.
What works through Nginx
- Web UI — browsing repos, pull requests, issues, admin panel. Standard HTTP reverse proxy.
- HTTPS git clone/push — git clone https://git.example.com/user/repo.git works through Nginx like any other HTTP request.
- Git LFS — Large File Storage uses HTTP. Works through Nginx but needs a larger client_max_body_size.
What does NOT work through Nginx
Nginx is a Layer 7 (HTTP) proxy. It cannot proxy raw TCP protocols like SSH. When a user runs git clone git@git.example.com:user/repo.git, that SSH connection on port 22 goes directly to the host, bypassing Nginx entirely.
Nginx config for Gitea
server {
listen 443 ssl;
http2 on;
server_name git.example.com;
ssl_certificate /etc/letsencrypt/live/git.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/git.example.com/privkey.pem;
# Large uploads for Git LFS and repo pushes
client_max_body_size 512M;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Needed for Git over HTTP (smart HTTP protocol)
proxy_buffering off;
proxy_request_buffering off;
}
}
SSH passthrough options
Option 1 Gitea SSH on port 22
If Gitea is the only service on the host, configure Gitea to listen on port 22 directly (or use a Docker port mapping -p 22:22). Move the host's SSH to a different port (e.g., 2222) in /etc/ssh/sshd_config.
Option 2 Non-standard SSH port
Run Gitea SSH on a non-standard port (e.g., 2222). Users clone with ssh://git@git.example.com:2222/user/repo.git or configure their ~/.ssh/config to use port 2222 for that host.
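Users can hide the non-standard port in their SSH client configuration so the usual clone syntax keeps working (hostname and port here match the example above):

```
# ~/.ssh/config
Host git.example.com
    Port 2222
    User git
```

After this, git clone git@git.example.com:user/repo.git works without spelling out the port.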
Option 3 SSH passthrough with Docker
When running Gitea in Docker, a common pattern is to install Gitea's SSH shim on the host. The host's git user forwards SSH connections to the Gitea container. This allows both the host's SSH (for admin) and Gitea's SSH (for git) to coexist on port 22. See the Gitea documentation on "SSH container passthrough."
There is no way to make Nginx proxy SSH traffic. Nginx operates at L7 (HTTP). SSH is a completely different protocol. If you need a TCP/L4 proxy for SSH, look into HAProxy or Nginx's stream module (a separate module that must be compiled with --with-stream; many distro packages include it, but it is configured outside the http {} block).
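For completeness, a sketch of what that stream-module approach might look like — addresses and ports are assumptions, and note that a plain L4 forward hides the client IP from the backend unless PROXY protocol is configured on both ends:

```nginx
# nginx.conf -- top level, OUTSIDE the http {} block
# Assumes the stream module is available and Gitea SSH listens on 10.0.1.100:2222
stream {
    server {
        listen 2222;                  # public SSH port for git operations
        proxy_pass 10.0.1.100:2222;   # raw TCP forward -- no HTTP involved
    }
}
```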
Proxying Proxmox
Proxmox VE's web UI is notoriously tricky to reverse proxy. The main UI works with a basic proxy_pass, but the noVNC/SPICE console uses WebSocket connections that break without specific Nginx configuration.
The console problem
When you open a VM or container console in Proxmox, the browser opens an iframe that connects to a WebSocket endpoint (typically /vncwebsocket or /termproxy). If Nginx doesn't forward the Upgrade and Connection headers, the WebSocket handshake fails and you see a blank console or a "connection refused" error.
Working Proxmox reverse proxy config
upstream proxmox_backend {
server 10.0.1.100:8006;
}
server {
listen 443 ssl;
http2 on;
server_name proxmox.example.com;
ssl_certificate /etc/letsencrypt/live/proxmox.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/proxmox.example.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
# Proxmox uploads (ISO images, backups)
client_max_body_size 4G;
# Main Proxmox web UI and API
location / {
proxy_pass https://proxmox_backend;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Required for WebSocket (noVNC console)
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Disable buffering for real-time console output
proxy_buffering off;
# Long timeouts for console sessions
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
# Proxmox uses a self-signed cert by default
proxy_ssl_verify off;
}
}
Key directives explained
| Directive | Why it matters |
|---|---|
| proxy_http_version 1.1 | Required for WebSocket upgrade. HTTP/1.0 does not support the Upgrade mechanism. |
| Upgrade $http_upgrade | Passes the WebSocket upgrade header from the client to the backend. Without it, Nginx strips the header and the console fails. |
| Connection "upgrade" | Tells the backend to switch from HTTP to the WebSocket protocol. |
| proxy_buffering off | Disables response buffering. Console output must stream in real-time, not be batched. |
| proxy_ssl_verify off | Proxmox uses a self-signed certificate on port 8006. Nginx would reject it without this. In production, consider adding Proxmox's CA to Nginx's trust store instead. |
| proxy_read_timeout 3600s | Default is 60s. A console session that's idle for 60 seconds would be killed. Set this high. |
Note the proxy_pass https:// (not http://). Proxmox's API listens on HTTPS (port 8006) with a self-signed certificate. If you use http://, the connection will fail because Proxmox does not serve plain HTTP.
Proxmox uses /api2/json/ for its REST API and /vncwebsocket for console WebSocket connections. You don't need separate location blocks for these — a single location / with WebSocket support covers everything. If you want to restrict API access, you can add auth or IP restrictions to location /api2/.
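If you do restrict the API path, a sketch might look like the following — the allowed network range is an assumption, and the WebSocket headers must stay, because the console endpoint also lives under /api2/:

```nginx
# Restrict the Proxmox API to an internal network (range is illustrative)
location /api2/ {
    allow 10.0.0.0/8;
    deny all;
    proxy_pass https://proxmox_backend;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    # The noVNC console WebSocket is under /api2/, so keep the upgrade headers
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_ssl_verify off;
}
```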
Common Pitfalls
Gotcha client_max_body_size
Default is 1 MB. Any upload larger than this gets a 413 Request Entity Too Large error. This catches everyone who puts Nginx in front of a file upload service, Git server, or CMS. Set it explicitly:
client_max_body_size 100M;
Gotcha proxy_read_timeout
Default is 60 seconds. Long-running requests (report generation, large file downloads, SSE streams) get killed with a 504 Gateway Timeout. Increase for slow backends:
proxy_read_timeout 300s;
proxy_send_timeout 300s;
Performance Buffer sizes
If your backend sends large headers (big cookies, long JWTs), Nginx logs "upstream sent too big header" and returns 502 Bad Gateway. Increase buffer sizes:
proxy_buffer_size 16k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
Performance Upstream keepalive
By default, Nginx opens a new TCP connection to the backend for every request. For high-traffic proxying, enable keepalive connections to the upstream:
upstream backend {
server 127.0.0.1:3000;
keepalive 32;
}
location / {
proxy_pass http://backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
}
Operations Forgetting to reload
Editing config files does nothing until you reload Nginx. Always test the config first, then reload:
# Test config syntax (catches errors before they cause downtime)
nginx -t
# Reload gracefully (no dropped connections)
nginx -s reload
# Or via systemd
systemctl reload nginx
nginx -s reload is a graceful reload — existing connections finish, new connections use the new config. It does not restart the process. There is zero downtime.
Always run nginx -t before nginx -s reload. A reload with a broken config is rejected and the old config keeps running, but a full restart (e.g., systemctl restart nginx) with a broken config fails to start and takes your site down. The -t flag validates without applying changes.
Production Checklist
- TLS everywhere — redirect HTTP to HTTPS. Use TLS 1.2+ only. Disable SSLv3, TLS 1.0, and TLS 1.1.
- HSTS header — add Strict-Transport-Security with a long max-age to prevent protocol downgrade attacks.
- Set client_max_body_size — the 1 MB default will break file uploads. Set it per-server or per-location based on your needs.
- Proxy headers — always set X-Real-IP, X-Forwarded-For, X-Forwarded-Proto, and Host when proxying.
- WebSocket support — if your app uses WebSockets, configure the Upgrade and Connection headers and proxy_http_version 1.1.
- Timeouts — set proxy_read_timeout and proxy_send_timeout appropriate for your backend's response times.
- Gzip compression — enable for text-based content types. Check that gzip_types includes your response formats.
- Access and error logs — configure log rotation (logrotate) to prevent /var/log/nginx/ from filling your disk.
- worker_processes auto — match worker count to CPU cores. Don't hardcode a number unless you have a specific reason.
- worker_connections — default is 512. Set to 1024 or higher for production. Each worker can handle this many simultaneous connections.
- Rate limiting — use limit_req_zone and limit_req to protect against brute-force and DDoS attacks on login endpoints.
- Security headers — add X-Frame-Options, X-Content-Type-Options, Referrer-Policy, and Content-Security-Policy as appropriate.
- Test before reload — always nginx -t && nginx -s reload. Never blind-reload in production.
- Certificate auto-renewal — verify certbot or acme.sh timers are running. A forgotten renewal means an expired cert and a site outage.
- Separate server blocks — one file per domain in /etc/nginx/sites-available/, symlinked to sites-enabled/ (Debian/Ubuntu convention). On RHEL/CentOS or upstream Nginx packages, use /etc/nginx/conf.d/*.conf instead. Don't cram everything into nginx.conf.
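The rate-limiting item above is the only checklist entry without an example elsewhere in this page. A sketch — the zone name, rate, and /login path are assumptions to adapt to your application:

```nginx
# In the http {} block: track clients by IP, allow 5 requests/minute
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;

# In the server {} block: apply the limit to the login endpoint only
location /login {
    limit_req zone=login burst=10 nodelay;   # absorb small bursts, reject the rest (503)
    proxy_pass http://127.0.0.1:3000;
}
```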