NGINX
The load balancer is the only server with services listening on the Internet. That means the OTHER servers need specific configuration to keep their services from listening on any public networks that may be attached.
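For example, on each backend server you can bind NGINX to the private interface only. A minimal sketch; the 10.0.0.11 address and the liveserver name are assumptions, substitute your own private addressing:

# on a backend server: bind to the private interface only,
# so nothing answers on a public address
server {
    listen 10.0.0.11:80;
    server_name liveserver;
}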
Many writeups exist explaining how to turn NGINX into a load balancer. I use snippets to define the potential destinations for each location matched in the NGINX configuration. This way I can re-use the snippets in multiple locations.
/etc/nginx/snippets/ssl.conf
Out of scope for this how-to, but it keeps SSL configurations neatly out of site configurations. Maybe I'll write another article explaining how to get an A+ with SSL Labs.
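As a taste of what lives in that snippet, here is a minimal sketch; the protocol, session, and HSTS choices below are my assumptions, not a complete hardening recipe:

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
add_header Strict-Transport-Security "max-age=63072000" always;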
/etc/nginx/snippets/varnish.conf
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header REMOTE_ADDR $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;

proxy_hide_header X-Powered-By;

# more_clear_headers comes from the headers-more module
# (ngx_http_headers_more), which must be compiled in or loaded dynamically
more_clear_headers Server;
more_clear_headers X-Logged-In;
more_clear_headers Via;
more_clear_headers X-Varnish;
more_clear_headers X-Cache;
more_clear_headers X-Cache-Hits;

proxy_read_timeout 300;
proxy_pass http://upstream_varnish;
proxy_redirect http://upstream_varnish https://$host;
/etc/nginx/snippets/liveserver.conf
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header REMOTE_ADDR $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;

proxy_pass http://upstream_liveserver;
proxy_read_timeout 3600;
proxy_connect_timeout 300;
proxy_redirect http://upstream_liveserver https://$host;

# HTTP/1.1 with a cleared Connection header enables keepalive to the upstream
proxy_http_version 1.1;
proxy_set_header Connection "";
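Since this snippet already speaks HTTP/1.1 with a cleared Connection header, the matching upstream block can also keep idle connections open. A sketch of the complementary directive; the pool size of 16 is an assumption, tune it to your traffic:

upstream upstream_liveserver {
    least_conn;
    server liveserver weight=5;
    keepalive 16; # assumed pool of idle upstream connections
}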
/etc/nginx/sites-available/default.conf
# upstream blocks belong in the http context, so they sit at the top
# of the file, outside of any server block
upstream upstream_liveserver {
    least_conn; # only if your backend servers are NGINX
    server liveserver weight=5;
    # server liveserver1;
    # server liveserver2 backup;
}

upstream upstream_varnish {
    server varnish weight=5;
    # server varnish1;
    # server varnish2 backup;
}

# port 80 forwards all traffic to 443
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name www.yourdomain.com yourdomain.com;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name www.yourdomain.com yourdomain.com;

    ssl_certificate /path/to/your/fullchain.pem;
    ssl_certificate_key /path/to/your/privkey.pem;
    include /etc/nginx/snippets/ssl.conf;

    location / {
        include /etc/nginx/snippets/varnish.conf;
    }

    location ~ ^/administrator {
        include /etc/nginx/snippets/liveserver.conf;
    }
}
Though simplified for this write-up, this is the basic configuration of the load balancer. It's nothing special or complicated. Populate your upstream servers, update the paths to your certificates, set your server name, maybe add some access and error logs, and you're good to go.
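If you do want per-site logs, a couple of lines inside the 443 server block will do; the paths here are just assumptions:

access_log /var/log/nginx/yourdomain.access.log;
error_log /var/log/nginx/yourdomain.error.log warn;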
It might look like ALL traffic outside of /administrator is being cached with Varnish, but we'll handle cached and non-cached URLs in the next step.
Creating an NGINX configuration file on the actual backend server is well-documented elsewhere.