This is a tutorial for creating a secure HTTPS setup with nginx and Let’s Encrypt. With the introduction of Let’s Encrypt, it is now possible to add a trusted SSL certificate to all of your sites for free, which is a fantastic development. Not only can the encryption a certificate provides keep your e-commerce clients’ transaction data safe, but with privacy becoming more and more of a concern these days, encrypting even mundane traffic helps protect the privacy of visitors to your sites.
But as with many things in life, there is setting up HTTPS, and there is setting up HTTPS well. Many encryption ciphers have been found to be vulnerable, and it is important to make sure a potentially hostile client cannot request them. Let’s Encrypt certificates are also only valid for 90 days, so setting up automatic renewal is handy to ensure those certs never lapse. Fortunately, nginx (version 1.8.1 is used in these examples) has some simple configuration options to tighten up security! I’ll cover the nginx side of the setup in this post, and get Let’s Encrypt installed and your first cert in place in part 2.
This guide assumes you already have nginx up and running on your server in some capacity; check out http://robido.com/nginx/how-to-install-web-server-centos-7-using-nginx-php-fpm-mariadb-firewalld/ if you’re not quite there yet. First, we’ll create a Diffie-Hellman parameters file, which is used for Perfect Forward Secrecy: a pretty cool property that means even if someone records the entire initial SSL handshake, they still can’t later recover the session key for decryption. Generating it can stress a server with only 1 or 2 CPU cores (and it’ll take a while in any event), so if you’re on a low-end VPS, you may want to wait until later in the evening to run this!
Do the following as root (the paths might differ here on Ubuntu/Debian):
openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096
You should now have a new file at /etc/ssl/certs/dhparam.pem, and next we’ll tell nginx to use it, as well as tighten up some other SSL/TLS related settings. Go ahead and create a new file as root in /etc/nginx with your favorite text editor (I like “secure-ssl.conf”, but as long as it ends in “.conf” you’re good) and paste in the following:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
#Alternate cipherlist if we only want to support more modern browsers and OS combos to further strengthen HTTPS:
I’ll break down each of these options in a little more detail, in case you’re the sort of person who doesn’t like blindly pasting configurations with no idea what’s going on 🙂
SSLv2 is insecure, and SSLv3 must also be disabled: a downgrade attack can force a client negotiating TLS1.0 to fall back to SSLv3 (the basis of the POODLE vulnerability), so we explicitly enable only TLSv1, TLSv1.1, and TLSv1.2.
Under certain SSL/TLS protocol versions, the client’s cipher preference is honored instead of the server’s. This is a vulnerability if a hostile client intentionally requests a weak cipher, so this directive forces nginx to use only the ciphers we specify below, in our order of preference.
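In secure-ssl.conf, that behavior comes from a single directive:

```nginx
# Use the server's cipher ordering, not the client's
ssl_prefer_server_ciphers on;
```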
This tells nginx to use the Diffie-Hellman parameters file we generated earlier for PFS.
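The directive itself just points at the file we generated with openssl earlier (adjust the path if you wrote it elsewhere):

```nginx
# Use our custom 4096-bit Diffie-Hellman parameters for PFS key exchanges
ssl_dhparam /etc/ssl/certs/dhparam.pem;
```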
You’ll note a couple of options here for this parameter: the commented-out, much smaller list can be used if you do not care about supporting older clients (in particular, Windows XP and IE6). Most of the long string in the other ssl_ciphers option deals with ordering ciphers from most to least secure (and explicitly discarding ciphers known to be totally broken at this point). In essence, this list attempts to prioritize:
- ECDHE+AESGCM ciphers are used whenever possible. These are TLSv1.2 ciphers with no currently known weaknesses.
- Perfect Forward Secrecy ciphers are preferred, with ECDHE over DHE.
- AES128 is used over AES256. This may seem backwards, but currently, AES128 has no serious known vulnerabilities, and is noticeably quicker than AES256. This may change as CPU power continues to increase.
- AES is preferred over 3DES, due to the weaknesses in 3DES in many situations. In the shorter cipher list, 3DES is totally excluded.
- RC4 is removed entirely due to the large number of vulnerabilities; RC4 should be considered entirely broken at this point.
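A cipher list following the priorities above might look like the sketch below; treat the exact strings as illustrative rather than canonical, since recommended lists shift over time (the cipher names themselves are standard OpenSSL names):

```nginx
# Long list: ECDHE+AESGCM first, then DHE, AES128 before AES256,
# 3DES last for legacy clients; broken ciphers explicitly excluded.
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:AES128-GCM-SHA256:AES128-SHA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!RC4:!MD5';

# Alternate short list if only modern clients matter (no 3DES at all):
#ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384';
```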
These parameters enable nginx’s session cache. Because the majority of the overhead of an HTTPS connection is in the initial handshake, caching connection parameters can vastly improve response times for subsequent hits from the same client. Here we create a 50MB cache (the documentation estimates that 1MB can store around 4,000 sessions, so this may be overkill) and time out SSL sessions after 30 minutes.
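The 50MB shared cache and 30-minute timeout translate to two directives:

```nginx
# Shared cache ("SSL" is just a label) usable by all worker processes
ssl_session_cache shared:SSL:50m;
ssl_session_timeout 30m;
```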
These options implement OCSP stapling, a way to speed up the initial SSL handshake: the server announces to connecting clients that it can hand over a cached copy of the OCSP record, something the client would otherwise have to fetch from the CA over a separate connection (one that often times out or fails entirely!).
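A typical stapling setup looks like the following; nginx needs a DNS resolver of its own to fetch OCSP responses from the CA, and the Google public DNS addresses here are just an example (your provider’s resolvers work too):

```nginx
# Fetch and staple OCSP responses, and verify them against the CA chain
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
```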
Once these are all in place, go ahead and test your nginx configuration with “sudo nginx -t”, and if all looks good, issue a “sudo nginx -s reload” to pick up the new configuration. nginx is now configured to securely handle HTTPS traffic, but we still need to get it a certificate and key to use, and set up a secure vhost (your nginx package may have come with a default SSL vhost set up, but we’ll blow that out to make sure it doesn’t override our settings here).
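For reference, here is a sketch of how the shared config eventually slots into an HTTPS vhost. The server_name, root, and certificate paths are placeholders; obtaining the real certificate files from Let’s Encrypt is covered in part 2:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Placeholder paths -- replace with your actual cert and key in part 2
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Pull in all of the hardening directives from this post
    include /etc/nginx/secure-ssl.conf;

    root /var/www/example.com;
}
```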