Nginx security hardening guide
Learn how to secure your nginx configuration with this hardening guide. It includes examples and tips to implement security measures step by step.
Why harden your nginx configuration?
Nginx is known for its speed and modularity. It even ships with multiple security safeguards to prevent or limit common attacks. That’s a great start, but not enough for a production system that stores sensitive information. And even if you don’t host sensitive information, getting hacked is no fun, right?
In this guide we go step by step and secure the nginx configuration. With each step a small security measure is implemented, making the hosted websites a bit more secure.
Warnings and tips
Before we start making changes to the system, it is a good idea to have a recent backup. Each step might break websites, so take it easy with the deployment and monitor your log files.
- Make a backup of the nginx configuration files (see the example after this list)
- Ensure that access logging is enabled (should not be disabled)
- Some configuration parameters need to be set in the http definition, while others are placed in a server or location definition
- Before restarting, use nginx -t to test the new configuration
- For some changes a restart of nginx is needed, while for most a reload is sufficient
- As every system is different, have a good look at your situation before making changes
- Consider the type of clients that need to connect to your web server, such as desktop/mobile and their support for newer technologies
- Apply hardening to your full application stack (OS, firewall, backend applications), as nginx is just one part of it
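For the backup mentioned above, a simple copy of the configuration directory is usually enough (the destination path is just an example):
sudo cp -a /etc/nginx /etc/nginx.backup-$(date +%F)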
Generic changes
Test your configuration
Before we make any changes, let’s start with testing the configuration.
nginx -t
Want to see the full configuration, including a test?
nginx -T
If no warnings or errors show up, we can continue!
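Once the test passes, most changes in this guide can be activated with a reload instead of a full restart. Depending on your system, one of these commands should do it:
sudo nginx -s reload
# Or, on systems that use systemd
sudo systemctl reload nginx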
Disable nginx version number
Typically it is better to reveal as little as possible when it comes to running software components. A good start with nginx is to hide the version number. This is done in the http definition within the configuration. This section is usually part of the /etc/nginx/nginx.conf file.
http {
server_tokens off;
# Other HTTP configurations options
}
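To verify the change, check the response headers of your site (example.com is a placeholder). With server_tokens off, the Server header should only show the product name, without a version number.
curl -sI https://example.com | grep -i '^server:'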
HTTPS configuration
Most websites nowadays run on HTTPS. There is almost no reason to serve plain HTTP only, especially with SSL certificates being available for free.
Basic SSL configuration
The first thing to configure is the SSL certificate and the related key. These need to be obtained from a Certificate Authority (CA). This can be your own organization or an external one such as Let’s Encrypt.
http {
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /path/to/example.com.crt;
ssl_certificate_key /path/to/example.com.key;
# Other SSL settings (TLS versions, cipher suites)
}
}
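Since plain HTTP is then usually only needed to send clients to the HTTPS version of the site, a minimal redirect could look like this (reusing the example.com host from above):
http {
    server {
        listen 80;
        listen [::]:80;
        server_name example.com;
        # Permanently redirect all plain HTTP requests to HTTPS
        return 301 https://$host$request_uri;
    }
}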
Enable OCSP
Enable OCSP stapling to improve the performance of TLS connections.
Rationale to enable OCSP
OCSP is the abbreviation of Online Certificate Status Protocol. It checks the validity status of a certificate in real time, so a client does not have to consult a revocation list. OCSP stapling improves the performance of these validation checks by using a signed and time-stamped version of the OCSP response. This response is stored on the web server and refreshed on a regular schedule. The result of the verification is provided during the TLS handshake, which saves the client another validation step.
http {
server {
# OCSP
ssl_trusted_certificate /path/to/example.com.crt;
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 1.0.0.1 [2606:4700:4700::1111] [2606:4700:4700::1001];
}
}
Replace the resolver entries with name servers that you trust and that perform well.
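After a reload you can verify that a stapled response is being served (example.com is a placeholder). Note that the very first connection after a restart may not yet include a stapled response, as nginx fetches it in the background.
echo | openssl s_client -connect example.com:443 -status 2>/dev/null | grep -A 3 'OCSP'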
TLS versions
Disable older TLS versions.
Rationale to limit TLS protocols
Implement modern TLS protocol versions and disable those with known issues. Not sure which protocols are currently used by your clients? Log them using the $ssl_protocol and $ssl_cipher variables via the log_format directive.
http {
server {
ssl_protocols TLSv1.2 TLSv1.3;
}
}
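For the logging suggestion above, a minimal sketch could look like this (the tls_info format name and log path are just examples):
http {
    # Record the negotiated TLS version and cipher for each request
    log_format tls_info '$remote_addr [$time_local] "$request" $status '
                        'protocol=$ssl_protocol cipher=$ssl_cipher';
    access_log /var/log/nginx/access.log tls_info;
}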
Cipher suites
Use strong ciphers and consider performance.
Rationale to define cipher sets
A cipher suite is a set of algorithms. It helps to secure the network connection and uses the TLS protocol within the nginx configuration. When selecting the right set of ciphers, one has to look at ciphers that are considered to be secure, but also have a good performance.
http {
server {
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers off;
}
}
This set of ciphers comes from Mozilla and has good compatibility with clients. If you have a specific virtual host that only requires very modern clients to connect, consider disabling TLSv1.2 (see the earlier section) and using a more restricted set of ciphers.
In the past, many guides suggested letting the server decide which cipher to use. But when it comes to performance, the client can typically make a better decision. For that reason, ssl_prefer_server_ciphers is set to off.
Enable Kernel TLS offload
Enabling kernel TLS offload can improve performance and provide a better user experience.
Rationale to enable KTLS
KTLS is a method to offload the handling of TLS record encryption and decryption to the kernel itself, instead of a user-space process like nginx. This may increase performance, especially in combination with sendfile, as data no longer has to be copied to user space for encryption.
Requirements:
- Linux >= 4.13
- NGINX >= 1.21.4
- OpenSSL >= 3.0.0
http {
sendfile on;
server {
# Enable kernel TLS
ssl_conf_command Options KTLS;
}
}
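Whether KTLS is actually used depends on the kernel and on the OpenSSL version nginx was built against. Assuming a modular kernel, a quick check could look like this:
# Load the kernel TLS module and confirm it is available
sudo modprobe tls
lsmod | grep -w tls
# Show the OpenSSL version nginx was built with (should be 3.0.0 or newer)
nginx -V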
Restrict access
Limit access to resources by defining which systems or users can access them.
Using IP address
When some resources can easily be filtered by IP address, use a combination of allow directives with a final deny all.
http {
server {
location /mystatus {
stub_status on;
allow 1.2.3.4;
deny all;
break;
}
}
}
Using basic authentication
When multiple users (with different IP addresses) need to access a specific location, basic authentication could be used. This is definitely not the most secure type of authentication and access control, but for some endpoints it could be an additional security measure to restrict access. If there is an application running behind the endpoint, then use that to arrange authentication and authorization.
http {
server {
location /secret {
auth_basic "Restricted Content";
auth_basic_user_file /etc/nginx/.htpasswd;
# Other location directives
}
}
}
The .htpasswd file can be provisioned with a combination of usernames and passwords using the htpasswd command.
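A minimal example of provisioning the file (alice and bob are placeholder usernames; the htpasswd utility typically comes with the apache2-utils or httpd-tools package):
# Create the password file and add the first user (-c only on the first run)
sudo htpasswd -c /etc/nginx/.htpasswd alice
# Add another user to the existing file
sudo htpasswd /etc/nginx/.htpasswd bob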
Rate limiting
To protect system resources against aggressive clients, implement rate limiting. Typically this is done at the http definition in combination with the server or a particular location. It is also possible to define multiple rate limits, depending on the specific paths and how many HTTP requests are common for them.
http {
limit_req_zone $binary_remote_addr zone=globalratelimit:10m rate=10r/s;
server {
location / {
limit_req zone=globalratelimit burst=20;
# Other location directives
}
}
}
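As mentioned, multiple rate limits can be combined. A sketch with a stricter limit for a hypothetical login endpoint (the zone names, rates, and /login path are just examples):
http {
    limit_req_zone $binary_remote_addr zone=globalratelimit:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=loginratelimit:10m rate=1r/s;
    server {
        location / {
            limit_req zone=globalratelimit burst=20;
        }
        # Stricter limit for an endpoint that should rarely receive many requests
        location /login {
            limit_req zone=loginratelimit burst=5 nodelay;
        }
    }
}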
Protocols and methods
Disable older protocols
Most clients currently use the HTTP/1.1 or HTTP/2 protocol to connect to a web server. The much older HTTP/1.0 lacks a wide range of features:
- no data compression
- a limited set of HTTP status codes
- no support for multiple requests per connection
If you don’t need to support these very old clients, then you could consider blocking them to reduce data traffic.
http {
server {
location / {
# No longer accept HTTP/1.0 requests, show 426 (Upgrade Required). This needs to be in a location block, as we are adding headers.
if ($server_protocol = HTTP/1.0) {
# The 'always' parameter ensures the headers are also added to the 426 response
add_header Upgrade "HTTP/2.0" always;
add_header Connection "Upgrade" always;
return 426 "Upgrade Required: upgrade your client to support a more modern HTTP protocol";
break;
}
}
}
}
Define allowed HTTP request methods
When we host static files, we usually don’t need to support all HTTP request methods, such as POST, PUT, DELETE, and CONNECT. HEAD and GET alone are enough to serve the files. With the limit_except directive, nginx can deny access unless the request method is on the list of allowed methods.
http {
server {
location / {
# Deny access, unless it is GET (HEAD is included with GET)
limit_except GET {
deny all;
}
# Other location directives
}
}
}
Limit access to sensitive data
Most websites will have a combination of HTML files, CSS, and JavaScript. Other file types might be present in the directory structure, especially when using something like WordPress. If we want to restrict access to these file types, we can define a location in the server definition. It should be placed above the other location definitions, so it gets tested first. By using the break keyword, we tell nginx to stop processing the request when we have a match.
http {
server {
# Restrict access to some file types
location ~ \.(7z|asp|aspx|bak|bz|bz2|cer|cgi|conf|crt|gz|ini|jsp|key|log|pem|php|php7|rar|sh|sql|tar|txt|zip)$ {
return 403;
break;
}
}
}
Have a good look at the list before deploying it. If you host PHP, then you most likely want to remove that extension from the list. If you host a directory structure with text files (.txt), then remove that one as well. The dot itself is escaped, otherwise a request like /do-you-like-martini would be denied as well (.ini).
Blocking common exploits
The web is a great place, but unfortunately also home to malicious bots. With some automation they scan the web looking for vulnerable websites. Fortunately it is fairly easy to block many of these common attempts. If you want, you can even go a step further and block repeat offenders.
There are a few ways to set up filters to block malicious attempts. We like the method of using a map that compares the requested URI against a list of patterns. If there is a match, we can then decide what to do. Let’s start with creating the map. If you have just one virtual host, you can define it above the server definition. Another option is to create a separate file that we then include. For this example we use the latter.
map $request_uri $is_blocked_common_exploits_path {
"~*//" 1;
"~*(boot.ini|etc/passwd|self/environ)" 1;
"~*(%2e%2e|%252e%252e|%u002e|%c0%2e)" 1;
"~*(\.\./\.\./|\.\.\.|%252e%252e%252e)" 1;
"~*(~|`|<|>|:|;|{|}|\[|\]|\(|\))" 1;
default 0;
}
This map looks for suspicious paths, including double slashes, some system files, double-encoded dots, and finally single characters that are often part of a file inclusion or path traversal attack. The default 0; sets the value to zero if there is no match.
If a match is found (the comparison happens against $request_uri), the variable $is_blocked_common_exploits_path will be set to 1. The next step is to take action on the match.
http {
include /path/to/block_common_exploits.conf;
server {
location / {
# Blocked URLs from our generic set of common exploits
if ($is_blocked_common_exploits_path) {
return 403 "Request blocked.";
break;
}
}
}
}
Note: if you want to apply the rules to all virtual hosts, consider adding it at the highest level.
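To confirm the filter works, request one of the blocked patterns and check for a 403 response (example.com is a placeholder):
curl -i "https://example.com/etc/passwd"
# Expected result: a 403 response with the text "Request blocked."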
Configuration tests
External
SSL Labs
The well-known SSL server test from SSL Labs can help with testing your SSL configuration. It usually takes a few minutes to complete.
Security Headers
The Security Headers website provides a quick way to scan your website and test available response headers.
Additional hardening for nginx
There is more to do and this guide will be extended over time. Found something that should be included as well? Let us know!
- Define CSP
- Set headers
- Block clients without Accept-Encoding header
- Limit access logging
- Create an AppArmor profile
- Harden nginx systemd service unit
- Apply the nginx hardening profile for systemd