How to secure a Linux system
Every Linux system will benefit from more security, especially if it contains sensitive data. With so many resources available on the internet, one might think that securing Linux has become easy. We know it is not.
Linux system hardening takes a good amount of understanding of how the Linux kernel works, as well as of general operating system principles. In this guide, we will help you gain this understanding and provide you with tips and tools. The final result should be a secure Linux server or desktop system.
After completing this guide, you will know more about:
- What system hardening means specifically for Linux
- What steps can be taken to improve the overall security of your system
- Why technical audits are a powerful way to keep you secure
- How to do regular technical audits on Linux systems
Let’s start with Linux hardening!
Linux system security
Before we start, let’s do a quick introduction to the main subjects. After all, good understanding starts with knowing the key concepts.
No system can be considered secure if it has never been tested. One of the testing methods is performing a security audit. An audit is typically focused on business processes or on the implementation of technical security measures. The latter type of audit is also called a technical audit.
Audits often measure compliance. This luxury word is actually nothing more than a measure of how closely you follow a particular policy document or technical baseline. Your baseline may state that every system should have a firewall. Part of the compliance check is then to test for the presence of a firewall.
The process of improving your security defenses is called system hardening. This means the addition of new defenses and improving existing ones. It may even include the removal of components, to keep the system tidy and clean.
Linux hardening steps
So with system hardening, we focus on the presence of security measures for your system. There are many technical aspects to it, but there are a few key principles. Let’s have a look at them first.
Minimizing your resources
Every system has a footprint. Similar to a real footprint, it represents the size of the system when it comes to risk. The bigger the footprint, the more risk is involved.
Reduce installed software
Typically we can remove things from the server that are no longer needed. For example, some software package might have been installed to do testing. If this package and its software components are no longer used, then it typically makes sense to remove it. A software package that is not installed cannot add to our risk.
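For example, on a Debian or Ubuntu system, a quick review could look like the sketch below (the package name in the comment is made up; adapt the commands to your distribution's package manager):

```shell
# Show installed packages sorted by size, to spot removal candidates
dpkg-query -W -f='${Installed-Size}\t${Package}\n' | sort -rn | head -20

# Remove a package that is no longer needed (hypothetical example name)
#   apt purge old-test-tool

# Remove automatically installed dependencies that are now unused
#   apt autoremove
```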
Disable and remove unused accounts
Too many companies still have accounts active that should not be there at all. Think of the colleague who left, while the manager forgot to request deletion of the accounts and related permissions. It therefore makes sense to have technical controls in place to disable accounts. When a colleague leaves the company, have a tool like Ansible disable the account.
For technical teams, it might be good to have strict rules on the usage of accounts. For example, is a personal account allowed to run software on a system for more than a single task? Too often, a developer or system administrator starts a process under their own user instead of a functional account. After the colleague leaves the company, the account is terminated. At some point the processes started by that account stop working and a business process is disrupted.
Regular audits and cleanups can reduce these risks. So strict hygiene when it comes to disabling and removing unused accounts may help.
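Part of this hygiene can be scripted. A minimal sketch using standard files and tools (the account name in the comment is hypothetical; adapt the checks to your own policy):

```shell
#!/bin/sh
# List accounts that still have a real login shell
grep -Ev '(nologin|false)$' /etc/passwd

# List accounts with UID 0; only root should appear here
awk -F: '$3 == 0 {print $1}' /etc/passwd

# Lock an account instead of deleting it right away (requires root):
#   usermod --lock --expiredate 1 olduser
```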
Default deny
When deploying services, go for a ‘default deny’ type of access. That means no one gets access, except those that are specifically listed. This can apply to file permissions, firewall rules, and access to data. For every new service, consider if this principle can be applied.
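As a sketch with nftables (requires root; port 22 is just an example of an explicitly listed service):

```shell
# Create a table and an input chain whose policy is 'drop': default deny
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'

# Then list only what is explicitly allowed
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input iif lo accept
nft add rule inet filter input tcp dport 22 accept   # example: SSH only
```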
Remove identification and application versions
Too many software components proudly share their name and version. While it looks innocent, it provides attackers with valuable data. It is not that hard to obtain the operating system that is used. When attackers also learn which software components are in use, it becomes much easier to see if there are specific attacks available. Hiding software banners and version numbers will also stop most automated attack scripts, as they often go on the hunt for a specific version.
Relevant post: Hide the nginx version
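To verify what your own web server reveals (example.com is a placeholder for your own site), and to hide the nginx version:

```shell
# Check the Server response header of your site (replace example.com)
curl -sI https://example.com | grep -i '^server:'

# In nginx.conf, hide the version number from headers and error pages:
#   server_tokens off;
```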
Adding new security measures
Prevention or detection?
After reducing the footprint of the system, the next step is to add relevant security measures. Typically you want to select them by category first. This category defines if a measure helps with prevention or focuses on detection. For example, an antivirus scanner typically will do detection. If it has on-access scanning and can save your system from an infection, it also helps with prevention. A firewall denies access to unneeded network ports, so this is prevention. While prevention sounds like the best option of the two, that is not necessarily true. The reason is that not everything can be prevented. So security defenses that focus on detection are needed as well.
Topics of interest
When adding new security measures, there is a lot to choose from. Let’s look at some of the available technical measures you can take.
The Linux kernel
The Linux kernel itself is responsible for policing who gets access to what resources. This is a difficult task, as there needs to be an optimal balance between performance, stability, and security. The kernel can be configured in two ways. The first is during compilation, the build process to create the kernel and its modules. The second is at runtime, using the sysctl command or its related /proc file system. Learning about the available kernel security features may be a valuable step in securing your Linux system.
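A few commonly recommended sysctl settings as a sketch (requires root; review each value before applying it to your systems):

```shell
# Hide kernel pointers in /proc, even from privileged reads
sysctl -w kernel.kptr_restrict=2

# Restrict access to the kernel log buffer (dmesg) to root
sysctl -w kernel.dmesg_restrict=1

# Limit ptrace so processes can only trace their own children
sysctl -w kernel.yama.ptrace_scope=1

# Make a setting persistent across reboots:
#   echo 'kernel.kptr_restrict = 2' >> /etc/sysctl.d/99-hardening.conf
```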
Securing processes and their capabilities
Processes are the workers on the system. They typically have a clear task to fulfill, often with some form of data processing involved. As processes may have access to sensitive data, one educated choice to make in this area is how the Linux kernel handles core dumps. Core dumps are files that capture how a part of memory looked before an application or process crashed. If you are dealing with a system with a lot of sensitive data, then usually you want to restrict the creation of these files.
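One way to restrict them, as a sketch: the ulimit setting affects the current shell and its children, while the commented lines show system-wide knobs that require root.

```shell
#!/bin/sh
# Disable core dump files for this shell and the processes it starts
ulimit -c 0

# System-wide settings (require root):
#   sysctl -w fs.suid_dumpable=0                       # no dumps from setuid programs
#   echo '* hard core 0' >> /etc/security/limits.conf  # hard limit for all users
```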
Network traffic filtering
When hardening a Linux system, one of the first steps is to look at the network traffic that comes in and goes out. If you are using a cloud server, then your neighboring systems might not be as friendly as the ones in your own home network. So it is wise to filter out unwanted network traffic, or better, to only allow wanted traffic.
With network traffic, two directions are possible: incoming and outgoing. Incoming traffic comes from other systems that want to talk with your system. Filtering it is also called ingress filtering, where you want to make sure that the source address (the sender) is valid. Let’s say your system has two network interfaces. Interface 1 is connected to your internal network (e.g. 192.168.1.0/24) and interface 2 is connected to the internet. When someone pretends to be on your local network, but the traffic is received on interface 2, then something is wrong. With ingress filtering we deny this type of network traffic, to prevent these so-called spoofing attacks.
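On Linux, part of this ingress filtering can be enforced by the kernel itself through reverse path filtering (requires root):

```shell
# Drop packets whose source address does not match the interface
# they arrived on ('strict' mode = 1)
sysctl -w net.ipv4.conf.all.rp_filter=1
sysctl -w net.ipv4.conf.default.rp_filter=1

# Log such 'martian' packets, so spoofing attempts show up in the logs
sysctl -w net.ipv4.conf.all.log_martians=1
```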
Egress filtering, which applies on outgoing traffic, requires a good understanding of the protocols used on your network. Most systems use the following services:
- DNS traffic for resolving names and IP addresses (port 53, UDP and TCP)
- Outgoing email (port 25, TCP)
- Time synchronization (port 123, UDP)
- HTTP and HTTPS for retrieving updates (port 80/443, TCP, sometimes UDP)
Filtering all outgoing traffic can be a good way to prevent malicious traffic, especially when filtering outgoing HTTP/HTTPS traffic. It may prevent an attacker from downloading their malware from some system on the internet.
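A sketch of such an egress policy with iptables (requires root; adjust the allowed ports to the services your systems actually use):

```shell
# Allow only the outgoing services listed above, then drop the rest
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -p udp --dport 53  -j ACCEPT   # DNS
iptables -A OUTPUT -p tcp --dport 53  -j ACCEPT   # DNS over TCP
iptables -A OUTPUT -p tcp --dport 25  -j ACCEPT   # outgoing email
iptables -A OUTPUT -p udp --dport 123 -j ACCEPT   # time synchronization
iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j ACCEPT  # updates
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -P OUTPUT DROP                           # default deny for the rest
```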
Use the localhost interface
Linux systems have a loopback interface named lo. Typically the hostname localhost will resolve to the 127.0.0.1 address linked to it. This interface is often used for network-based services that do not have to be publicly available. For example, a web application and a database engine may use a socket file or this localhost interface to set up a connection. For that reason, your firewall configuration typically has to allow all traffic on the 127.0.0.0/8 network. Did you know that you can also use an address like 127.1.2.3 as a local address?
With the ip command you can show the details of this interface. Command:
ip addr show lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
The output shows that this system has a ‘lo’ interface. The inet line shows an IPv4 address of 127.0.0.1. The address has the CIDR notation /8, meaning that all 127.*.*.* addresses are linked to it. The inet6 line shows the IPv6 address ::1. Both addresses have a scope of ‘host’, meaning they are only available to the system itself. The ‘lo’ at the end of the inet line is the address label, linking the address to the loopback interface.
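You can see this for yourself: any address in the 127.0.0.0/8 range answers locally, which is why the firewall rule in the comment below (requires root) allows the whole interface rather than a single address:

```shell
# Any 127.x.x.x address is local; this should reply instantly
ping -c 1 127.1.2.3

# Allow all traffic on the loopback interface (requires root):
#   iptables -A INPUT -i lo -j ACCEPT
```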
Tip: If your system is still using iptables, have a look at ipset. This extension allows iptables to use lists that can be used to block IP addresses and even full network segments.
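A small sketch of that combination (requires root; the addresses come from the reserved documentation ranges and are only examples):

```shell
# Create a set that can hold both single addresses and networks
ipset create blocklist hash:net

# Add a full network segment and a single address
ipset add blocklist 203.0.113.0/24
ipset add blocklist 198.51.100.25

# One iptables rule then matches the whole set
iptables -I INPUT -m set --match-set blocklist src -j DROP
```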
Email and messaging
While most Linux systems are not fulfilling the role of a mail server, it is common to see a Mail Transfer Agent (MTA) like Sendmail, Exim, or Postfix. Typical usage includes the delivery of system-related messages to a central mailbox or system administrator. Sendmail was the most popular choice in the early days of Linux. That has changed, with Postfix now in the lead. Improve Postfix security by applying measures like encryption, spam filtering, and blacklists.
Software configuration
Most software has to be configured before it can be used. While a default configuration may work, it is not always the best configuration when it comes to security. For example, the MongoDB database engine did not require authentication by default. The result was that even unauthorized and anonymous users could see all stored data. Not a good way of protecting your precious data.
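As an illustration for MongoDB, enabling authentication and binding to localhost looks roughly like this (a fragment of /etc/mongod.conf; check the documentation of your version for the exact options):

```yaml
# /etc/mongod.conf (fragment)
security:
  authorization: enabled   # require users to authenticate
net:
  bindIp: 127.0.0.1        # listen on localhost only
```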
Security best practices for applications
When you are using well-known applications like Apache, MySQL, and Postfix, then you can be fairly sure that there is detailed documentation. Some even include a specific section on security. This alone can be a valuable resource to learn about security principles and how they apply. So have a look at the documentation of any software component you are actively using, especially those listening on a network port. Hunt down the security section and make an action list of what things to secure.
- Read application documentation for security measures
- Restrict network services when possible
- Use localhost connections for non-public network services
- Disable default users
- Set up authentication
- Use strong passwords
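A good starting point for that action list is an overview of which services are actually listening on the network:

```shell
# Show all listening TCP/UDP sockets with their owning processes
# (run as root to see all process names)
ss -tulpn

# Spot services bound to all interfaces instead of localhost only
ss -tln | grep '0.0.0.0'
```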
Ongoing security measures
Keep the system up
Most systems have the goal to deliver value to business processes. One of the main pillars of information security is the availability of a system. A system that is down can be a risk to the business in multiple ways. So set up monitoring with a tool like Nagios, Prometheus, or Zabbix.
Create regular backups to ensure the availability of data. When a system goes down for whatever reason, you at least have the data to do a recovery. It goes without saying, but a backup is only as good as its restore. If you can’t restore it, then it is not a backup.
- Create regular backups (and test restores)
- Implement system monitoring
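The "test restores" part can be automated. A minimal sketch with tar, using temporary directories as stand-ins for real data and backup storage:

```shell
#!/bin/sh
# Stand-ins for real data (e.g. /etc) and for backup storage
SRC=$(mktemp -d)
DEST=$(mktemp -d)
RESTORE=$(mktemp -d)
echo "important data" > "$SRC/config.txt"

# Create the backup archive
tar -czf "$DEST/backup.tar.gz" -C "$SRC" .

# A backup is only as good as its restore: extract it elsewhere
# and compare against the original data
tar -xzf "$DEST/backup.tar.gz" -C "$RESTORE"
diff -r "$SRC" "$RESTORE" && echo "restore verified"
```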
Apply thy software patches
New software updates are released on a daily basis. They add new features, resolve bugs and security issues. Most package managers on Linux can show the available updates. Some can even show which updates are security-related.
When possible use automatic updates, especially for those packages that are related to security issues. Don’t forget to reboot when the kernel is updated. Otherwise, the system remains vulnerable. If you have a critical machine that needs to keep running, consider using live kernel patching with livepatch.
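On Debian or Ubuntu, checking and automating this could look as follows (adapt to your own package manager):

```shell
# Show pending package updates
apt list --upgradable

# Enable unattended security updates (requires root):
#   apt install unattended-upgrades
#   dpkg-reconfigure -plow unattended-upgrades

# After a kernel update, check whether a reboot is required
[ -f /var/run/reboot-required ] && echo "reboot needed" || true
```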
When using a stable version (e.g. Ubuntu LTS), upgrade to the next version before its official support has ended. Don’t wait until the last moment, but plan ahead and perform those upgrades.
Perform automated audits
Almost every system administrator is overwhelmed with the amount of work and activities. While this puts them under some stress, it also increases the risk that “less important” things like installing patches are forgotten. Cleaning up a system after an intrusion, or having to reinstall it, is usually a waste of time. Switch from being reactive to a more proactive approach: implement continuous audits, automate controls, and use best practices. To secure a Linux system and keep it secure, focus on the right combination of hardening and auditing. This magic combination will be a powerful tool against evildoers.
- Perform system health scans (auditing, vulnerability scanning, performance checking)
- Implement manual checks (focus on one item each time)
With so many things to do in a day, it is easy to forget about security. Fortunately, there are a lot of open source tools available that can assist. Let’s say you have a website and use an SSL certificate. There is a security tool available for most parts of the system and its software. So be creative and find a tool for every aspect that you can think of. Don’t know where to start? This top 100 of security tools might give you some inspiration.
Do you have any other resources that are helpful to other readers? Please add a comment below.