Linux System Integrity Explained: Ensure Data, Logging and Kernel Integrity
From Data and Logging, up to Kernel Integrity
Systems exist for one primary goal: processing data. Information security helps protect this valuable data by ensuring its availability, integrity, and confidentiality. In other words, data should be available when we need it, it should be transmitted and stored without errors, and it should only be accessible to those with a need to know. Many open source software components are available to help with these goals. We will review a few of them and see how they fit in your security defenses.
Know what to protect
You can’t protect something properly if you don’t know what it is. The same applies to data, and especially to the type and underlying value of that data.
Data (or information) is valuable to you, or to a business. However, what is valuable to you might be useless to someone else, while something valuable to you might be even more valuable to others. Think of your customer database, your financials, and your strategies.
So before applying technical measures, get clear what it is that you are protecting.
A good way to determine what you are protecting is to first determine what you have.
Common data types are health information, credit card details, personal information, contact details, trade secrets, public data, etc.
- What kind of data have you stored on the systems?
- Is there any sensitive data involved?
The next step is to determine where this data is stored. This helps later with selecting the right measures, depending on the data type.
- What systems (don’t share a name, just the type like: webserver) would have sensitive data?
- What systems have the most valuable data stored?
Security is risk management
After answering the questions above, the result could be a spreadsheet with multiple categories of data types and systems. One way to sort the sheet is by system, which shows what types of data are stored on each one. This helps with performing a risk assessment and focusing first on the systems that handle the most sensitive data.
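This prioritization step can be sketched in a few lines of code. The systems, data types, and sensitivity scores below are made-up examples, not a recommended classification scheme:

```python
# Hypothetical data inventory: each entry records a system type,
# the data it holds, and a made-up sensitivity score (higher = more sensitive).
inventory = [
    {"system": "webserver", "data": "contact details", "sensitivity": 2},
    {"system": "database", "data": "credit card details", "sensitivity": 5},
    {"system": "fileserver", "data": "public data", "sensitivity": 1},
]

# Sort with the highest sensitivity first: these systems deserve
# a risk assessment before the others.
for entry in sorted(inventory, key=lambda e: e["sensitivity"], reverse=True):
    print(entry["system"], "->", entry["data"])
```

However you keep the inventory, the point is the same: rank systems by the sensitivity of the data they handle, and work down the list.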
Time to check what measures we have and what kind of data it protects.
If you process sensitive data, the right combination of hardware and software should be used. For example, memory and disks nowadays have more reliable measures to ensure that all bits of data are correct: any incorrect bit will be detected and reported. On top of that, multiple levels of integrity mechanisms are available within software.
Database software can use atomic transactions, meaning that a set of changes is committed to memory or disk only if all of them succeed. This prevents a situation where one piece of data is changed while a related piece of information has not been stored (yet).
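A minimal sketch of this all-or-nothing behavior, using SQLite from the Python standard library (the account names and the simulated failure are illustrative):

```python
import sqlite3

# Two related changes: money leaves one account and arrives in another.
# Either both happen, or neither does.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 40 WHERE name = 'alice'")
        raise RuntimeError("simulated failure between the two updates")
        conn.execute("UPDATE accounts SET balance = balance + 40 WHERE name = 'bob'")
except RuntimeError:
    pass

# The first update was rolled back: alice still has her full balance,
# so the data never ends up in a half-changed state.
balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'"
).fetchone()[0]
print(balance)  # 100
```

The same principle applies to PostgreSQL, MySQL with InnoDB, and other transactional engines: an interrupted transaction leaves no partial changes behind.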
When we look at the disk itself, the hardware has features to properly balance performance and security. Depending on how sensitive the data is, make sure systems, and disks in particular, have the time to perform a clean shutdown. This is especially valuable when there is a power outage.
File system integrity
The last category worth mentioning is tooling that monitors file changes. This helps with general system integrity by alerting when files have been changed. It is a great measure to detect intruders, but also to ensure that changes to configuration files are properly documented. Unauthorized changes should be detected, so proper response actions can be taken.
In the same category there is the file system itself. Newer file systems like ext4, Btrfs, and ZFS have options available to guarantee the integrity of the data. Damaged blocks are detected early and disabled, to prevent malformed storage. To some extent, they can even correct issues caused by the underlying disks.
What measures to select?
These measures are all worth using, and most are available and already in use on typical systems. At the very least you want the right disks and RAID level, and on top of that the right memory modules, depending on the goal of the system. When highly sensitive data transactions occur, you might want to invest in memory modules with error correction (ECC).
The next level of measures is the storage itself. If data is important to you, it should be stored on the right type of storage. Select a stable storage solution with the proper RAID level. It might be local disks, or network-based storage. In any case, the value of the data should guide what storage level is adequate.
On the file system level, pick the one that has the right performance, yet also protects data. Consult the documentation of the file system you are using and learn how to tune it. If data needs more protection, check that the specific file system options are used, like disk journaling.
- Check where data is stored
- Use the right hardware, optimized to the data processed and stored
- Apply settings of your file system
- Tune databases
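Whether a file system is actually mounted with a desired option, such as data journaling, can be checked by parsing /proc/mounts. A small sketch, where the sample line is illustrative rather than taken from a real system:

```python
def has_mount_option(mount_line: str, option: str) -> bool:
    """Check whether a /proc/mounts entry carries a given mount option.

    A /proc/mounts line has six fields:
    device mountpoint fstype options dump pass
    """
    fields = mount_line.split()
    options = fields[3].split(",")
    return option in options

# Illustrative line, as it could appear in /proc/mounts
line = "/dev/sda2 /var/log ext4 rw,relatime,data=journal 0 0"
print(has_mount_option(line, "data=journal"))  # True
```

On a live system, the same function can be applied to each line of the real /proc/mounts to audit all mounts at once.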
Kernel integrity
The Linux kernel is the core of the operating system. It includes device drivers, system functions, memory management, and many more low-level functions. To properly protect this core, we can take several measures.
Protect the file system
To protect the integrity of the kernel, we should protect the areas where it is stored. A few important boot files are usually stored in /boot.
- Mount /boot read-only
- Monitor for write activities on /boot, /lib and user libraries
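Whether a path such as /boot is currently mounted read-only can be verified with a statvfs call. A minimal sketch; the path used below is an example and the result depends on your mount configuration:

```python
import os

def is_mounted_read_only(path: str) -> bool:
    """Return True when the file system holding `path` is mounted read-only."""
    return bool(os.statvfs(path).f_flag & os.ST_RDONLY)

# On a hardened system this should report True for /boot.
print(is_mounted_read_only("/boot"))
```

Such a check fits well in a periodic audit script, since a read-only /boot is easily remounted read-write during an update and then forgotten.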
Besides the kernel itself, programs need to be protected as well. Usually these are stored as binaries on disk. Depending on the permissions, users and processes can run a binary, which then performs a certain function.
To ensure only allowed binaries can be executed, the IMA/EVM subsystem allows for denying the execution of unsigned binaries. This way malware and unauthorized binaries can no longer be executed. As this is an extensive subject, more posts about it will follow!
Another more common method is using file integrity tools. By monitoring changes to these files, we can quickly determine unauthorized changes. Another interesting thing to monitor is new binaries. They might indicate a normal installation, or the addition of a malicious file.
- Use IMA/EVM for highly sensitive systems
- Implement file integrity monitoring
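The baseline-and-compare approach behind file integrity monitoring can be sketched with SHA-256 hashes. Real tools such as AIDE also track permissions, ownership, and timestamps; this is only the core idea:

```python
import hashlib
import os

def hash_file(path: str) -> str:
    """Return the SHA-256 hash of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(directory: str) -> dict:
    """Hash every file under a directory, as a baseline for later comparison."""
    baseline = {}
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            baseline[path] = hash_file(path)
    return baseline

def detect_changes(directory: str, baseline: dict) -> dict:
    """Compare the current state against the baseline.

    Reports changed files (possible tampering), new files (possible
    malicious additions), and removed files.
    """
    current = build_baseline(directory)
    return {
        "changed": [p for p in baseline if p in current and current[p] != baseline[p]],
        "new": [p for p in current if p not in baseline],
        "removed": [p for p in baseline if p not in current],
    }
```

Store the baseline itself off-system (or at least read-only), otherwise an intruder who alters a binary can simply update the baseline to match.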
Logging
Logging is the process of storing events in a structured way for later access. It is used for debugging, monitoring, accounting, and forensic research. Too often these log files are taken for granted and not protected. Yet it is these same files that help you discover what happened when something bad occurred. Ensuring the integrity of these files is therefore valuable for future events.
To protect log files, ensure that the file permissions and ownership are correct. Only the related daemon should be able to write to the file, to avoid unauthorized alterations. A second level of ensuring integrity is to make files append-only and, when possible, store them remotely. This provides another barrier when a break-in on a system occurs.
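A quick sanity check of these permission and ownership rules can be scripted. The function below is a minimal sketch; the expected owner defaults to root (UID 0), which is an assumption that should match whichever daemon writes the log:

```python
import os
import stat

def log_file_issues(path: str, expected_uid: int = 0) -> list:
    """Report basic metadata issues for a log file:
    wrong owner, or write permission for group/others."""
    issues = []
    st = os.stat(path)
    if st.st_uid != expected_uid:
        issues.append("unexpected owner")
    if st.st_mode & (stat.S_IWGRP | stat.S_IWOTH):
        issues.append("writable by group or others")
    return issues
```

For the append-only part, ext-family file systems support `chattr +a` on a log file, so even the owning daemon can only add lines rather than rewrite history; remote logging (for example via syslog forwarding) then adds the off-system copy.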