Forget Linux Vulnerability Scanning: Get Better Defenses

Every month or so, I get a few questions about the vulnerability scanning capabilities Lynis has to offer. It made me think about this subject, and I realized something: many security professionals are still focusing too much on vulnerabilities. They want to know their security gaps, so they know where they stand. While this isn't a bad approach, there might be a better solution.

The approach I will discuss today is to focus on permanent processes instead of vulnerability scanning: processes like software patch management, regular audits, and security monitoring. The goal is to reduce weaknesses more quickly and more often. So forget about vulnerability scanning and let's proceed to the next level of security!

Vulnerability Scanning and Linux Systems

When performing vulnerability scanning on Linux, it is common to encounter tools like Nessus and OpenVAS. These tools scan the network and optionally use credentials to log on to systems and collect more information. The result is fairly binary: either vulnerabilities were discovered, or they were not. We all know that finding no vulnerabilities at all is a rare exception.
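
To make this concrete, here is a minimal sketch of both scan types. It does not use Nessus or OpenVAS themselves; instead it uses nmap's vulnerability scripts as a lightweight stand-in for the network scan, plus a package query as the "credentialed" part. The target address is an example, and a Debian-based system is assumed:

    # Unauthenticated network scan, using nmap's vulnerability scripts
    # (a lightweight stand-in for full scanners like Nessus or OpenVAS)
    nmap -sV --script vuln 192.168.1.10

    # The "credentialed" equivalent, run on the system itself:
    # list pending updates, filtering for security-related ones
    apt list --upgradable 2>/dev/null | grep -i security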

What Makes a System Weak?

Vulnerabilities on a typical Linux system fall into two categories: an outdated software package or a weak configuration. Buffer overflows are a common example of attacks that abuse vulnerabilities in the first category. The second category has two root causes: lack of knowledge, or weakening on purpose. Let me clarify the latter, as it is a root cause of many issues seen in past break-ins.

Too often the configuration of a software package is weakened by the system administrator (or developer!) making adjustments. Some setting may prevent you from getting your new application to work, so you turn it off. At the same time, this option was there to prevent specific attacks, with the result that your system is now in a weakened state. Other examples on Linux include changing file permissions (chmod 777), turning off protection mechanisms (iptables, SELinux, AppArmor, etc.), or simply forgetting to remove default example files.
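
A few quick checks can reveal exactly these self-inflicted weaknesses. This is a minimal sketch; the exact commands depend on your distribution and which protection mechanisms are installed:

    # Find world-writable files, often the result of a careless chmod 777
    find / -xdev -type f -perm -0002 -ls 2>/dev/null

    # Check whether mandatory access control is still enabled
    getenforce                # SELinux: Enforcing, Permissive, or Disabled
    aa-status --enabled && echo "AppArmor is active"   # AppArmor systems

    # Verify that the firewall still has rules loaded
    iptables -L -n | head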

Then there is the knowledge gap. Many system administrators still don't know how to configure or use the Linux audit framework. That is a shame, as it provides a very powerful way to do security monitoring. Vulnerability scanning is great; Linux security monitoring is even better.
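
For those unfamiliar with the audit framework, it takes only a few commands to get started. A minimal sketch, where the watched file and the key name are just examples:

    # Watch changes to a critical file (w = writes, a = attribute changes)
    auditctl -w /etc/passwd -p wa -k passwd_changes

    # Later: search the audit log for events matching our key
    ausearch -k passwd_changes --start today

    # Get a high-level summary of all recorded audit events
    aureport --summary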

Vulnerability Scanning is Negative

Vulnerability scans provide a lot of findings. While this might look innocent at first sight, it also means that we always look at the negative side of security: vulnerabilities.

Vulnerabilities are by definition bad and a fact of life for system administrators. Unfortunately, you can never win the game, as new vulnerabilities are found continuously. Keeping up with all the details and solving them takes a lot of time. This is exactly what most of us don't have, as there are internal projects to finish and new systems to be deployed.

Wouldn't it be better if we took a more positive approach when it comes to vulnerabilities? For example, we could look at the things we actually can do to increase our security defenses. These defenses could be installing a firewall, applying software patches, and performing system hardening. In other words, we try to achieve continuous improvement. We make this part of our daily routine, until what remains is considered an acceptable risk. It will be a long battle, but it is definitely possible to achieve a state of control.
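
Each of those defenses can start small. Here is a sketch of what a first iteration might look like, assuming a Debian-based system where ufw is available (adjust package manager and firewall tooling for other distributions):

    # Apply pending patches (use yum/dnf on Red Hat-based systems)
    apt-get update && apt-get upgrade -y

    # Turn patching into a process: enable unattended security updates
    apt-get install -y unattended-upgrades
    dpkg-reconfigure -plow unattended-upgrades

    # One small hardening step: a default-deny firewall
    ufw default deny incoming
    ufw allow ssh
    ufw enable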

Vulnerability management applied incorrectly

Too often we see vulnerability scanning being performed on environments to show the need for more security. The same goes for penetration testing of Linux systems. Both are actually steps which should follow after the basics are properly in place. These basics consist of the following processes (a quick way to check several of them at once is shown after the list):

  • System hardening
  • Software patching
  • Configuration file management
  • Security monitoring
  • Security audits
  • Compliance testing
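
Several of these basics can be checked in one go with an audit tool. Since Lynis was mentioned in the introduction, here is what that looks like; the report location is the default when running as root:

    # Run a full system audit, covering hardening, patching, and more
    lynis audit system

    # Review the warnings and suggestions from the last run
    grep -E '^(warning|suggestion)' /var/log/lynis-report.dat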

Unfortunately, most companies don’t have these basic processes under control, yet use vulnerability scans and pentests to determine how well they are doing. The result is always bad…

Continuous Improvement

The Japanese are considered to be among the most skilled people in the world when it comes to quality. A great example is the story of Jiro Dreams of Sushi. In this story, Japanese sushi chef Jiro keeps improving his sushi servings. He isn't a college graduate, but an 85-year-old man who never stops learning. He believes that you should continuously strive to improve further. Perfection is something he doesn't take for granted.

To know how to improve, you should know quality. We often take this word for granted, but what does quality actually mean? A core principle of quality is how well you can repeat something. If you create toothbrushes, you want them all to be like the original sample you created: they should have the same strength, the same number of bristles, and the same color. If you can create an almost perfect copy of the original, quality is good.

This same practice of quality improvement is something we should apply more often in our field of expertise. Instead of focusing on the bad, we should look at the things that can be improved. We should no longer accept things as they are, but make informed decisions on what we can do to make small improvements and increase “IT quality”.

We can compare IT environments with a factory: a lot of machinery, processes, and humans involved. Instead of creating a physical product, we want to deliver some form of output (e.g. keeping the SaaS environment available for customers). We should learn from downtime, so we can decrease the chance and the impact of every negative event that occurs. At the same time, we proactively search for improvements. This will result in quicker and more stable machinery, processes which are easier to understand, and people who know their roles and duties.

If we compare vulnerability scanning on Linux with our factory example, we see that on its own it might be a pointless exercise. It is like knowing that one machine is leaking oil, yet taking the leak for granted. It is the act of fixing it that brings improvement: in this case, replacing the leaking bolt. So while vulnerability scanning is not a bad thing in itself, the focus should be on routine checks. These checks can be done with regular audits. Software patch management is like regular maintenance. This way we can prevent a machine from leaking oil in the first place. When it still happens, we can quickly discover it and keep the impact to a minimum.
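
In Linux terms, "regular maintenance" translates nicely into a scheduled audit. A minimal sketch, assuming Lynis is installed; the script name and log path are just examples:

    #!/bin/sh
    # /etc/cron.weekly/audit-system
    # Run Lynis in cron mode (no colors, no user interaction)
    # and keep a dated log for later comparison
    lynis audit system --cronjob > /var/log/lynis-weekly-$(date +%Y%m%d).log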

Automation on Linux

Linux systems have a lot of great tools on board to test security. These tools help with the continuous improvement process. Automation in particular is a great way to increase quality.

With automation tools like Ansible, Chef, and Puppet, it is easier than ever to deploy small improvements. There is no need to download a hardening guide and apply it all at once. Security should be part of your process, so every bit of hardening should be tested first, then deployed. Just pushing out a “full” policy will sooner or later backfire.
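
The same "test first, then deploy" idea works even without a configuration management tool. A sketch for a single kernel setting; the setting and the file name are illustrative:

    # Test a single hardening setting at runtime first (not persistent,
    # so a reboot reverts it if something breaks)
    sysctl -w net.ipv4.conf.all.rp_filter=1

    # ...observe the system and applications for a while...

    # Only then make it persistent, as one small, reviewable change
    echo 'net.ipv4.conf.all.rp_filter = 1' > /etc/sysctl.d/60-hardening.conf
    sysctl --system    # reload all sysctl configuration files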

Avoiding backfire

When pushing too many changes into production environments, we can actually harm the business. Still, there are system administrators who rely too much on existing benchmarks and hardening guides, without proper testing. It might look innocent to change some kernel settings or software configurations. That is, until vague issues show up in the weeks after. Then it usually doesn't take long before “security” gets blamed and the previous changes are rolled back. Good work backfires and results in even more work and a weaker position for security defenses. We can avoid this by taking the proper measures.

Splitting work into small steps

Instead of pushing big policies at once, the first thing to do is split up the work. Work can be divided by category (e.g. all kernel settings), by area (e.g. networking), or by priority and impact. The best strategy depends on your current stage and, more interestingly, on your level of monitoring. To avoid backfire, we should be able to implement changes and know their impact. For example, increasing buffers to counter a denial-of-service attack might increase memory usage. If the system has enough spare memory, this isn't an issue. For a system already struggling with its memory usage, this small adjustment might actually cause the denial of service!
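
This is exactly where monitoring the impact pays off. A sketch of the buffer example above; the backlog value is illustrative, not a recommendation:

    # Record the baseline before changing anything
    free -m
    sysctl net.ipv4.tcp_max_syn_backlog

    # Increase the SYN backlog to better absorb a SYN flood
    sysctl -w net.ipv4.tcp_max_syn_backlog=4096

    # Watch memory usage afterwards, to confirm the system can afford it
    watch -n 60 free -m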

Alternatives for Linux vulnerability scanning

Besides the focus on system hardening, we can focus on auditing and compliance. In this case, we measure the number of systems that comply with the defined baseline. When using security metrics, focus again on the positive. Sharing failure rates with your management will not positively impact your efforts. Instead, set a minimum baseline and a threshold to comply with. An example could be the number of systems which are part of the software patch management solution, with a minimum level of 95%. When 98% of your systems are subscribed to the central server, you know there is still some room for improvement, but you can also show a positive trend.
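
Calculating such a coverage metric is trivial once you track which systems are enrolled. A sketch using two hypothetical inventory files, one host name per line:

    # Hypothetical inventories: all systems vs. systems subscribed
    # to the patch management server
    TOTAL=$(wc -l < all-systems.txt)
    PATCHED=$(wc -l < patched-systems.txt)

    # Integer percentage, compared against the 95% baseline
    PERCENT=$((PATCHED * 100 / TOTAL))
    echo "Patch coverage: ${PERCENT}%"
    [ "$PERCENT" -ge 95 ] && echo "Compliant" || echo "Below baseline"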

Linux security compliance

When using baselines and hardening guides for your Linux environment, define what the minimum level should be. This could be a number or a percentage. It helps in defining when systems are compliant or non-compliant. Even better is to link this to your internal security policies and align your security metrics.

A few notes on security metrics: make sure they can be measured properly, and ensure that the meaning behind a number or percentage is clear to your organization, to prevent misinterpretation. Like goals, metrics should be achievable and realistic.

Example security metrics

  • Percentage of Linux systems being patched
  • Percentage of systems audited last week
  • Number of configuration files being managed by configuration management tools
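
The second metric is easy to measure per host. A sketch, assuming Lynis is the audit tool and its default report location is used:

    # When was the last Lynis audit on this host?
    date -r /var/log/lynis-report.dat

    # Flag the host if the report is older than 7 days
    find /var/log/lynis-report.dat -mtime +7 | grep -q . \
      && echo "NOT audited in the last week" \
      || echo "Audited within the last week"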

Conclusion

Vulnerability management is actually a great tool to know your weaknesses. However, it acts from a negative standpoint, making it harder to sell the required action. The focus should be more positive, in the form of well-defined processes (patching, hardening, auditing). Measuring and monitoring are key to knowing where we stand and what next step to take. Again, Linux vulnerability management points out the pain on your systems, but it shows the data when it is already too late. Proactive improvements and regular maintenance are a better way to keep your Linux systems secure. Vulnerability management and penetration testing should only serve as a last level of validation.

So with these insights, forget about vulnerability scanning and focus first on the positive things that really matter.

