Linux Security Principle: Containment of Failure

Containment of Failure

Everyone who used Windows 95 or 98 is familiar with the concept of failure. One crashing application was enough to bring the whole system to a halt. Fortunately, Linux systems have a strong foundation, including privilege separation and proper memory management. When things go wrong, the impact is reduced to a minimum. This is called containment.

Linux Memory Management

Memory is like the storage capacity of your brain. Every bit should be stored properly, or strange things will happen. Linux systems have powerful memory management to ensure that data is properly separated and the right permissions are assigned. For example, an ELF binary, the most common binary format on Linux, has different sections for executable code and data. On top of that, each section gets different permissions in memory. Code, for example, can be marked as read-only, to prevent it from being overwritten by the process itself or by another process.
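To see this separation in action, you can inspect the memory map of a running process via procfs. Below is a minimal sketch in Python, assuming a Linux system with /proc mounted, which prints the permissions of each memory region of its own process:

    #!/usr/bin/env python3
    # Minimal sketch: show how Linux assigns different permissions
    # (r = read, w = write, x = execute) to memory regions of a process.
    # Assumes a Linux system with /proc mounted.

    with open("/proc/self/maps") as maps:
        for line in maps:
            fields = line.split()
            address, perms = fields[0], fields[1]
            # The mapped file (if any) is the last field
            name = fields[-1] if len(fields) > 5 else "[anonymous]"
            print(f"{address:32} {perms} {name}")

Code segments of the binary and its shared libraries show up as r-xp (readable and executable, but not writable), while data segments are mapped rw-p.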

As you can imagine, memory management is an important subsystem of the Linux kernel. A single implementation mistake is the difference between a stable system and one that crashes for no apparent reason.

Privilege Separation

One of the primary reasons that Linux systems are stable is the clear separation of privileges. We have already seen it in action in Linux memory management, where different structures are separated. This separation goes much further on other levels of the system, including what kind of functions an executable is allowed to perform (e.g. via Linux capabilities).
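A classic application of privilege separation is a daemon that uses root only for a single privileged operation, such as binding a low port, and then drops its privileges. A minimal sketch in Python; the unprivileged account "nobody" is an assumption, pick one that exists on your system:

    #!/usr/bin/env python3
    # Minimal sketch of privilege separation: use root only for the
    # privileged operation (binding port 80), then drop to an
    # unprivileged user. The account "nobody" is an assumption.
    import os
    import pwd
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", 80))  # needs root, or CAP_NET_BIND_SERVICE
    sock.listen(5)

    # Drop privileges: set the group first, then the user. The order
    # matters, as changing the user first removes the right to change
    # the group.
    user = pwd.getpwnam("nobody")
    os.setgid(user.pw_gid)
    os.setuid(user.pw_uid)

    print(f"Listening as uid={os.getuid()}, gid={os.getgid()}")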

Build for Impact Reduction

When building systems, we can learn a valuable lesson from the containment features of Linux. Every system should be built in such a way that when the inevitable crash occurs, the impact on the full environment is limited. This containment of failure can be achieved by a clear separation of functions. If one function goes down, it should only have an impact on that function. Where possible, indirect damage should be limited or avoided.

Never Fail A Little Bit

Systems will fail. Linux systems, while stable by design, can fail as well. The worst outcome is a system that only half provides its services. It is not down, but not really up either. When you design your web server cluster, ensure that the load is properly shared among the nodes. Complete it with the right amount of monitoring, so it never gets stuck in “half” operation. This can happen when a node is overloaded, yet the load balancer thinks it still has enough resources left. It is better to fail completely than just a little bit.
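One way to avoid this “half” state is a health check endpoint that reports the node as fully down the moment it can no longer do useful work, so the load balancer removes it entirely. A minimal sketch in Python, using only the standard library; the load average threshold of 8.0 is an arbitrary assumption to tune per system:

    #!/usr/bin/env python3
    # Minimal health check sketch: report the node as completely down
    # (HTTP 503) when it is overloaded, so a load balancer can remove
    # it. The threshold of 8.0 is an arbitrary assumption.
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    MAX_LOAD = 8.0

    class HealthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            load1, _, _ = os.getloadavg()
            if self.path == "/health" and load1 < MAX_LOAD:
                self.send_response(200)
                body = b"OK"
            else:
                self.send_response(503)  # fail completely, not a little bit
                body = b"UNAVAILABLE"
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()

Point the load balancer’s health check at /health; any 503 response takes the node completely out of rotation instead of letting it limp along.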

Conclusion

Everyone wants a stable system. Stability is the sum of many factors combined, like privilege separation, proper memory management, and containment. To achieve a stable operating system, and a stable environment as a whole, these factors all need to be in balance and correctly implemented. In upcoming blog posts, we will have a look at the more technical aspects.

 



Missing Packages: Don’t Trust External Repositories!

Missing packages…

If you are in the business of system administration, you know the big dilemma when it comes to installing software: missing packages. Yes, a lot of packages are available in the repositories of your Linux distribution, but not the one you need. Or when it is available, it is horribly outdated. So you reach out to external resources, like community-maintained repositories, right?

With Lynis, we face this same issue. While most distributions have Lynis in their repository, it is often outdated. We could do the packaging ourselves, and most likely will in the future. For now, however, that task takes too much time with the regular updates we provide. Packaging, testing, and checking is a delicate process, often better done by people who know that specific Linux distribution inside out.

Many software projects face the same issue, and other people step up to provide community-maintained repositories. In this article, we have a look at the benefits, but also at the serious risks involved.

The Trust Issue

One of the big problems with external resources is that you have to trust people. People you possibly don’t even know. Every single person is a new line of trust, adding slightly more risk. That is totally fine, until you have too many “trust relations” going on. It is hard to verify that everyone in the chain remains trustworthy. Or even worse, someone in the chain goes bad and malicious activity occurs on purpose. In that case it might be an altered package, or a hacked software repository.

So in any case, you want to trust as little as possible. For those areas where you do have to trust, you want assurance that the people (or companies) involved are doing everything they can to minimize risks and maximize protection.

Why Not Use External Repositories

Depending on your environment, packages maintained by a third party might introduce a new level of risk. For example, when your environment is built entirely with RHEL systems, chances are that sooner or later you will need an external component. By adding the repository, you might lose support or even face unstable software. There is also a serious risk of inadequate support for keeping up with security bulletins. Volunteer-run repositories often don’t have the resources that the Linux distributions themselves have. The only exception for using an external repository might be officially vendor-supported software. An example is Docker: they have their own build process and release schedule, so they don’t want to rely on all Linux distributions to keep up.

Great, What Then?

The best option is to build some software yourself, especially if you have the intention to roll it out in your whole Linux environment. This gives you the opportunity to decide which versions to use and to patch quickly when needed. By compiling and packaging the software yourself, you will also feel more responsible for introducing new software components. After all, a healthy barrier is added, which keeps you from just installing more and more external software components.

The Way Back

So you might think: “great, but I already have those external packages in my environment”. In that case, not all hope is lost. Even if you use repositories like DAG and RepoForge, things can be improved step by step:

  • Make an inventory of all used repositories
  • Craft a list of “alien” packages
  • Determine exceptions
  • Remove unneeded packages
  • Build replacement packages
  • Limit access to repositories

So let’s go into more detail on how to achieve these steps. The first action is getting an inventory of all repositories used on the systems. Make a split between the native built-in repositories and those hosted externally.
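On Debian and Ubuntu systems, such an inventory can start with a small script that collects all configured APT sources. A minimal sketch in Python; the paths are the standard APT locations, and YUM-based systems keep theirs in /etc/yum.repos.d instead:

    #!/usr/bin/env python3
    # Minimal sketch: inventory all configured APT repositories on a
    # Debian/Ubuntu system by reading the standard sources files.
    import glob

    sources = ["/etc/apt/sources.list"] + \
              sorted(glob.glob("/etc/apt/sources.list.d/*.list"))

    for path in sources:
        try:
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if line.startswith("deb"):  # deb and deb-src entries
                        print(f"{path}: {line}")
        except FileNotFoundError:
            pass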

The next step is to search for all packages and determine to which repository they belong. Were they part of a built-in repository, or one of the external ones?

[Screenshot: package details for Debian and Ubuntu packages. Use the package details to discover who maintains a package.]

In this screenshot we can already use the Maintainer field to filter out the packages maintained by the Ubuntu Developers. All packages which don’t have this maintainer could be interesting for further analysis.
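On Debian and Ubuntu, this analysis can be automated with dpkg-query. A minimal sketch in Python that lists installed packages whose Maintainer field does not mention the distribution itself; matching on the substrings “ubuntu” and “debian” is a simplifying assumption, not a hard rule:

    #!/usr/bin/env python3
    # Minimal sketch: list installed packages whose Maintainer field
    # does not mention the distribution itself. Matching on the
    # substrings "ubuntu"/"debian" is a simplifying assumption.
    import subprocess

    output = subprocess.check_output(
        ["dpkg-query", "-W", "-f=${Package}\t${Maintainer}\n"],
        text=True,
    )

    for line in output.splitlines():
        package, _, maintainer = line.partition("\t")
        if "ubuntu" not in maintainer.lower() and \
           "debian" not in maintainer.lower():
            print(f"{package}: {maintainer}")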

Alien Packages

When you have the list of alien packages, it is time to determine which ones are really needed and which ones are optional. Everything unneeded should be uninstalled. The remaining packages go on the self-packaging list. Depending on your needs, this list might actually be shorter than initially thought. It is common to find the same packages installed on many systems.

Packaging

The next step is building the packages yourself. The first time, it might be a daunting task. The good news is that external repositories often provide the source build files. This way you can reproduce what they have done and do it yourself. Use them to build the packages and start testing deployment.

Exterminate

Then finally, when everything is done, ensure that external repositories become a thing of the past. Monitor your YUM/APT configuration files and block the addition of any new repositories. You might even want to filter them out in your proxy or firewall.
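Monitoring those files can start with a simple baseline comparison. A minimal sketch in Python that warns when new repository files appear; the baseline location is hypothetical, and in practice this check belongs in your configuration management or file integrity monitoring tool:

    #!/usr/bin/env python3
    # Minimal sketch: warn when new repository definition files appear,
    # by comparing against a stored baseline. The baseline path is a
    # hypothetical location.
    import glob
    import json
    import sys

    BASELINE = "/var/lib/repo-baseline.json"
    current = sorted(glob.glob("/etc/apt/sources.list.d/*") +
                     glob.glob("/etc/yum.repos.d/*"))

    try:
        with open(BASELINE) as f:
            baseline = json.load(f)
    except FileNotFoundError:
        with open(BASELINE, "w") as f:
            json.dump(current, f)
        sys.exit(0)  # first run: baseline created

    for path in current:
        if path not in baseline:
            print(f"WARNING: new repository file: {path}")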

Patch!

Last but not least, keep your own packages up-to-date. Network services especially might be extra vulnerable to attacks from the outside. Also implement a software patching plan, in case you don’t have one yet. Security patches are released on a daily basis, so keep all packages up-to-date.

Conclusion

External packages are often used to overcome the “missing packages” issue. Your favorite repositories might not be hosting them, so external ones are used instead. While this might sound like an easy way out, it introduces the risk of unstable, vulnerable, or even malicious software.

The best option is to limit yourself to the official repositories of your Linux distribution and well-supported external vendors. They have both the capacity and the knowledge to supply packages, as their name is on the line. Don’t put too much trust in external resources and avoid them as much as possible.

And as usual, it is easy to introduce something, but getting rid of it might be close to impossible. We all know those “it is just temporary” installations. Simply do not sacrifice the integrity of your software for convenience. Keep your IT environment healthy, instead of building it on all kinds of loose ends and external dependencies.

Stay safe!

 


Security Defenses to Fortify your Linux Systems

How to Fortify your Linux Systems

Create a Linux security fortress by implementing security defenses using towers, bridges, and guards.

Many companies still have difficulties implementing basic security measures. Even after years of websites being defaced and customer records being stolen, the same mistakes are made over and over again. While this might sound like an unsolvable situation, information security is getting attention from more and more people. If you are responsible for the management of Linux systems, ignoring security is no longer an option.

The issue with security is that you can measure insecurity, yet not properly measure the level of security. This leads to a situation in which companies simply don’t know what to do, or when it is enough. Still, by applying a few basic principles, we can fortify our systems and make our defenses more resistant against common attacks.

[Image: Great Wall of China]

Risk Management

Security boils down to understanding risk. From the management level down to the system administrator, everyone is in control of some aspects of risk. We might choose to accept risks (do nothing), reduce them (implement measures), or transfer them to others (e.g. insurance). Finally, we can decide to avoid a risk, and not pursue some action at all. These principles of risk management also apply to our Linux systems. It requires an understanding of risks and threats to select the right measures and enhance our existing defenses.

In the world of IT, ignoring common threats like malware and the exploitation of software weaknesses is no longer an option. Knowing risks and threats is what makes us well informed, resulting in better decisions and spending our precious time more wisely.

Linux and Security Risks

Like any operating system, Linux has threats which might badly impact the confidentiality, integrity, and availability of our data. The chance of finding a trojan horse on the system is lower than on a Windows system, but the risk is still there. To counter threats to our precious Linux systems, we can compare them very well with a fortress. Like any good fortress, it needs to be designed, built, and maintained properly. So let’s move on and treat building our Linux systems as building a fortress.

Building the Fortress

To build a fortress, you need strong towers. They act as a defensive measure and increase the strength of the overall structure. On top of that, they help with monitoring the environment. Consider the towers your primary goals, and the walls the normal ongoing business (deploying systems, monitoring, adding/removing users, etc.).

A fortress does not consist of only walls and towers. There are guards to monitor, and bridges to make things possible (e.g. crossing over).

Tower 1: System Hardening

The first tower is strongly related to system deployment. When installing Linux systems, go for system hardening from day 1. This can be achieved by doing a “minimal installation” only, to reduce the footprint of the system. It saves installation time and storage space, and limits the number of possible weaknesses.

System hardening is not something you only do at installation time. There is the post-installation phase, in which you start enabling new services, like deploying your favorite monitoring tool. Keep your post-installation phase tidy and clean; a small check like the sketch after the lists below can help.

Guards (monitoring):

  • Use automation tools like Ansible, CFEngine, Chef, and Puppet

Bridges (enhance):

  • Automate your (post-)installation process
  • Minimal installations
  • Remove unneeded components
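As a concrete starting point for this guard, here is a minimal sketch in Python that reports enabled systemd services missing from an expected allowlist; the allowlist itself is an assumption and should be built per system role:

    #!/usr/bin/env python3
    # Minimal sketch: report enabled systemd services that are not on
    # an expected allowlist. The allowlist is an assumption; build one
    # per system role.
    import subprocess

    EXPECTED = {"ssh.service", "cron.service", "rsyslog.service"}

    output = subprocess.check_output(
        ["systemctl", "list-unit-files", "--type=service",
         "--state=enabled", "--no-legend"],
        text=True,
    )

    for line in output.splitlines():
        if not line.strip():
            continue
        unit = line.split()[0]
        if unit not in EXPECTED:
            print(f"Unexpected enabled service: {unit}")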

Tower 2: Software Patching

The second tower focuses on software components. After installation, software packages need to be maintained. Software is like the bricks in the walls: if you don’t maintain them, they crack open and introduce additional weaknesses.

Unfortunately, many companies still fail to keep software properly up-to-date. Administrators are scared to implement patches, due to the chance of things ending up broken. Good testing helps to reduce this risk, while keeping the fortress stable. A simple guard is shown in the sketch after the lists below.

Guards:

  • Software version monitoring
  • Vulnerability scanning

Bridges:

  • Software patching solution
  • Build/test platform for (automatic) security patching
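As a simple guard for this tower, here is a minimal sketch in Python that reports pending package upgrades on an APT-based system; the output parsing is rough, suitable for a report rather than strict automation:

    #!/usr/bin/env python3
    # Minimal sketch: report packages with pending upgrades on an
    # APT-based system. Parsing is rough: good enough for a report,
    # not for strict automation.
    import subprocess

    output = subprocess.check_output(
        ["apt", "list", "--upgradable"],
        text=True,
        stderr=subprocess.DEVNULL,  # apt warns its CLI output is unstable
    )

    upgradable = [line.split("/")[0]
                  for line in output.splitlines() if "/" in line]

    print(f"{len(upgradable)} package(s) with pending upgrades")
    for name in upgradable:
        print(f"  {name}")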

Tower 3: Integrity Checking

The next tower consists of integrity checking. Like in a fortress, we should ensure that unexpected changes are quickly discovered. In a fortress, it could be an unknown guard among our own troops, or malfunctioning chains that open and close the central bridge. Comparing this to our Linux system, a guard could be a process or a binary on disk. Do some of them look strange, or have they been replaced with different files? It might be the work of a digital intruder. Similar to the bridge, common processes which malfunction and crash might be showing signs of bad system integrity. The core idea is illustrated in the sketch after the lists below.

Guards:

  • Implement file integrity monitoring with tools like AIDE
  • Check for malware (ClamAV, OSSEC, rkhunter)

Bridges:

  • Keep software packages up-to-date
  • Perform a system reboot from time to time
  • Don’t use external components unless really needed for proper functioning
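The core idea behind tools like AIDE fits in a few lines: store a cryptographic hash per file, and report any deviation afterwards. A minimal sketch in Python; the watched directory and baseline location are example assumptions, and a real deployment covers more paths and protects the baseline itself against tampering:

    #!/usr/bin/env python3
    # Minimal file integrity sketch: hash files, compare with a stored
    # baseline, report changes. The watched path and baseline location
    # are example assumptions.
    import hashlib
    import json
    import os

    WATCH_DIR = "/usr/sbin"
    BASELINE = "/var/lib/integrity-baseline.json"

    def hash_file(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    current = {}
    for root, _, files in os.walk(WATCH_DIR):
        for name in files:
            path = os.path.join(root, name)
            if os.path.isfile(path) and not os.path.islink(path):
                current[path] = hash_file(path)

    if os.path.exists(BASELINE):
        with open(BASELINE) as f:
            baseline = json.load(f)
        for path, digest in current.items():
            if baseline.get(path) not in (None, digest):
                print(f"CHANGED: {path}")
    else:
        with open(BASELINE, "w") as f:
            json.dump(current, f)
        print("Baseline created")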

Tower 4: System Auditing

Like guards patrolling the fortress, and scouts doing fieldwork, we should also check our systems on a regular basis. Consider it a health check, to ensure our measures are still working. For a fortress, it could be lifting the bridge and inspecting the chains. Or checking the food supply, for times when resources become scarce. In the world of Linux systems, we have to check our software configurations. Check if the main processes are still running as expected, and if log files are being written as expected. A starting point is shown in the sketch after the lists below.

Guards:

  • Review log files
  • Check software configurations
  • Have an external auditor or colleague do an analysis

Bridges:

  • Implement continuous auditing and monitoring tools (scripts, Lynis)
  • Implement system hardening
  • Centralized syslog server
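As a starting point for log review, here is a minimal sketch in Python that counts failed SSH logins per source address; the log path /var/log/auth.log is Debian/Ubuntu-specific, while RHEL-based systems use /var/log/secure:

    #!/usr/bin/env python3
    # Minimal sketch: count failed SSH logins per source IP address.
    # /var/log/auth.log is the Debian/Ubuntu location; RHEL-based
    # systems log to /var/log/secure instead.
    import re
    from collections import Counter

    PATTERN = re.compile(r"Failed password for .* from (\S+)")
    failures = Counter()

    with open("/var/log/auth.log") as log:
        for line in log:
            match = PATTERN.search(line)
            if match:
                failures[match.group(1)] += 1

    for address, count in failures.most_common(10):
        print(f"{count:6d}  {address}")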

Conclusion

Linux systems can be fortified to withstand the most common attacks. Internal and external attackers can otherwise quickly weaken your defenses. From patch management to regular audits, integrity checking, and system hardening: they are all needed to form the pillars of a healthy construction. Your Linux system is not very different from a fortress of medieval times.

Good luck with building your digital fortress and keep your security defenses strong!


