Linux Audit Framework 101 – Basic Rules for Configuration

Starting with Linux auditing can be overwhelming. Fortunately there is a great tool available to tell the Linux kernel to watch events and log them for us. To give you a quick start with the Linux Audit Framework, we have collected some basic rules for configuring the audit daemon and its rules.

Main Configuration

By default the configuration values in /etc/audit/auditd.conf are suitable for most systems. If you know your system is very low on resources, or very high (e.g. a mainframe), then you might want to adjust some file sizes or buffers.
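
For reference, an excerpt of the tunables involved, with common default values (check the auditd.conf on your own system, as defaults vary per distribution):

# /etc/audit/auditd.conf (excerpt)
# maximum size of a single log file, in MB
max_log_file = 6
# number of log files to keep when rotating
num_logs = 5
# action when the maximum size is reached (e.g. ROTATE, SYSLOG, SUSPEND)
max_log_file_action = ROTATE
# how records are flushed to disk; with INCREMENTAL, flush every freq records
flush = INCREMENTAL
freq = 20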

Auditing Does Not Equal Security

The auditing framework simply monitors and logs events to an audit log. Keep in mind that running an auditing daemon does not increase security in itself. It does however create an audit trail, which helps with detection (e.g. of a security intrusion).

Rules

The audit daemon uses rules to monitor for specific items and create a related event log. Each rule can be provided to the daemon via the configuration file /etc/audit/audit.rules or with the command line utility auditctl. When using the configuration file, keep in mind that just adding new rules is not enough to activate them: the file needs to be reread.

The rules file can be read with the -R option. It should be owned by the root user, or an error like “Error – /tmp/test isn’t owned by root” will show up.
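
A quick sketch with auditctl (the watch on /tmp/test is just an example):

auditctl -w /tmp/test -p wa -k test-watch    # add a single watch rule on the fly
auditctl -R /etc/audit/audit.rules           # reread all rules from the file
auditctl -l                                  # list the currently active rules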

The 3 Types

The auditing framework, and the daemon in particular, knows three types of rules (one of each is shown in the fragment after this list):

  • Basic Auditing Settings
  • File/Directory Watches
  • Syscall Monitoring
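
A minimal audit.rules fragment with one rule of each type (keys and paths are just examples):

# basic auditing settings: buffer size and failure mode
-b 8192
-f 1
# file watch
-w /etc/passwd -p wa -k passwd-changes
# syscall monitoring
-a always,exit -F arch=b64 -S unlink -k delete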

First Match Wins

When a rule matches, the audit daemon stops evaluating the remaining rules. So make sure to put things in the right order of processing, or some rules will never match.
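
A sketch of why ordering matters: the first rule below suppresses unlink events for one (example) audit user ID, so the catch-all rule after it never sees them. With the order reversed, the never rule would have no effect.

-a never,exit -F arch=b64 -S unlink -F auid=1000
-a always,exit -F arch=b64 -S unlink -k delete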

Getting the Right Details

When using a watch on a directory, less information will be logged compared with specific file watches. So when you need all details, monitor files instead.

File Needs to Exist

When using templates or adding new file watches, keep in mind that the files or directories to monitor need to exist on disk.

System Architecture

Some rules might not work across systems if their architectures differ. Where possible, specify the architecture to ensure you are monitoring the right system call.
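
On a 64-bit system, 32-bit and 64-bit system calls are audited separately, so each ABI needs its own rule. A sketch for the open syscall:

-a always,exit -F arch=b32 -S open -k file-open
-a always,exit -F arch=b64 -S open -k file-open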

Use Keys

[Screenshot: file access monitoring with the Linux audit framework, searching by key with ausearch]

To simplify searching and categorizing events, use keys. Multiple keys can be used on a rule, which helps in grouping events while still keeping a separation in place.

The ausearch utility can be told to search by key with the -k parameter, followed by the actual key. This way searching for specific events becomes much easier.
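
For example, create a watch with a key and then search for its events (sshd-config is just an example key):

auditctl -w /etc/ssh/sshd_config -p wa -k sshd-config
ausearch -k sshd-config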

Keys also help in grouping the events. This way you can use them both for auditing purposes and, with specific key combinations, for goals like intrusion detection. It minimizes the number of events you might want to put into a SIEM (Security Information and Event Management) solution.

Prepare Before Auditing

Carefully select which files or events you want to monitor. With more auditing, the system load will increase, and so will the size of the audit log. Too much logging and you will be overwhelmed. The “log everything” approach is definitely not the right mindset when using Linux auditing capabilities.

Check the examples

By default the audit package contains great example files: capp.rules, lspp.rules, nispom.rules and stig.rules.

Conclusion

With these rules you should be able to get the Linux audit framework up and running. The audit framework is powerful for debugging and troubleshooting issues on your system. Additionally, it is of great help in detecting unauthorized events or system intrusion. If you like this subject, we encourage you to check out our other blog posts on it.


Why Linux Security Hardening Scripts Might Backfire

System administrators and engineers love to automate things. In the quest to get everything replaced by a script, automated hardening of systems is often requested. Unfortunately this automation might later backfire, resulting in damaged trust in system hardening.

Why System Hardening?

The act of increasing system defenses is a good practice. It helps protect your valuable data, so it can only be used by authorized people. System hardening itself consists of minimizing services and removing unneeded ones. This also applies to access to the system, by reducing the number of users, network access and protocols. Last but not least, it means changing software configurations to encrypt data and add additional authentication layers.

Hardening Scripts

More and more hardening solutions pop up, promising to simplify hardening. Sure, system hardening is good, and so is automation. But there is no “one size fits all” solution when it comes to system hardening. Each system is different and needs a different level of protection. Your personal notebook might actually perform poorly while browsing, if some network settings are adjusted by a Linux hardening script.

Alternatives

Normally I wouldn’t mind naming a few alternatives to our security auditing tool Lynis. In this case I feel strongly that promoting hardening scripts will actually weaken your security. You might take a shortcut and end up with a false sense of security. Or worse…

Security risks

Some hardening scripts even download external files which they don’t control themselves. As hardening requires root permissions, this is definitely a serious risk. Automating your security controls is fine, but ensure you have 100% control over what is being automated. Another concern is proper testing, which might be hard if you don’t know what the tool is doing.

The Alternative = Auditing + Automation

Instead of just automatically hardening Linux systems with a script, use a combination of auditing and a configuration management tool like Puppet. This way it is easy to detect what might be improved, while applying automation at the same time.

Tailored security

Sure, you might think that we would always advise using an auditing tool, as we created one. But actually, it is free and open source. We honestly believe that measuring security and then acting on it appropriately is the better way to deal with information security. Just running a hardening tool will definitely not give you a security level tailored to your needs, but it might give a false feeling of security.

Continuous security monitoring

[Screenshot: a Unix security audit performed with Lynis]

When using the combination of auditing and automation, divide systems by category, customer, role or any custom attribute. Then give each group the security policy it deserves, and finally measure again with the auditing tool.

This way of working is also often referred to as the PDCA cycle (plan, do, check, act), providing continuous auditing and monitoring.

By using the right combination of testing, researching, applying and testing again, you will strengthen your security defenses more appropriately.

Know Your Hardening

Last but not least, we didn’t go into the importance of knowing what you harden and why. For example, changing kernel settings or installing a firewall might need specific knowledge. What is the point of applying hardening when some settings are not even applicable? Or adding firewall rules, while the firewall itself is not even running?

Each security control requires some knowledge about the subject. That’s why we provided our tool: to first detect what might be improved, and secondly to provide the related background information. Then your expertise of your environment comes into play, where you can determine which controls are appropriate. A ready-to-use Linux hardening script will never beat that.

Happy hardening!

Docker Security: Best Practices for your Vessel and Containers

Everything you need to know about Docker security.

Introduction to Docker

Operating systems like CoreOS use Docker to run applications on top of their lightweight platform. Docker, in its turn, provides utilities around Linux container technologies (e.g. LXC, systemd-nspawn, libvirt). Previously Docker could be described as “automated LXC”; now it’s actually even more powerful. What it definitely does is simplify and enhance the possibilities of container technology.

Continuous delivery

Rolling out containers is quick and very easy. It helps companies improve their development workflows. Developers can perform testing and deploy their applications much more easily than before. Additionally, the use of Docker enhances the process from development up to running software in production. This is achieved by using smaller units, which are easier to create, monitor and secure.

Supporting multiple technologies

Docker uses the possibilities of management tooling like libvirt and systemd-nspawn. Development is ongoing, and more and more features are supported to simplify the management of containers. With each of the components getting more stable, the base of Docker has reached a level of stability that allows using it in production.

Enhancing security

With the right measures, Docker will also enhance security. For example, due to running (smaller) individual units, controlling them is easier. One benefit of small containers is giving application owners and administrators better insight into what software, protocols and network flows are needed for individual services.

IT architects and security professionals will definitely benefit from container technology as well. Architects gain fine-grained building blocks to define new services and enhance existing ones. Security professionals benefit from better segmentation, and from minimizing the permissions needed in each individual container.


Docker Security and Risks

Software packages can solve existing security threats, or actually introduce new risks. This is also the case when using Docker. While it can help in reducing risks by using compartmentalization, the implementation might have its flaws.

Risks

One common threat of new services is a low(er) stability, which forms a risk to the availability of a service. Another one is information disclosure, as the service might lack appropriate controls. Usually new technologies need to add new features quickly, which might result in sloppy programming. Often this results in software vulnerabilities, including security related ones.

Unfortunately Docker has already had its share of security vulnerabilities, but the project has taken a more active stance to improve the security of its products.

Methods and best practices

To reduce the risks when using fairly new technologies, we will have a look at the methods available to Docker. In particular, we look at how Docker can increase security. After that, we provide some best practices when dealing with Docker containers.

Maturity level

Container management is a fairly new technology, which leaves many professionals with a knowledge gap in this area. Additionally, not many people are capable (yet) of making a proper security assessment. For example, auditors might not be able to ask the right questions regarding the implementation of containers.

This maturity level risk also includes a lack of technical auditing of containers, or difficulties in maintaining proper and up-to-date documentation. After all, the flexibility containers provide also means containers can more easily run on a different system at any given time. This might need another level of documentation, to reflect the possibilities of each individual unit.

Containers do not (fully) contain

While containers are used to compartmentalize and limit resources, they actually don’t fully contain. For example, a process running in a container with UID 1000 will have the privileges of UID 1000 on the underlying host as well. Along these lines, a process running as root (UID 0) in a container has root-level privileges on the underlying host when interacting with the kernel.

This risk will soon be mitigated, when user namespaces are fully implemented. The first step has already been made in the form of subordinate user IDs (subuid). This helps map existing user IDs on the host system to different user IDs within each container.


Security Benefits of Docker

Segregation of applications

Normally applications all run on the same host system. By using container technology we can segregate them, making it easier to determine traffic flows.

Flexible attitude

With containers being smaller individual units, they become dynamic and flexible. The workflow to maintain them is more flexible as well. Great for security patching, testing and releasing updated containers into production.

Focus on automation

Docker has a clear focus on automation. They have supporting products like Docker Machine, Swarm (clustering) and Compose, to simplify management of many containers.

Limiting information disclosure

Containers can have limited resources assigned. This helps us limit the amount of information available to the system (and to an evil attacker). Each container gets the following components:

  • Network stack
  • Process space
  • File system instances

Limiting resources is achieved by using namespaces. Namespaces are like a “view”, which only shows a subset of all resources on the system. This provides a form of isolation: processes running within a container cannot see, or affect, processes in other containers, or the host system itself.
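
Namespaces can be explored outside Docker as well. A quick sketch with the unshare utility (util-linux), starting a shell in its own PID namespace with a private /proc:

sudo unshare --fork --pid --mount-proc bash
ps aux    # inside the new namespace, only bash and ps itself are visible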


Protection Methods

By using Docker properly, some of its defenses can be leveraged. Unfortunately the tooling does not actively help yet to use all possibilities. You are the one who has to properly configure, use and update Docker. Our hope is that Docker will become more strict with some options in the future, or at least advise the user to some extent.

Limited capabilities

Linux has support for “capabilities”, which can be seen as roles. A role could be opening a network socket, to craft a packet and put it onto the wire. Normally these kinds of roles are only available to the root user. By splitting them into capabilities, they can be assigned to individual processes as well. This way a piece of software can still open up a socket (with “root permissions”), while not being able to load a new kernel driver. For more details about capabilities, see our previous blog post Linux Capabilities 101.

Containers will run with a limited capability set. So even if someone breaks into the container, the host system is to some extent protected. The sketch after the list below shows how to inspect the capability set in use.

Examples:

  • Mounting operations
  • Access to raw sockets (prevent opening privileged ports, spoofing)
  • Some file system operations (mkdev, chown, chattrs)
  • Loading kernel modules
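
To inspect the capability bounding set a process (or container) actually runs with, combine the /proc status file with the capsh utility from libcap; the mask passed to --decode below is just an example value, as it depends on your Docker version:

grep CapBnd /proc/self/status    # bounding set as a hexadecimal mask
capsh --decode=00000000a80425fb  # translate a mask into capability names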

The configuration and usage of capabilities will be covered later. For now it is good to know that Docker by default drops a list of capabilities:

  • CAP_AUDIT_WRITE = Audit log write access
  • CAP_AUDIT_CONTROL = Configure Linux Audit subsystem
  • CAP_MAC_OVERRIDE = Override kernel MAC policy
  • CAP_MAC_ADMIN = Configure kernel MAC policy
  • CAP_NET_ADMIN = Configure networking
  • CAP_SETPCAP = Process capabilities
  • CAP_SYS_MODULE = Insert and remove kernel modules
  • CAP_SYS_NICE = Priority of processes
  • CAP_SYS_PACCT = Process accounting
  • CAP_SYS_RAWIO = Modify kernel memory
  • CAP_SYS_RESOURCE = Resource Limits
  • CAP_SYS_TIME = System clock alteration
  • CAP_SYS_TTY_CONFIG = Configure tty devices
  • CAP_SYSLOG = Kernel syslogging (printk)
  • CAP_SYS_ADMIN = All others

Usage of seccomp

Secure Computing, or seccomp, helps with the creation of sandboxes. It does so by defining what system calls should be blocked. The latest version of seccomp provides this syscall filtering by using the Berkeley Packet Filter (BPF), previously used for filtering network traffic.

Containers currently have the following syscalls disabled (since LXC 1.0.5):

  • kexec_load
  • open_by_handle_at
  • init_module
  • finit_module
  • delete_module

[Screenshot: system calls blocked by default in Linux containers]

When any of the blocked syscalls is made, the kernel will send a SIGKILL signal to stop the related process.
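
For reference, such a blacklist roughly looks as follows in LXC’s version 2 seccomp policy format (a sketch; see the common.seccomp file shipped with your LXC version for the authoritative contents):

2
blacklist
[all]
kexec_load
open_by_handle_at
init_module
finit_module
delete_module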

Digital Signature Verification

Starting with Docker version 1.3.0, all images are verified after downloading. This is a great step in enhancing the level of trust for downloads. This is why we have done this for our auditing tool Lynis as well. Like our website, Docker also serves its website HTTPS-only. Another level of trust, ensuring you are at the right place when downloading the tools from Docker.


Current issues with Docker

Root permissions

Right now there is a small issue left with Docker, which is the requirement of running the daemon with root permissions. Docker is aware of it and plans to define well-audited sub-processes, which no longer require root permissions. Additionally, each sub-process will run with a very limited scope, increasing the security level of each component and enhancing stability.

Lack of full User namespace implementation

Currently there is still no full user namespace implementation, something which is out of the control of Docker. When the LXC userland tools have evolved to include the support, Docker can leverage the possibilities. The first steps, like user mapping, are available, so full support is expected soon.

User 0 in container = User 0 on host

One of the risks due to the missing user namespaces is that the mapping of users from the host to the containers is still a one-to-one mapping: user 0 in the container is equal to user 0 on the host. In other words, if your container is compromised, it doesn’t take much to compromise the full host. Fortunately this is work in progress. LXC already supports a mapping option, to map user ID 0 in the container to another (high) ID on the host.

Default allow all

By default all IP traffic is allowed between containers. This means they can ping each other, but also send other forms of traffic. It would have been better if Docker applied a “deny all by default” principle. This would force the maintainer of the container to think about what kind of traffic is needed between individual containers.

Fortunately traffic can be filtered, and doing so is absolutely advised for systems in production.


Best Practices

With all these risks and possibilities, let’s extract some of the lessons. These best practices help you create safer services and enhance the security of existing containers.

Do not run software as root

This tip might sound too simple, but still many developers run their software as the root user. Containers still cannot fully contain, which might result in a full host compromise if a container is compromised. Therefore, run your software packages like you would run them on a normal host.
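
When starting containers, the -u option of docker run forces a specific (unprivileged) user; UID 1000 here is just an example:

docker run -u 1000 -i -t centos bash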

Use Docker in combination with AppArmor/SELinux/TOMOYO

Ubuntu comes with ready-to-use AppArmor templates for LXC. It is always a good thing to know what your software does. This includes knowing what paths and permissions your software components need to function properly. With this information each piece can be restricted to the bare minimum needed, preventing permission escalation and unauthorized information disclosure (or worse).

To achieve the right policies, make sure to monitor your applications from the start, including the related framework you are using. Each of them provides the means to monitor, so use them.

Within the container configuration the related AppArmor profile can be defined with lxc.aa_profile.
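
For example, to use the default profile shipped with Ubuntu’s LXC packages (profile names may differ per distribution):

lxc.aa_profile = lxc-container-default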

Use seccomp to limit syscalls

Support for seccomp is available to (at least) CentOS, Debian, Fedora, Gentoo, Oracle, Plamo and Ubuntu. You can use seccomp by altering the container configuration and define the seccomp rule set to be used:

lxc.seccomp = /usr/share/lxc/config/common.seccomp

For Docker this functionality can be activated by using the --lxc-conf parameter to docker run.

LXC configuration option: lxc.seccomp
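
Combined, a hypothetical invocation could look like this (it requires Docker’s LXC execution driver):

docker run --lxc-conf="lxc.seccomp = /usr/share/lxc/config/common.seccomp" -i -t centos bash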

Limit traffic with iptables

By default all containers use the docker0 interface as a bridge. Like on a normal host you can limit traffic, to block unauthorized traffic streams.
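
A sketch: drop traffic between containers on the docker0 bridge, while still allowing replies to established connections (Docker’s --icc=false daemon option achieves something similar):

iptables -I FORWARD -i docker0 -o docker0 -j DROP
iptables -I FORWARD -i docker0 -o docker0 -m state --state RELATED,ESTABLISHED -j ACCEPT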

For full details we suggest reading the advanced networking blog post at Docker.

GRSEC and PaX

When possible, use a hardened Linux kernel with kernel patches. Grsecurity and PaX are two examples which help in hardening the host system.

Using user mappings

To counter the issue that user 0 in a particular container equals root on the host system, LXC allows you to remap user and group IDs. The configuration file entry would look something like this:

lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536

This maps the first 65536 user and group IDs within the container to 100000-165535 on the host. The related files on the host are /etc/subuid and /etc/subgid. This mapping technique is named subordinate IDs, hence the “sub” prefixes.

For Docker this means adding it as a --lxc-conf parameter to docker run:

docker run --lxc-conf="lxc.id_map = u 0 100000 65536" --lxc-conf="lxc.id_map = g 0 100000 65536"

Use --cap-drop and --cap-add

(since Docker 1.2.0)

With the earlier described Linux capabilities, we can tell Docker what specific roles should be given to a container. Actually, we can use both the add and drop options together. By first allowing all and then dropping some capabilities, we can limit what a container is allowed to do. The short version is just dropping permissions. The alternative is just adding the related capabilities, which more resembles the “deny all” principle.

docker run --cap-drop=CHOWN -i -t centos bash
docker run --cap-add=ALL --cap-drop=MKNOD -i -t centos bash

A (fairly) safe list of capabilities to drop is:

  • audit_control
  • audit_write
  • mac_admin
  • mac_override
  • mknod
  • setfcap
  • setpcap
  • sys_admin
  • sys_boot
  • sys_module
  • sys_nice
  • sys_pacct
  • sys_rawio
  • sys_resource
  • sys_time
  • sys_tty_config

See the capabilities(7) man page for all details about these capabilities.

LXC configuration options: lxc.cap.drop and lxc.cap.keep

Do not run SSH in containers

Use "docker exec -it mycontainer bash" instead to manage your containers.

Do not use --privileged on containers

(since Docker 1.3.0)

For containers which already have SELinux/AppArmor support, use --security-opt instead. This gives the container the appropriate security profile, instead of giving away too many permissions within the container.

docker run --security-opt label:type:svirt_apache -i -t centos bash

Related options for SELinux:

  • --security-opt="label:user:USER" (set label user)
  • --security-opt="label:role:ROLE" (set label role)
  • --security-opt="label:type:TYPE" (set label type)
  • --security-opt="label:level:LEVEL" (set label level)
  • --security-opt="label:disable" (disable label confinement completely)

Options for AppArmor:

  • --security-opt="apparmor:PROFILE" (set AppArmor profile)

For more options have a look at the Docker Run Reference.

Upgrade your Docker version on a regular basis

Most software packages have bugs and small programming errors. With Docker also being under heavy development, bugs are solved and new features added. Regularly updating Docker and making it part of your software patch management process is advised.

Secure Docker client connections

Set the DOCKER_HOST and DOCKER_TLS_VERIFY variables to use TLS for connecting to Docker instances. See https://docs.docker.com/articles/https/ for detailed instructions on setting this up.
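
A minimal sketch (the host name is an example; 2376 is the conventional port for the TLS-protected Docker socket):

export DOCKER_HOST=tcp://docker.example.com:2376
export DOCKER_TLS_VERIFY=1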


Vulnerabilities in Docker

Over the years, vulnerabilities will show up. Some of the Docker security messages are collected here for archival purposes:

Docker 1.3.3 fixes: https://groups.google.com/forum/#!msg/docker-user/nFAz-B-n4Bw/0wr3wvLsnUwJ
Docker 1.3.2 fixes: https://groups.google.com/forum/#!topic/docker-user/IrjXTHA6jJc
Docker 1.3.1 fixes: CVE-2014-5277 and CVE-2014-3566 https://groups.google.com/forum/#!topic/docker-user/oYm0i3xShJU


History

July 2008 – Kernel namespaces introduced

March 2013 – Initial release of Docker

October 2014 – Docker release 1.3

January 2015 – First release of this article



Changes

Found something outdated in this article? Add it to the comments. This article is kept up-to-date on a regular basis, together with the developments of Docker itself.
