Simplifying Security: Choose the Right Toolkit, Not a Single Tool


I applaud many of our customers for being smart. That is not to say others are not, but these customers have made a deliberate choice in the past based on a key insight. They understand that a single security solution that makes your IT environment safe simply does not exist. It is the combination of tools, your toolkit, which does. For the same reason, a carpenter has a tool chest, not a single tool.

As a founder, I get to see the feature requests. Many of them, while they sound great on paper, simply do not belong in our product. Why? We focus on auditing Unix-based environments, so extensive logging features are not part of our product (for that, you might want to use Splunk or similar tools).

These feature requests made me think about the following question: why do we want one single solution for everything?

Pros and Cons

Some benefits of a single solution are immediately clear: good integration, usually lower cost, and less overhead. On the other hand, a single solution is often also a compromise on the benefits of specialization. Another issue with packing too much functionality into one tool is that it becomes harder to use: the more functions have to be implemented, the more complex the user interface becomes. Going back to the carpenter: he would have to handle a tool so big that it is impossible to use.

Making Security Simple

If you want to make security simple, you should start at the beginning: looking at the threats to your business and operations. Second come the associated risks, from business risks to technical ones. Once the threats and risks are clear, you can start creating your toolkit and select the right tools for it. Some companies might put additional focus on logging and event management, while others focus on malware.

The Unix Way

In the field of Unix administration, we apply the rule “do one thing, and do it really well”. This is one particular reason why Unix-based systems are stable: each tool does one single thing. For unclear reasons, we don’t want to apply the same principle when it comes to security. Maybe because it is still seen as a necessary burden? In any case, there is a lesson to learn here: small and simple things are usually a lot stronger. If you want a powerful tool to solve a problem, select the product which specializes in exactly that.

Building Toolkits

If you are building your toolkit, you might wonder where to start. After all, there are so many tools available, both commercial and open source. Let’s extend the carpenter analogy once more. If the carpenter wants to keep his toolkit up-to-date, he will determine what kind of work he did lately and what is yet to come. Within the world of security, we should do the same. Too often, we rush into a product purchase while we don’t really know what we need.

Better planning helps to create budget and to deal more proactively with known and unknown threats. For example, if you are a hosting company, you might not have to deal with malware right now. But if you did your risk assessment properly, you will know there is a fairly high risk of websites being infected with spam scripts. That is a great starting point for filling up your toolkit.

Just filling your toolkit with similar products is a recipe for disaster. Your toolkit should hold a variety of hammers, screwdrivers, and measuring tape. We need tools to measure, like a tool for intrusion detection. Another tool might be there to limit access, or to prevent something from happening at all.


There is no “one size fits all” tool when it comes to security. Consider yourself the carpenter who needs to work on different projects, and select the appropriate toolkit for the job. If you are in the process of selecting a new solution, drop the “it needs to have it all” requirement and consider combining multiple tools. Create your own toolkit to make your job easier, using the power of each single tool.

Happy hardening!


DevOps vs Security: Can Docker make a difference?

One of the pioneers in the world of DevOps is the company Docker Inc. Known for its toolkit around Linux container technology, it propels the way this technology evolves and is promoted to the world. With great achievements and interest from the outside world also comes a lot of pressure. Competing products are showing up, resulting in a battle for features, pricing and customers. Unfortunately for security professionals like us, many security lessons from the past seem to be forgotten. We might be battling the same issues as before…


DevOps movement

In the last few years, the DevOps movement has gained a lot of momentum. One of the reasons might be the need for companies to be more “agile”, which includes releasing software quicker and more often. All with the goal of providing higher quality and lower costs at the same time.

While the benefits of DevOps are great, the role of “being a DevOps” might be confusing for the people themselves. Those who previously were sysadmins or developers suddenly find themselves doing work from both worlds. Let’s be honest: it is close to impossible to be an expert in multiple areas, or to keep up with all new developments.

Do we have a problem?

For auditors and security professionals especially, it is hard to keep up with these new technologies. We simply do not have enough hours per week to dive extensively into each new technology. When a technology is then also limited to one platform, you simply have to make choices and specialize in one area.

Even developers and admins who already use Docker might be confused by all the available parameters. Worse, they only seem to increase with every new Docker release. It is great to see SELinux support, but didn’t we all turn that off on our host systems as well? With the existing time pressure in our work, new features are usually skipped. This is especially true if they take a lot of time to test, deploy and monitor. We all know that security features are usually not in the “simple and easy to deploy” category, at least not without extensive testing.

Docker and security

In the last few releases of Docker, the company showed that security is a subject you cannot simply skip. Some vulnerabilities were patched, and several new security features were introduced. Examples include allowing only a limited set of capabilities and the usage of MAC frameworks. By looking at these new options, we can get a glimpse of what is already possible, and where the technology is still immature. Being a DevOps gets easier due to container technology, and at the same time more complicated as well.

Containers do not contain

“Containers do not contain” is a commonly heard phrase. The current issue with containers is that they do not fully isolate, yet. One of the main reasons is that one important namespace is missing: the one dealing with users and groups. Gaining “root access” within the container, for example, means you get similar privileges on the host system itself. From there it is a small step to compromise the security of the whole machine.

Another example of why containers are not fully isolated is the kernel keyring facility, which stores crypto keys. This tooling cannot yet tell the difference between UID 80 in one container and another user with the same ID elsewhere. Due to these constraints, we should still treat containers similar to a normal host system. For example, running services under the context of the root user was always considered bad practice. It still is, also when using containers.


Namespaces separate several internals of the Linux kernel, which allows it to create different “views” of what a system looks like. This way, multiple environments can run on a single kernel, each with its own processes, users, network routing and mounts. It is like a virtual machine, except that a container is simply a process. This reduces a lot of overhead and provides flexibility when packaging up software. Together with control groups (cgroups for short), the kernel can control processes: with cgroups, the priority and resources of processes can be limited, for example. Namespaces separate one big area into smaller ones; cgroups ensure that all areas behave.
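To make namespaces a bit more tangible: the kernel shows which namespaces a process belongs to under /proc. This small sketch should work on any modern Linux system:

```shell
# Every process belongs to a set of namespaces. The kernel exposes them as
# symbolic links under /proc/<pid>/ns; two processes share a namespace when
# these links refer to the same object.
ls -l /proc/self/ns

# Inspect a single namespace, for example the mount namespace of this shell.
# The output looks like "mnt:[4026531840]"; the inode number differs per system.
readlink /proc/self/ns/mnt
```

Comparing these links between a process on the host and one inside a container is a quick way to see which “views” they do or do not share.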

Namespace complexity

Docker is actually waiting for user namespaces to be finished, so it can leverage all their functions and get one step closer to full containment. The first developments regarding user namespaces are finished and available. For example, the usage of subordinate users and groups is already possible. This function helps the host system map users (and groups) within each container to different users on the host itself. For example, user ID 1000 within the container might be user ID 101000 on the host system. The functionality is definitely more complex than it looks at first sight.

One restriction was the common 16-bit limit for user IDs, limiting them to only 65535. That restriction may even be the easiest part to solve. A little more time goes into adjusting common userland and helper tools to deal with the mapping of users. Examples include tools to create, modify or delete users (useradd, usermod, userdel), helper tools (newuidmap, newgidmap) and the usage of new configuration files like /etc/subuid and /etc/subgid. What looks like an easy extension in one file turns out to affect a lot more files in the end.
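As a sketch of what such a mapping looks like in those new configuration files (the login name and ranges below are purely illustrative):

```
# /etc/subuid - subordinate user IDs per login
# format: <login>:<first subordinate UID>:<count>
johndoe:100000:65536

# /etc/subgid - subordinate group IDs, same format
johndoe:100000:65536
```

With an entry like this, UID 0 inside a container can be mapped to UID 100000 on the host, so “root” in the container is just an unprivileged user outside of it.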


Build, Ship and Run?


Most things in IT start in the building phase. In the case of Docker, you might want to spend a little more time in the phase before that: preparation. Before just building things, you will benefit from a clear strategy. This starts with how you want to divide applications, and what actually makes a container a container. Right now, the consensus seems to be a unit with one primary function (e.g. being a database server, or providing a web application). Whatever you choose, ensure that a definition is in place within your organization, and start building containers according to that strategy.


The building process is one of the most interesting parts. Here images get built, which will then be used for running new containers. At this stage, security awareness and implementation depend entirely on the skill set of the builder. Unfortunately, developers usually feel less urgency to do things the secure way than most system administrators do. Where the developer focuses on “getting it running”, the system administrator cares more about system stability.

The Dockerfile

Docker build files, usually named Dockerfile, are small scripts that guide the build process. They instruct the docker binary how to create an image, and what commands to execute. The first step is defining the base image from which the container will be built. Defining the maintainer usually comes next, followed by installing packages. If you create, tune or analyze a Dockerfile, it is important to know these basic commands, to determine what the container is actually doing. While the commands have fairly self-explanatory names, there are small subtleties to them. Just copy-pasting an existing Dockerfile and adjusting it will not always give the results you are seeking.

Command Function
ADD Copy archives, downloads or data into the image
CMD Define default command to run (usually the service)
COPY Copy data into the image
ENV Define an environment variable
EXPOSE Makes a port available for incoming traffic to the container
FROM Define the base image, which contains a minimal operating system
MAINTAINER Maintainer of the image
RUN Execute a command or script
VOLUME Make directory available (e.g. for access, backup)
WORKDIR Change the current work directory
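To illustrate how the commands above fit together, here is a minimal, hypothetical Dockerfile. The base image tag, package and e-mail address are assumptions for the sketch, not a recommendation:

```dockerfile
# Base image containing a minimal operating system (illustrative tag)
FROM debian:stable

# Who is responsible for this image (hypothetical address)
MAINTAINER devops@example.com

# Install the service during the build; clean up to keep the image small
RUN apt-get update && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*

# Make the web port available for incoming traffic
EXPOSE 80

# Default command: run the service in the foreground
CMD ["nginx", "-g", "daemon off;"]
```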


Best practices

Docker provides extensive documentation regarding the build process, including a best practices document [1]. After analyzing hundreds of build files (Dockerfiles), we can conclude that many builders definitely do not follow these best practices. Issues vary from skipping simple optimization steps when installing software components, up to using “chmod 777” on data directories. If you are using Docker within your organization, analyzing build files will definitely give an idea of the best practices applied in this area. Since we are talking about DevOps and automation, the open source auditing tool Lynis [2] helps you check your Dockerfile for some of these best practices.
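As a quick sketch of that last step (assuming Lynis is installed and your version supports the Dockerfile audit command; the file path is illustrative):

```shell
# Scan a Docker build file for common best practice violations
lynis audit dockerfile ./Dockerfile
```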

Steering the ship

Even with lacking security awareness or missing security features, not all hope is lost. Docker provides a few features:

  • SELinux/AppArmor support – limit which resources processes can access
  • Capabilities support – limit the functions (or “roles”) a process can achieve within the container
  • Seccomp support – allow/disallow which system calls can be used by processes
  • docker exec – no more SSH in containers just for management

Additionally, we can use iptables to limit the network traffic streams even further. On the host system, you might apply technologies like grsecurity and PaX, followed by other generic system hardening practices.
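As a sketch of how some of these features combine on the command line (the image name “myimage” is hypothetical, and flag availability depends on your Docker version):

```shell
# Drop all capabilities, add back only what the service needs,
# and mount the container's root filesystem read-only.
docker run \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --read-only \
  myimage
```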


When we look at the world of vessels and containers, it becomes clear that container technology is not very mature yet. Looking specifically at the security level, there is even more room for improvement. At least Docker gave both the technology and security awareness a boost, resulting in the first signs of a healthy ecosystem. The existing security features definitely look promising and are worth investigating. Let’s hope this article is outdated in a few years. For now, wishing you a great and safe trip.



This article has been published in issue 45 of (IN)SECURE Magazine and reposted with permission.


Tuning auditd: High Performance Linux Auditing


The Linux audit framework is a powerful tool to audit system events. From running executables up to system calls, everything can be logged. However, all this audit logging comes at the price of performance. In this article, we have a look at how we can optimize our audit rules and keep our Linux system running smoothly.

Good auditd performance reduces stress on the Linux kernel and lowers its impact. Before changing anything on your system, we suggest benchmarking your system performance before and after. This way you can see the benefits of your tuning efforts.

Strategy: Rule Ordering

Placing rules in the right order

Many software packages use “order-based rule processing”: each rule is evaluated until one matches. This principle applies to the Linux audit daemon as well.

So one of the biggest areas to tune is the order of the rules. Events which occur the most should be at the top, the “exceptions” at the bottom.

If your Linux audit set-up is ordered alphabetically, you can be assured this configuration is not optimized for performance. Let’s continue tuning auditd in some other areas.
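A sketch of this ordering in an audit rules file (the paths and key names are illustrative):

```
# /etc/audit/audit.rules (fragment): most frequent matches first

# 1. Exclusions for high-volume events at the very top
-a exclude,always -F msgtype=CWD

# 2. Frequently triggered rules next
-w /etc/passwd -p wa -k identity

# 3. Rare "exception" rules at the bottom
-w /boot/grub/grub.cfg -p wa -k bootloader
```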

Strategy: Excluding Events

Determining what message types are used a lot

The challenge with logging events is to ensure that you log all important events, while avoiding the unneeded ones.

Some administrators apply the “just log everything” rule. While it may sound sensible, it is definitely not efficient: this kind of logging increases the processing time of auditd and has a negative impact on the performance of the kernel.

To enhance the logging, we first need to determine what events often show up.

Most events sorted by executable

aureport -ts today -i -x --summary

Most events sorted by system call (syscall)

aureport -ts today -i -s --summary


This will reveal which executable or system call is flooding your audit logs. By defining “-ts today”, we only see recent events.

The output of aureport definitely helps in reducing the amount of logging by disabling some events. Of course, you can also do this for events, files and other types. See the man page of aureport for more details.
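For example, if aureport were to show one system call flooding the logs (getdents is just an illustrative pick), a rule placed at the top of the rule file could drop those events:

```
# Never log this high-volume system call on 64-bit systems
-a exit,never -F arch=b64 -S getdents
```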

Summary of aureport showing events which occurred today

Ignoring events

Now that we know what types of files, events or other messages we have, we can ignore them. For that, we have to make a rule which matches and states either the exclude or the exit statement.

The exit statement is used together with syscalls; for the others we use exclude.

Filter by message type

For example, to disable all “CWD” (current working directory) events, we can use a rule like this:

-a exclude,always -F msgtype=CWD

As the first match wins, exclusions have to be placed at the top of the rule chain. As this is a filter based on a message type, we use exclude.

Filter by multiple rules

Another example is suppressing the messages logged by VMware Tools. For that, we combine multiple conditions in one rule by providing multiple -F parameters. You are allowed up to 64 fields, but usually a few are enough. When using -F, each expression is checked with a logical AND: all fields have to be true to trigger the action of the audit rule.

-a exit,never -F arch=b32 -S fork -F success=0 -F path=/usr/lib/vmware-tools -F subj_type=initrc_t -F exit=-2
-a exit,never -F arch=b64 -S fork -F success=0 -F path=/usr/lib/vmware-tools -F subj_type=initrc_t -F exit=-2

Note: some examples might give different results on older machines. Therefore, always test each rule to determine if it works. Rules which don’t do anything only hurt performance.

Strategy: Determine Buffering Needs

Tuning buffer needs for auditd

By default, auditctl can provide some statistics when using the -s (status) flag. It shows its status (enabled), any related flags, the process ID and log-related statistics (backlog, rate, lost).

# auditctl -s
enabled 1
flag 1
pid 430
rate_limit 0
backlog_limit 320
lost 0
backlog 0

Allowing bigger buffers means a higher demand on memory resources. Depending on your machine, this might be a small sacrifice to ensure that all events are logged.

To determine the best possible buffer size, monitor the backlog value. It should not exceed the backlog_limit setting (in our case 320). Another useful statistic to monitor is the lost value, as it tells you how many events could not be processed. On a healthy system this value should be zero, or close to it.
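Raising the buffer is done with the -b option in the audit rules file. The value below is only an illustrative starting point, to be tuned against your own backlog statistics:

```
# /etc/audit/audit.rules (fragment)
# Increase the kernel audit backlog buffer
-b 8192
```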

Strategy: Monitoring Directories

Use path instead of dir when monitoring a specific directory

There are two ways to monitor the contents of a directory: path or dir.

Depending on what you want to monitor, monitoring subdirectories might not be needed. In that case, it is better to use the path option, as it monitors only that directory. It’s a small adjustment which might save you a lot of unneeded audit logging.
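A sketch of the difference (the directory and key names are illustrative):

```
# dir: watches /etc and everything below it (recursive)
-a exit,always -F dir=/etc/ -F perm=wa -k etc-changes

# path: watches only this directory itself, not its subdirectories
-a exit,always -F path=/etc/ -F perm=wa -k etc-changes
```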


Do you have other ideas for our readers? Share them in the comments below.

