DevOps vs Security: Can Docker make a difference?
One of the pioneers in the DevOps world is the company Docker Inc. Known for its toolkit around Linux container technology, the company propels the way this technology evolves and is promoted to the world. With great achievements and interest from the outside world also comes a lot of pressure. Competing products are showing up, resulting in a battle for features, pricing and customers. Unfortunately for security professionals like us, many security lessons from the past seem to be forgotten. We might be battling the same issues as before…
DevOps movement
In the last few years, the DevOps movement gained a lot of momentum. One of the reasons might be the need for companies to be more “agile”: releasing software quicker and more often, all with the goal of providing higher quality and lower costs at the same time.
While the benefits of DevOps are great, the role of “being a DevOps” might be confusing for the people themselves. Those who previously were sysadmins or developers suddenly find themselves doing work from both worlds. Let’s be honest, it is close to impossible to be an expert in multiple areas, or to keep up with all new developments.
Do we have a problem?
Especially for auditors and security professionals, it is hard to keep up with these new technologies. We simply do not have enough hours per week to dive extensively into each new technology. When a technology is then also limited to one platform, you simply have to make choices and specialize in one area.
Even developers and admins who already use Docker might be confused by all the available parameters. Worse, they only seem to increase with every new Docker release. It is great to see SELinux support, but didn’t we all turn that off on our host systems as well? With the existing time pressure in our work, new features are usually skipped, especially if they take a lot of time to test, deploy and monitor. We all know that security features usually do not fall into the “simple and easy to deploy” category, at least not without extensive testing.
Docker and security
In the last few releases of Docker, the company showed that security is a subject you cannot simply skip. Some vulnerabilities were patched, and several new security features were introduced. Examples include restricting containers to a limited set of capabilities and the usage of MAC frameworks. By looking at these new options, we can get a glimpse of what is already possible, and where the technology is still immature. Being a DevOps gets easier due to container technology, and at the same time more complicated as well.
Containers do not contain
“Containers do not contain” is a commonly heard phrase. The current issue with containers is that they do not fully isolate, yet. One of the main reasons is that one important namespace is missing: the one dealing with users and groups. Gaining “root” access within the container, for example, means you get similar privileges on the host system itself. From there it is a small step to compromise the security of the whole machine.
Another example of why containers are not fully isolated is the kernel keyring, which stores crypto keys. This facility cannot yet tell the difference between UID 80 in one container and another user with the same ID in a different container. Due to these constraints, we should still treat containers similar to a normal host system. Running services under the context of the root user, for example, was always considered bad practice. It still is, also when using containers.
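To illustrate that last point, here is a minimal sketch of a Dockerfile that drops root privileges before the service starts; the account name and service path are hypothetical:

    # Create an unprivileged system account and switch to it,
    # so the service does not run as root inside the container.
    FROM debian
    RUN useradd --system --create-home appuser
    USER appuser
    CMD ["/home/appuser/run-service"]

Should a container like this be compromised, the attacker at least does not start with root privileges that map straight onto the host.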
Namespaces
Namespaces separate several internals of the Linux kernel, which allows it to create different “views” of what a system looks like. This way multiple environments can run on a single kernel, each with its own processes, users, network routing and mounts. It is like a virtual machine, except that a container is simply a single process. This reduces a lot of overhead and provides flexibility when packaging up software. Together with control groups, cgroups for short, the kernel can control processes. With cgroups, for example, the priority and resources of processes can be controlled. Namespaces separate one big area into smaller ones; cgroups ensure that all areas behave.
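Both concepts can be seen with standard tooling. As a sketch (the image name is just an example), Docker exposes cgroup limits directly on the command line, and util-linux ships the unshare tool for experimenting with namespaces:

    # Run a container with cgroup limits: at most 256 MB of memory
    # and a reduced share of CPU time.
    docker run -d --memory 256m --cpu-shares 512 nginx

    # Start a shell in new PID and mount namespaces (requires root);
    # remount /proc inside to let 'ps' show only this namespace.
    unshare --pid --mount --fork /bin/bash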
Namespace complexity
Docker is actually waiting for user namespaces to be finished, so it can leverage all their functions and get one step closer to full containment. The first few developments regarding user namespaces are finished and available. For example, the usage of subordinate users and groups is already possible. This function helps the host system to map users (and groups) within each container to different users on the host itself. For example, user ID 1000 within the container might be user ID 101000 on the host system. The functionality is definitely much more complex than it looks at first sight.
One restriction was the common 16-bit limit for user IDs, capping them at 65535. Maybe this restriction is even the easiest part to solve. A little more time goes into adjusting common userland and helper tools to deal with the mapping of users. Examples include tools to create, modify or delete users (useradd, usermod, userdel), helper tools (newuidmap, newgidmap) and the usage of new configuration files like /etc/subuid and /etc/subgid. What looks like an easy extension in one file turns out to affect a lot more files in the end.
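As a sketch of what such a mapping looks like (the account name and ranges are examples), a single line in /etc/subuid assigns a block of subordinate user IDs to a host account:

    # /etc/subuid format: <account>:<first subordinate UID>:<count>
    # With this entry, UID 0 inside a container maps to 100000 on
    # the host, UID 1000 to 101000, and so on.
    remap-user:100000:65536

The matching /etc/subgid file does the same for group IDs.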
Build, Ship and Run?
Most things in IT start in the building phase. In the case of Docker, you might want to invest a little more time in the phase before that: preparation. Before just building things, you will benefit from a clear strategy. This starts with how you want to divide applications, and what actually makes a container a container. Right now the consensus seems to be a unit that has one primary function (e.g. be a database server, or provide a web application). Whatever you choose, ensure that there is a definition in place within your organization, and from there start building containers according to that strategy.
Building
The building process is one of the most interesting parts. Here images get built, which will then be used for running new containers. At this stage, security awareness and implementation depend entirely on the skill set of the builder. Unfortunately, developers usually feel less urgency to do things the secure way than most system administrators do. Where the developer focuses on “getting it running”, the system administrator cares more about system stability.
The Dockerfile
Docker build files, usually named Dockerfile, are small scripts that guide the build process. They instruct the docker binary how to create an image, and what commands to execute. The first step is defining the base image, from which the container will be built. Usually defining the maintainer is next, followed by installing packages. If you create, tune or analyze a Dockerfile, it is important to know these basic commands, to determine what the container is actually doing. While the commands might have very self-explanatory names, they have small subtleties in them. Just copy-pasting an existing Dockerfile and adjusting it will not always give the results you are seeking.
Command       Function
ADD           Copy archives, downloads or data into the image
CMD           Define the default command to run (usually the service)
COPY          Copy data into the image
ENV           Define an environment variable
EXPOSE        Make a port available for incoming traffic to the container
FROM          Define the base image, which contains a minimal operating system
MAINTAINER    Define the maintainer of the image
RUN           Execute a command or script
VOLUME        Make a directory available (e.g. for access, backup)
WORKDIR       Change the current work directory
Best practices
Docker provides extensive documentation regarding the build process, including a best practices document [1]. After analyzing hundreds of build files (Dockerfiles), we can conclude that many builders definitely do not follow these best practices. Issues vary from skipping simple optimization steps when installing software components, up to using “chmod 777” on data directories. If you are using Docker within your organization, analyzing build files will definitely give an idea of how well best practices are applied in this area. Since we are talking about DevOps and automation, the open source auditing tool Lynis [2] helps you to check for some of these best practices in your Dockerfile.
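To make the optimization point concrete, compare a naive set of instructions with a cleaner equivalent (the package is just an example):

    # Naive: three separate layers, cached package lists left
    # behind, and a world-writable data directory.
    RUN apt-get update
    RUN apt-get install -y nginx
    RUN chmod 777 /var/www/html

    # Better: a single layer, cleaned up afterwards, and no
    # world-writable permissions.
    RUN apt-get update && apt-get install -y nginx && \
        rm -rf /var/lib/apt/lists/*

Depending on your Lynis version, a build file can then be checked with a command along the lines of “lynis audit dockerfile <file>”.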
Steering the ship
Even with lacking security awareness or missing security features, not all hope is lost. Docker provides a few features:
- SELinux/AppArmor support - limit what resources processes can access
- Capabilities support - limit the functions (or “roles”) a process can achieve within the container
- Seccomp support - allow or disallow which system calls can be used by processes
- docker exec - no more SSH in containers just for management
Additionally, we can use iptables to limit the network traffic streams even further. On the host system, you might apply technologies like grsecurity and PaX, followed by other generic system hardening practices.
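A sketch of how several of these runtime options combine on the command line (the image name and profile path are examples, and the exact flag syntax depends on your Docker version):

    # Drop all capabilities and add back only what the web server
    # needs, then apply a seccomp profile to restrict system calls.
    docker run -d \
      --cap-drop ALL --cap-add NET_BIND_SERVICE \
      --security-opt seccomp:/etc/docker/seccomp-profile.json \
      nginx

    # Manage the running container without SSH.
    docker exec -it <container-id> /bin/bash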
Conclusion
When we look at the world of vessels and containers, it becomes clear that container technology is not very mature. When we look specifically at the security level, there is even more room for improvement. At least Docker gave both the technology and security awareness a boost, resulting in the first signs of a healthy ecosystem. The existing security features definitely look promising and worth investigating. Let’s hope this article is outdated in a few years. For now, wishing you a great and safe trip.
[1] https://docs.docker.com/articles/dockerfile_best-practices/
[2] https://github.com/CISOfy/Lynis/
This article has been published in issue 45 of (IN)SECURE Magazine and reposted with permission.