Security Best Practices for Building Docker Images

Docker simplifies software packaging by creating small software units. It starts with a base OS image, followed by software installation, and finally the configuration adjustments. To build your own images, Docker uses small build files, with the less-than-original name Dockerfile.

Docker build files simplify the build process and help create consistent images, over and over. Unfortunately, developers don't always take security into account during the build process, resulting in software that is installed insecurely. In this post we look at how to improve several areas within the build process and secure software properly.

Basics

Normally Docker build files are named Dockerfile and contain a set of instructions to build an image. With newer versions of Docker you can alter this name, but for convenience the default name can still be used. The building itself is done with the docker build command, which parses the build file and performs the steps needed to build your custom image.
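A typical build looks like this. The image name myimage is just an illustration, and the -f flag (available in newer Docker versions) shows how to point at a differently named build file:

# Build an image from the Dockerfile in the current directory
docker build -t myimage .

# Build from a differently named build file
docker build -t myimage -f Dockerfile.production .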

Documentation

While a Dockerfile may look simple to the author of the file, the included steps don't always seem logical to others. Therefore it is wise to implement the following components:

  • Maintainer
  • Comments
  • Version Control

Maintainer

Usually a file has an owner, or maintainer. By specifying the name and contact details, other developers or users of the software can make suggestions. While a Dockerfile may seem perfect right now, it may be less optimal in the future.

Related Dockerfile instruction: MAINTAINER Firstname Lastname <e-mail address> <other information>
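For example (the name and address are placeholders). Note that newer Docker versions deprecate MAINTAINER in favor of a LABEL:

MAINTAINER John Doe <john@example.org>

# Equivalent for newer Docker versions
LABEL maintainer="John Doe <john@example.org>"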

Comments

Like good source code, scripts and build files should be properly documented as well. Lines starting with the # sign are ignored by the build process, while at the same time giving readers of the file valuable information about the steps involved.
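For example, a comment explaining why a step exists (the directory is illustrative):

# Create a dedicated data directory, as the application refuses to start without it
RUN mkdir /data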

Version Control

Most developers already use tools like Git to maintain software versions. With the Dockerfile being an important part of the build process, this file definitely needs a place in version control as well.

Installation of software

Usually one of the first steps in a Docker build file is the installation of the required software components. The best practices describe running a repository update first, followed by a chained installation. This means combining several commands, to properly use the caching mechanism and at the same time stop if one of the commands in the chain fails.

Wrong method

RUN apt-get update
RUN apt-get -q -y install lynis

This is wrong because it may result in caching issues, which will affect proper execution of the second command.

Good method

RUN apt-get update && apt-get -q -y install lynis

When installing just a few packages, you might want to put everything on one line. However, when several packages need to be installed, terminate the line with a backslash and continue on the next line.

RUN apt-get update && \
    apt-get -q -y install lsof \
    lynis

If you want to do things properly, sort the package names for easier reading and clean up after you are done installing. This can be done by adding cleanup steps to the chain: && apt-get clean && rm -rf /var/lib/apt/lists/*

RUN apt-get update && \
    apt-get -q -y install \
    lsof \
    lynis && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

Repositories

When possible, use the original repositories and base images. They are tuned for optimal performance and minimal in size. Sure, you could save a few bits here and there by rolling your own, but it's a minimal gain. Creating your own base image also means you need to keep it up-to-date.

Opening network ports

Most software components listen on a network port for communications. This may be frontend traffic for users (e.g. a web server), or backend traffic like a database connection.

# Expose default HTTPS port
EXPOSE 443

Limit the number of ports to only what is strictly needed for accessing the services. Try to avoid opening up debugging interfaces or other backdoors. These are fine for development, but make sure they don't end up in your production environment.

Installation of external software components

Almost every container needs software. Internally created components can be copied into the image (e.g. with the ADD or COPY statement).

Copying files

When adding files to the image, the COPY statement is preferred. To ensure proper usage of the cache, keep COPY statements separate from each other and from package installation activities. This helps performance, by invalidating just some parts of the cache when a file changes.
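A minimal sketch (the file names are illustrative). Because the configuration file changes more often than the binary, it is copied last, so a change only invalidates the final layer:

# Application binary changes rarely: copy it first
COPY app /usr/local/bin/app

# Configuration changes often: copy it last to limit cache invalidation
COPY app.conf /etc/app/app.conf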

Downloading files

Most software is available online, which means it has to be downloaded. While there is nothing wrong with downloading files, we need to be fairly sure that what we downloaded is what we think it is. In other words, we need to ensure the integrity of the downloaded file as a minimum. Even better is checking the authenticity of a file, by using signed software packages. Bigger software projects usually provide their downloads via HTTPS and with a signature.

The worst possible scenario for a download in a Dockerfile is fetching it via HTTP only, without any checking. Unfortunately this still occurs on a regular basis, making these builds susceptible to man-in-the-middle attacks.

Use cURL/wget instead of ADD

To limit the size of an image, the usage of cURL or wget is preferred. When using ADD, archive files will be extracted into the image, increasing its size. With the goal of keeping things to a minimum, it is better to use the other tools. Additionally, the download command can be chained directly with an integrity check, which is not possible when using ADD.
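A minimal sketch of a download with an integrity check, all in one chained layer. The URL and checksum are placeholders for your own package:

# Download, verify the SHA-256 checksum, extract, and clean up in one layer
RUN curl -fsSL https://example.org/package.tar.gz -o /tmp/package.tar.gz && \
    echo "<expected-sha256>  /tmp/package.tar.gz" | sha256sum -c - && \
    tar -xzf /tmp/package.tar.gz -C /usr/local && \
    rm /tmp/package.tar.gz

Because the temporary file is removed in the same RUN statement, it never ends up in a layer of the final image.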

Disk, Directories and Mounts

The Docker build file allows defining storage areas for your application with the help of the VOLUME statement. Only add those directories that are necessary. Keep things as small and limited as possible, and again, document why the path is required.
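For example (the path is illustrative):

# Persistent storage for application data only
VOLUME /data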

Working directory

Instead of using a combination like "cd /data && ./runscript.sh", the WORKDIR statement changes the current working directory. This helps with readability and simplifies auditing Dockerfiles.
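Applied to the example above, a minimal sketch:

# Change to the data directory and run the script from there
WORKDIR /data
CMD ["./runscript.sh"]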

Running Processes

Processes can be started with the CMD statement. For example, starting Lynis:

CMD ["lynis", "-c", "-Q"]

Ensure that your binary is in the PATH, or use the full path.

Environment settings

By using the ENV statement, we can define environment settings. A common one is extending the PATH variable with your custom binary location.

ENV PATH /usr/local/yourpackage/bin:$PATH

Be aware that environment variables won’t always work the same under different shells or on other platforms.

Active User

When possible, the least amount of permissions should be used, also during the execution of commands. With the USER statement, privileges can be dropped from root to a non-privileged user.
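A minimal sketch, assuming a Debian/Ubuntu base image; the user name is illustrative:

# Create a non-privileged system user and switch to it
RUN groupadd -r appuser && useradd -r -g appuser appuser
USER appuser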

Auditing tool for Docker

After reading these tips, you might want to check your own files. Wouldn't it be great if there was a tool to do this for you? Fortunately, there is a "Docker auditing tool". Download the free Lynis version and audit your build file with:

lynis audit dockerfile <file>

This command will initialize the Docker related tests and perform a security scan on the specified Dockerfile.

Conclusion

If you want to create solid and secure Docker build files, these are the things you should do with your Dockerfile:

  • Add a maintainer
  • Combine different apt/yum commands by “chaining” them.
  • Document the file properly and use versioning.
  • When possible, download files via HTTPS, use signed software packages, or at least perform a checksum validation.
  • Set file permissions as tight as possible. No chmod 777; keep that for your development system.

Got more tips for safe Dockerfiles? We would love to hear them!

Feedback


This article has been written by our Linux security expert Michael Boelen. With a focus on creating high-quality articles and relevant examples, he wants to improve the field of Linux security. No more a web full of copy-pasted blog posts.

Discovered outdated information or have a question? Share your thoughts. Thanks for your contribution.
