Viewing available test categories in Lynis


When auditing a server, it may be useful to run only a particular category of tests, like firewall related tests. In that case the --tests-category parameter can be used, together with the category name.

Available categories

To determine what categories are available, Lynis has a built-in parameter --view-categories, which lists all available categories. Most of the names are self-explanatory about what kind of tests they include. For more information about the included tests, have a look in the ./include directory, where the files are named tests_<category>.

Example

root@host:~# ./lynis --view-categories
[+] Available test categories
 ------------------------------------
 - accounting
 - authentication
 - banners
 - boot_services
 - crypto
 - databases
 - file_integrity
 - file_permissions
 - filesystems
 - firewalls
 - hardening
 - hardening_tools
 - homedirs
 - insecure_services
 - kernel
 - kernel_hardening
 - ldap
 - logging
 - mac_frameworks
 - mail_messaging
 - malware
 - memory_processes
 - nameservices
 - networking
 - php
 - ports_packages
 - printers_spools
 - scheduling
 - shells
 - snmp
 - solaris
 - squid
 - ssh
 - storage
 - storage_nfs
 - tcpwrappers
 - time
 - tooling
 - virtualization
 - webservers

After selecting the category you want to use, simply run Lynis with ./lynis -c --tests-category firewalls to run all firewall related tests.
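
Putting both steps together, a quick session could look like this (the ./include layout is described above):

# See which files implement the tests per category
ls ./include/tests_*

# Run only the firewall related tests
./lynis -c --tests-category firewalls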

Security Best Practices for Building Docker Images


Docker simplifies software packaging by creating small software units. It starts with a base OS image, followed by software installation and finally the configuration adjustments. For building your own images, Docker uses small build files, with the not-so-original name Dockerfile.

Docker build files simplify the build process and help to create consistent containers, over and over. Unfortunately, developers don’t always take security into account during the build process, resulting in software that is installed insecurely. In this post we look at how to improve several areas within the build process and secure software properly.

Basics

Normally Docker build files are named Dockerfile and contain a set of instructions to build an image. With newer versions of Docker you can alter this name, but for convenience the default name is still commonly used. The building itself is done with the docker build command, which parses the build file and determines what steps should be performed to build your custom image.

Documentation

While a Dockerfile may look simple to the author of the file, the included steps don’t always seem logical to others. Therefore it is wise to implement the following components:

  • Maintainer
  • Comments
  • Version Control

Maintainer

Usually a file has an owner, or maintainer. By specifying the name and contact details, other developers or users of the software can make suggestions. While a Dockerfile may seem perfect right now, it can turn out less optimal in the future.

Related Dockerfile argument: MAINTAINER Firstname Lastname <e-mail address> <other information>

Comments

Like good source code, scripts and build files should be properly documented as well. With the # sign, lines are ignored by the build process, while at the same time giving readers of the file valuable information about the steps involved.
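
As a small sketch of how these two look together in practice (name and e-mail address are placeholders):

# Build a container image with Lynis pre-installed
# Comment lines like these are ignored during the build
MAINTAINER John Doe <john.doe@example.org>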

Version Control

Most developers already use tools like Git to maintain software versions. With the Dockerfile being an important part of the build process, this file definitely needs a place as well.

 

Installation of software

Usually one of the first steps in a Docker build file is the installation of the required software components. Best practice is to first run a repository update, followed by a chained installation. This means combining several commands, to properly use the caching mechanism and at the same time stop if one of the commands in the chain fails.

Wrong: 

RUN apt-get update
RUN apt-get -q -y install lynis

This is wrong because it may result in caching issues, which will affect proper execution of the second command.

Good:

RUN apt-get update && apt-get -q -y install lynis

When installing just a few packages, you might want to put everything on one line. However, when several packages need to be installed, terminate each line with a backslash and continue on the next line.

RUN apt-get update && \
    apt-get -q -y install lsof \
    lynis

If you want to do things properly, sort the package lines for easier reading and clean up after you are done installing. This can be done by adding cleanup steps to the chain: && apt-get clean && rm -rf /var/lib/apt/lists/*

RUN apt-get update && \
    apt-get -q -y install lsof \
    lynis && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

Repositories

When possible, use the original base images and repositories. They are tuned for optimal performance and minimal in size. Sure, with a custom base you could save a few bits here and there, but it’s a minimal gain. Creating your own base image also means you need to keep it up-to-date yourself.
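
For example, start from an official image and pin an explicit version tag (the image and tag below are just an illustration):

# Official base image with an explicit version tag,
# instead of a self-maintained base image
FROM ubuntu:16.04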

 

Opening network ports

Most software components listen on a network port for communications. This may be frontend traffic for the related users (e.g. a web server), or backend traffic like a database connection.

# Expose SSL default port
EXPOSE 443

Limit the number of ports to only what is strictly needed for accessing the services. Try to avoid opening up debugging interfaces or other backdoors. Fine for development, but make sure they won’t end up in your production environment.

 

Installation of external software components

Almost every container needs software. Internally created components can be copied into the image (e.g. with the ADD or COPY statement).

Copying files

When adding files to the image, the COPY statement is preferred. To ensure proper usage of the cache, use separate COPY statements (as opposed to combining them with package installation activities). This helps performance, by invalidating just some parts of the cache.
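
A small sketch of this idea, with hypothetical file names; files that change rarely are copied first, so a code change only invalidates the later layers:

# Configuration changes rarely: copy it early, so this layer stays cached
COPY app.conf /etc/app/app.conf

# Application code changes often: copy it later,
# so only the layers from this point on are rebuilt
COPY src/ /opt/app/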

Downloading files

Most software is available online, which means it has to be downloaded. While there is nothing wrong with downloading files, we need to be fairly sure that what we downloaded is what we think it is. In other words, we need to ensure the integrity of the downloaded file as a minimum. Even better is checking the authenticity of a file, by using signed software packages. Bigger software projects usually provide their downloads via HTTPS and with a signature.

The worst possible scenario for a download in a Dockerfile is doing it via HTTP only, without any checking. Unfortunately this still occurs on a regular basis, making these builds susceptible to man-in-the-middle attacks.

Use cURL/wget instead of ADD

To limit the size of an image, the usage of cURL or wget is preferred. When using ADD, archive files are automatically extracted into the image, increasing its size. With the goal of keeping things to a minimum, it is better to use the other tools. Additionally, the download command can be chained directly with an integrity check, which is not possible when using ADD.
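
A sketch of such a chained download with an integrity check (the URL is a placeholder, and <expected-sha256> should be replaced with the published checksum):

# Download over HTTPS and verify the checksum before unpacking;
# the build stops when the verification fails
RUN curl -fsSL https://example.org/package-1.0.tar.gz -o /tmp/package.tar.gz \
    && echo "<expected-sha256>  /tmp/package.tar.gz" | sha256sum -c - \
    && tar -xzf /tmp/package.tar.gz -C /usr/local \
    && rm /tmp/package.tar.gz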

 

Disk, Directories and Mounts

The Docker build file allows defining storage areas for your application with the help of the VOLUME statement. Only add those directories that are necessary. Keep things as small and limited as possible. And again: document why the path is required.
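
For example (the path is only an illustration):

# Persistent storage for application data, nothing more
VOLUME ["/data"]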

Working directory

Instead of using the combination “cd /data && ./runscript.sh”, the WORKDIR statement changes the current working directory. This helps with readability and simplifies auditing Dockerfiles.
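
Using the same example, the Dockerfile version becomes:

# Set the working directory once, instead of chaining "cd" commands
WORKDIR /data
CMD ["./runscript.sh"]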

 

Running Processes

Processes can be started with the CMD statement. For example, starting Lynis:

CMD ["lynis", "-c", "-Q"]

Ensure that your process is in the path, or use the full path.
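
For example with a full path (the installation directory is an assumption; adjust it to your own layout):

# Full path to the binary, so execution does not depend on $PATH
CMD ["/usr/local/lynis/lynis", "-c", "-Q"]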

Environment settings

By using the ENV statement, we can define environment settings. A common one is defining the path, so it includes your custom binary location.

ENV PATH /usr/local/yourpackage/bin:$PATH

Be aware that environment variables won’t always work the same under different shells or on other platforms.

Active User

When possible, use the least amount of permissions, also during the execution of commands. With the USER statement, permissions can be dropped from root to a non-privileged user.
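
A minimal sketch, assuming a Debian-based image (the user and group names are placeholders):

# Create an unprivileged account and drop root privileges;
# everything below this point runs as the new user
RUN groupadd -r app && useradd -r -g app app
USER app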

 

Auditing tool for Docker

After reading these tips, you might want to check your files. Wouldn’t it be great if there was a tool to do this for you? Fortunately, there is a “Docker auditing tool”. Download the free Lynis version and audit your Dockerfile with:

lynis audit dockerfile <file>

This command will initialize the Docker related tests and perform a security scan on the specified Dockerfile.

 

Conclusion

If you want to create solid and secure Docker build files, these are the things you should do with your Dockerfile:

  • Add a maintainer
  • Combine different apt/yum commands by “chaining” them.
  • Document the file properly and use versioning.
  • When possible, download files via HTTPS, use signed software packages, or at least validate a checksum.
  • Set file permissions as tight as possible. No chmod 777; keep that for your development system.

Other Resources

Want to read more about the subject? Here are some suggestions:

Dockerfile best practices: https://github.com/docker/docker/blob/master/docs/sources/articles/dockerfile_best-practices.md

 

Got more tips for safe Dockerfiles? We would love to hear them!

 

Security Integration: Configuration Management and Auditing


Increased strength when combining tools for automation and security of IT environments

Tools like Ansible, Chef, and Puppet are used a lot for rapid deployment and for keeping systems properly configured. These tools in themselves are great for ensuring consistency across your systems.

So what is Configuration Management?

Configuration management is the art of keeping systems properly configured. Usually companies start small, which equals manual configuration. Each time a new system is deployed, it is configured manually. While there is nothing wrong with this, it becomes an issue when systems are not kept up-to-date.

The earlier mentioned tools help with orchestrating how systems should be configured. This ranges from installed packages up to specific configuration settings. Even software patching can be performed, simplifying the process of keeping systems up-to-date.
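
As a small illustration, a single Ansible ad-hoc command can already install or update a package on all managed systems (host pattern and package are just an example):

# Install or update the lynis package on all Debian/Ubuntu hosts
ansible all -m apt -a "name=lynis state=latest update_cache=yes"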

[Screenshot: example output of the Ansible automation tool]

CM: When to Use?

The best moment to start using configuration management tools is when you realize systems are different (while they were not supposed to be). Usually from 20-25 systems and upwards, automation of configurations will be beneficial in the long term.

Speed

Another clear benefit is the speed of deployment. After all, almost no manual steps are needed. Environments which rely on this speed are therefore definitely good candidates for configuration automation.

Diversity

Companies with a lot of diversity in their operating systems might benefit less from configuration management tools. After all, a lot of exceptions have to be configured, specifying different ways to get the same result. Even the smallest action, like installing a package, requires a totally different set of commands on each operating system.

Picking the right tool(s)

When it comes to the differences between automation tools, there are several important areas. These are mainly the underlying programming language, the structure of the files, and the way communication occurs between the central server and the agents.

Personal preference

Usually a lot comes down to the preference of the system administrator who has to use the tool. If he or she has a strong preference for Python, a tool like Ansible might be more attractive. This simplifies installation (e.g. using pip), but also helps when implementing more advanced scripts, as the logic of the underlying programming language is sometimes visible there.

When selecting a tool, we suggest to have a look at the following attributes:

  • Pricing
  • Community support
  • Availability of snippets
  • Simplicity of tooling, website and documentation
  • Preference of programming language

Security Automation

Configuration management tools are also great for supporting security objectives. One area is system hardening, in which the tooling ensures that certain settings are always enforced. Even if a system administrator or developer changes a setting, it will be reverted to the preferred value.
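
A sketch of this principle with Ansible’s sysctl module; the setting below was picked just for illustration:

# Enforce a kernel setting; a manual change on the host is
# reverted the next time this runs
ansible all -m sysctl -a "name=net.ipv4.ip_forward value=0 state=present"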

Continuous Auditing

By combining configuration management and auditing, we can close the loop of automation. It enables us to perform configuration management, continuous auditing, and security monitoring at the same time. Most of the gaps will be closed by one tool, while the other keeps an eye on existing and new risks. If something can be tuned, it will show up on the auditing side. It is then an easy step to feed this back as input and automatically correct the issue.

Deploying an auditing tool

It should be no surprise that an auditing tool can also be deployed automatically. In the screenshot we can see how our auditing tool Lynis is installed. After installation it is also configured and scheduled for execution. In our case we close the loop even further, by uploading the data to a central node and monitoring for regular audits. If the central system does not receive data for a few days, something is wrong and needs attention. In other words, it equals a failure somewhere in the chain. Only then do we need to do a manual check. This type of automation prevents “ghost” systems, and catches malfunctioning systems or software which otherwise would only get noticed after months…
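
A sketch of such a scheduled run as an /etc/crontab entry (path and schedule are assumptions; the --upload option requires a configured central node):

# Weekly Lynis audit; results are uploaded to the central node
0 3 * * 0 root /usr/local/lynis/lynis -c -Q --upload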

In the upcoming period we will definitely blog more about automation and auditing.
