Securing Docker Containers
How to increase the security of your containers
Last modified: 16 April 2023
Running Docker containers securely can be challenging and involves several important steps to protect both the host system and the data inside the container. With default configurations, which are still very common, privilege escalation can be trivial.
The goal of this post is to give a few pointers on how to run containers in a more secure way.
Use an up-to-date Docker image: Make sure you use the latest Docker image available for the software you want to run. Outdated images often ship known, unpatched vulnerabilities.
Use a non-root user: By default, Docker containers run as root, which is a security risk. It is better to run the container as a non-root user: either create a dedicated user in the Docker image or pass the --user flag when starting the container. Remember to adjust file permissions accordingly for your application, especially in the case of mounted volumes. Moreover, on the host, users are often told to add themselves to the docker group, which is effectively equivalent to root. A better option is to run rootless Docker altogether to avoid any privilege escalation, complete guide here.
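As a minimal sketch, a Dockerfile can create a dedicated user and switch to it; the user name, UID, and paths below are arbitrary examples:

```dockerfile
FROM alpine:3.18

# Create an unprivileged group and user (names and IDs are arbitrary examples)
RUN addgroup -g 1001 app && adduser -D -u 1001 -G app app

# Hand the application files to that user only
COPY --chown=app:app ./app /app

# All following instructions, and the container process itself, run unprivileged
USER app

CMD ["/app/run.sh"]
```

With USER set in the image, `docker run` starts the process as UID 1001 without any extra flags; `--user` can still override it at run time.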
Limit the network: If no networking is needed, it can be completely disabled, removing the container as a starting point for lateral movement on the network.
Limit container capabilities: By default, Docker grants containers a default set of Linux capabilities, most of which are rarely needed. It is good practice to drop them using the --cap-drop flag and add back only what is strictly required.
Use read-only file systems: Mount the container’s file system as read-only to prevent any changes to it at runtime.
Limit resource exhaustion: Use the --memory and --cpus flags to limit the amount of memory and CPU resources that the container can use.
Use Docker secrets: Avoid hard-coding sensitive information such as passwords and API keys in Dockerfiles or environment variables; use Docker secrets instead (natively available in swarm mode, info here).
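A minimal sketch of swarm-mode secrets, assuming a single-node swarm; the secret and service names are arbitrary examples:

```shell
# Secrets require swarm mode; initialise it if not already active
docker swarm init

# Create a secret from stdin ("db_password" is an example name)
printf 'S3cr3t!' | docker secret create db_password -

# The service can read the secret at /run/secrets/db_password,
# delivered via an in-memory file system rather than an env variable
docker service create --name db --secret db_password postgres:16
```

Unlike environment variables, the secret never appears in `docker inspect` output or in the image layers.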
Use Docker Compose (my favourite): Use Docker Compose to manage multiple containers, networks, and volumes. It makes it easy to define the entire application stack in a single file and start it with one command.
Securely Run Docker from Command Line
Some good command line arguments examples:
- -it starts the container in interactive mode and attaches a TTY for console access.
- --network=none disables networking for the container.
- --read-only mounts the container’s file system as read-only to prevent any changes.
- --security-opt no-new-privileges prevents the container from acquiring additional privileges.
- --security-opt apparmor=docker-default enables the default AppArmor security profile for the container.
- --cap-drop=all drops all Linux capabilities from the container, limiting its abilities.
- --rm automatically removes the container when it exits.
- --user 1001 runs the container as a non-root user with UID 1001.
- --memory=512m limits the amount of memory the container can use to 512 MB.
- --tmpfs /tmp mounts a temporary in-memory file system at /tmp to avoid writing to disk.
- --mount type=tmpfs,destination=/var/tmp does the same for a specific directory such as /var/tmp, /var/log, or /var/lib/mysql.
- --secret id=my_secret,src=/run/secrets/my_secret mounts a secret in the container’s file system to securely store and manage sensitive information.
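Putting several of these flags together, a hardened interactive run might look like the following; the image, UID, and memory limit are illustrative choices, not requirements:

```shell
docker run -it --rm \
  --user 1001 \
  --network=none \
  --read-only \
  --tmpfs /tmp \
  --cap-drop=all \
  --security-opt no-new-privileges \
  --security-opt apparmor=docker-default \
  --memory=512m \
  alpine:3.18 sh
```

The shell inside this container runs unprivileged, cannot write anywhere except /tmp (which lives in memory), has no network access, and the container is deleted as soon as the shell exits.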
Official documentation on the run command here.
Securely run containers with Docker Compose
Example file for docker compose:

```yaml
version: '3'
services:
  web:
    image: nginx:alpine
    ports:
      - "443:443"
    security_opt:
      - no-new-privileges:true
      - seccomp:/path/to/seccomp/profile.json
    cap_drop:
      - ALL
    read_only: true
    tmpfs:
      - /tmp
    volumes:
      - type: bind
        source: /path/to/nginx.conf
        target: /etc/nginx/nginx.conf
        read_only: true
    restart: always
```
One more important piece of advice is to carefully set the right file permissions on the mounted volumes. Read-only volumes, easily defined with for example -v /data:/data:ro, make sure the container cannot alter the data, further restricting the locations where any write action is allowed at all.
Giving the container read-only permissions also reduces the risk of remnant data being left behind on the file system or volume. Temporary data can reside in memory only, using temporary file systems (tmpfs), which are deleted when the container stops. The same happens to the container itself when run with --rm, which deletes the whole container after it exits.
Running Docker as root is simpler, offers more flexibility, and requires less configuration and testing, but in my opinion it exposes the host system to an uncontrolled and unacceptable level of risk, and extra care needs to be taken when deploying containers this way.
Finally, immutable containers, with a read-only root file system and tmpfs for scratch space, are an approach I like a lot and would recommend.
I hope you found this post helpful. If you have any questions or feedback, feel free to leave a comment below.