Docker is a technology for packaging components of your stack as isolated containers. It’s common practice to run each of your processes in its own container, creating a clean divide between components. This enhances modularity and lets you access the scalability benefits of containerization.
There can still be situations where you want to run multiple services within a single container. While this doesn’t come naturally in the Docker ecosystem, we’ll show a few different approaches you can use to create containers with more than one long-lived process.
Identifying the Problem
Docker containers run a single foreground process. This is defined by the image’s ENTRYPOINT and CMD instructions. ENTRYPOINT is set within an image’s Dockerfile, while CMD can be overridden when creating containers. Containers automatically stop when their foreground process exits.

You can launch other processes from the ENTRYPOINT or CMD, but the container will only stay running while the original foreground process is alive. Keeping the container operational through the combined lifespan of two independent services isn’t directly possible using the entrypoint mechanism alone.
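You can mimic this behavior in a plain shell on your host. The sketch below (not Docker-specific) shows a parent process exiting while its background child is still alive; a container’s lifetime follows its PID 1 process in exactly the same way:

```shell
# The shell exits as soon as its foreground command ("sleep 1")
# finishes, even though the background "sleep 10" child is still
# running -- just as a container stops with its foreground process
start=$(date +%s)
sh -c 'sleep 10 & sleep 1'
end=$(date +%s)
elapsed=$((end - start))
echo "shell exited after ${elapsed}s; background child was abandoned"
```

The parent returns after roughly one second, leaving the background child orphaned, which is the same problem a multi-service container faces.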
Wrapping Multiple Processes in One Entrypoint
Wrapper scripts are the simplest solution to the problem. You can write a script that starts all your processes and waits for them to finish. Setting the script as your Docker ENTRYPOINT will run it as the container’s foreground process, keeping the container running until one of the wrapped processes exits.
```bash
#!/bin/bash
/opt/first-process &
/opt/second-process &
wait -n
exit $?
```
This script starts the /opt/first-process and /opt/second-process binaries inside the container. The trailing & starts each one in the background, so the script continues without waiting for either process to exit. wait -n then suspends the script until one of the background processes terminates, and the script exits with the status code issued by the finished process.
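You can observe the wait -n semantics outside a container. This sketch (run explicitly under Bash, since wait -n requires Bash 4.3 or later) starts a short-lived job and a long-lived one, and shows that the first exit status wins:

```shell
# wait -n returns as soon as ANY background job exits,
# propagating that job's exit status -- here, the short-lived
# job's status 7 is seen long before the 5-second job finishes
status=$(bash -c '
  ( sleep 0.2; exit 7 ) &   # short-lived job, non-zero status
  sleep 5 &                 # long-lived job
  long=$!
  wait -n                   # returns after ~0.2s with status 7
  s=$?
  kill "$long"              # tidy up the remaining job
  exit "$s"
'; echo $?)
echo "first job to finish exited with status: $status"
```

This is exactly the mechanism the wrapper script relies on to stop the container as soon as either service dies.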
This model results in the container running both first-process and second-process until one of them exits. At that point, the container will stop, even though the other process may still be running.
To use this script, modify your Docker image’s ENTRYPOINT and CMD to make it the container’s foreground process:

```dockerfile
ENTRYPOINT ["/bin/bash"]
CMD ["./path/to/script.sh"]
```

Because the script relies on Bash’s wait -n, invoke it with bash rather than sh.
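Putting the pieces together, a complete Dockerfile for this pattern might look like the following sketch. The base image, script name, and binary paths are placeholders to adapt to your own project:

```dockerfile
FROM ubuntu:22.04

# Copy in the wrapper script and the two service binaries
# (names here are illustrative)
COPY ./script.sh /opt/script.sh
COPY ./first-process /opt/first-process
COPY ./second-process /opt/second-process
RUN chmod +x /opt/script.sh /opt/first-process /opt/second-process

# The wrapper becomes the container's foreground process
ENTRYPOINT ["/opt/script.sh"]
```

Running the script directly via ENTRYPOINT lets its bash shebang take effect, so no separate shell invocation is needed.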
The --init Container Option
One challenge with managing container processes is effectively cleaning up as they exit. Docker runs your foreground process as process ID 1, making it responsible for handling signals and reaping zombie processes. If your script doesn’t have these capabilities, you could end up with orphaned child processes persisting inside your container.
The docker run command has an --init flag that modifies the entrypoint to use tini as PID 1. This is a minimal init process implementation which runs your CMD, handles signal forwarding, and continually reaps zombies.
It’s worthwhile using --init if you expect to be spawning many processes and don’t want to manually handle the clean-up. Tini is a lightweight init flavor that’s designed for containers. It’s much smaller than fully fledged init systems.
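On the command line this is as simple as docker run --init my-image. If you manage containers with Docker Compose, the equivalent is the service-level init setting (the service and image names below are placeholders):

```yaml
services:
  app:
    image: my-image:latest
    # Wraps the container's entrypoint with an init process (tini)
    init: true
```

Either form gives you zombie reaping and signal forwarding without changing your image.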
Using a Dedicated Process Manager
Manual scripting quickly becomes sub-optimal when you’ve got a lot of processes to manage. Adopting a process manager is another way to run several services inside your Docker containers. The process manager becomes your ENTRYPOINT and has responsibility for starting, maintaining, and cleaning up after your worker processes.
There are several options available when implementing this approach. supervisord is a popular choice which is easily configured via an INI-style config file:
```ini
; Keep supervisord in the foreground so the container stays running
[supervisord]
nodaemon=true

[program:apache2]
command=/usr/sbin/apache2 -DFOREGROUND

[program:mysqld]
command=/usr/sbin/mysqld_safe
```
This file configures supervisord to start Apache and MySQL. To use it in a Docker container, add all the required packages to your image, then copy your supervisord config file to the correct location. Set supervisord as the image’s CMD to run it automatically when containers start:
```dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get install -y apache2 mysql-server supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
```
Since supervisord runs continually, it’s not possible to stop the container when one of your monitored processes exits. An alternative option is s6-overlay, which does have this capability. It uses a declarative service model where you place service scripts directly into /etc/services.d in your container’s filesystem:
```dockerfile
# Add s6-overlay to your image
ADD https://github.com/just-containers/s6-overlay/releases/download/v188.8.131.52/s6-overlay-noarch.tar.xz /tmp
RUN tar -C / -Jxpf /tmp/s6-overlay-noarch.tar.xz

# Register a service by dropping a run script into /etc/services.d
RUN mkdir -p /etc/services.d/first-service
RUN printf '#!/bin/sh\n/usr/sbin/apache2 -DFOREGROUND\n' > /etc/services.d/first-service/run
RUN chmod +x /etc/services.d/first-service/run

# Use s6-overlay as your image's entrypoint
ENTRYPOINT ["/init"]
```
You can add an executable finish script within your service directories to handle stopping the container. s6-overlay will automatically run these scripts when its process receives a TERM signal, such as when you run docker stop.
Finish scripts receive the exit code of their service as their first argument. The code is set to 256 when the service is killed due to an uncaught signal. The script needs to write the final exit code to /run/s6-linux-init-container-results/exitcode; s6-overlay reads this file and exits with the value within, causing that code to be used as your container’s stop code.
```sh
#!/bin/sh
echo "$1" > /run/s6-linux-init-container-results/exitcode
```
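Building on this, a finish script can translate the 256 signal-death code into the conventional 128 + signal form. The sketch below assumes s6 passes the signal number as the second argument to the finish script; the function name is illustrative:

```shell
# Map an s6 finish-script status to a container exit code.
# A first argument of 256 means the service died from an uncaught
# signal, with the signal number in the second argument; POSIX
# convention reports such deaths as 128 + signal number.
map_exit_code() {
    if [ "$1" -eq 256 ]; then
        echo $((128 + $2))
    else
        echo "$1"
    fi
}

map_exit_code 0        # clean exit        -> prints 0
map_exit_code 256 15   # killed by SIGTERM -> prints 143
```

In a real finish script you would write the mapped value into /run/s6-linux-init-container-results/exitcode rather than printing it.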
When Should You Run Multiple Processes In a Container?
This technique is best used with tightly coupled processes that you can’t separate to run as independent containers. You might have a program that relies on a background helper utility or a monolithic application that performs its own management of individual processes. The techniques shown above can help you containerize these types of software.
Running multiple processes in a container should still be avoided wherever possible. Sticking to a single foreground process maximizes isolation, prevents components interfering with each other, and improves your ability to debug and test specific pieces. You can scale components individually using container orchestrators, giving you the flexibility to run more instances of your most resource-intensive processes.
Containers usually have one foreground process and run for as long as it’s alive. This model aligns with containerization best practices and lets you glean the most benefits from the technology.
In some situations you might need multiple processes to run in a container. As all images ultimately have a single entrypoint, you must write a wrapper script or add a process manager that takes responsibility for starting your target binaries.
Process managers give you everything you need but bloat your images with extra packages and configuration. Wrapper scripts are simpler but may need to be paired with Docker’s
--init flag to prevent zombie process proliferation.