How to run multiple processes in a single Docker container with Supervisor
Docker is a powerful tool that can help package applications and their dependencies in a lightweight way while keeping them isolated. In this article, you will learn how to run multiple processes in a single container using Supervisor.
Create the service
We'll build a simple FastAPI service that sends email reports every 20 minutes and exposes an endpoint for viewing the email logs.
```python
from fastapi import FastAPI
from huey import crontab

from config import huey

app = FastAPI()


@huey.periodic_task(crontab(minute='*/20'))
def send_email_reports():
    """Cron task running every 20 minutes."""
    # fetch some data from the db
    # send an email report


@app.get("/logs")
def logs():
    """Display logs."""
    return {"emails_sent": [...]}
```
Notes
- `@huey.periodic_task` - registers our email-sending function as a periodic task. It uses Huey, a simple task queue that is a lightweight alternative to Celery.
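The `config` module that provides the `huey` instance isn't shown above; here is a minimal sketch, assuming Redis as the broker (the queue name and connection details are placeholders, not part of the original project):

```python
# config.py - minimal sketch; the Redis host/port are assumptions,
# point them at your own broker.
from huey import RedisHuey

huey = RedisHuey("email-reports", host="localhost", port=6379)
```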
Create the Dockerfile
```dockerfile
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.9

# install supervisor
RUN apt-get update && apt-get install -y supervisor

# copy our supervisord configuration
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

# copy app requirements and install them
COPY ./requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt

# copy the application code, including the scripts directory
COPY ./app /app

# make the scripts executable
RUN chmod +x /app/scripts/*.sh

# run the process manager
CMD ["/usr/bin/supervisord"]
```
Notes
This is the simple Dockerfile from the FastAPI docs, modified to install Supervisor and add our supervisord configuration and scripts.
- `RUN apt-get update && apt-get install -y supervisor` - installs Supervisor; the COPY line that follows adds our supervisord configuration to the container.
- `RUN chmod +x /app/scripts/*.sh` - makes the scripts executable.
- `CMD ["/usr/bin/supervisord"]` - runs the process manager.
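To try it out, build and run the image, passing the `PORT` variable that the supervisord configuration below expects (the image and container names here are placeholders):

```bash
docker build -t fastapi-supervisor .
docker run --name fastapi-supervisor -p 8000:8000 -e PORT=8000 fastapi-supervisor
```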
Configure the process manager
At start-up, we need to run two processes: one for the API service and another for the worker.
```ini
[supervisord]
nodaemon=true

[program:app]
directory=/app/
command=uvicorn app.main:app --host 0.0.0.0 --port %(ENV_PORT)s
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes=0

[program:worker]
directory=/app/
command=huey_consumer.py app.main.huey --workers=2
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes=0
```
Notes
- `nodaemon=true` - runs Supervisor as a foreground process and lets Docker manage its lifecycle.
- `command=uvicorn app.main:app --host 0.0.0.0 --port %(ENV_PORT)s` - starts the main app, listening on a port provided via a `PORT` environment variable. This is useful for environments like Google Cloud Run, which require that a container listens on a specific port provided at runtime.
- `command=huey_consumer.py app.main.huey --workers=2` - starts the task queue with two workers.
- `stdout_logfile=/dev/stdout` - redirects logs and errors to stdout.
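Once the container is up, you can inspect both programs with `supervisorctl`. This is a sketch, assuming the Debian package's default main configuration (which enables the control socket) and the container name used earlier:

```bash
docker exec -it fastapi-supervisor supervisorctl status
# example output:
# app        RUNNING   pid 7, uptime 0:02:12
# worker     RUNNING   pid 8, uptime 0:02:12
```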
Managing the lifecycle
When one of the services dies, the container should also exit. Let's update the configuration.
```ini
# If an application crashes during runtime
# we want the entire container to die.
[eventlistener:processes]
directory=/app/
command=./scripts/stop-supervisor.sh
events=PROCESS_STATE_STOPPED,PROCESS_STATE_EXITED,PROCESS_STATE_FATAL
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes=0
```
Notes
- `command=./scripts/stop-supervisor.sh` - adds a new process that listens for the events emitted when an app fails. The `stop-supervisor.sh` script is below.
```bash
#!/usr/bin/env bash
set -Eeo pipefail

# http://supervisord.org/events.html#event-listeners-and-event-notifications
printf "READY\n";
while read -r; do
  echo -e "\e[31m Service was stopped or one of its services crashed,
  see the logs above for more details. \e[0m" >&2
  kill -SIGTERM "$(cat supervisord.pid)"
done < /dev/stdin
```
Notes
- A short script that provides a more descriptive error message when the container dies.
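One detail worth checking: the script reads `supervisord.pid` relative to the listener's working directory (`/app/`), while packaged Supervisor installs often write the pidfile elsewhere (e.g. `/var/run/supervisord.pid`). If that's the case in your setup, you can pin the location explicitly; a sketch of the `[supervisord]` section:

```ini
[supervisord]
nodaemon=true
pidfile=/app/supervisord.pid
```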
Conclusion
In this post, you have learned how to run multiple processes in one Docker container using Supervisor. If you use this method, make sure you monitor the memory used by each process.
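For example, from the host (assuming the container name used earlier, and that `ps` is available in the image, which may require installing the `procps` package):

```bash
# overall container usage
docker stats fastapi-supervisor

# per-process memory inside the container
docker exec -it fastapi-supervisor ps aux
```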