The steps below will get you up and running with a local development environment. All of these commands assume you are in the root of your generated project.
If you're new to Docker, please be aware that some resources are cached system-wide and might reappear if you generate a project multiple times with the same name.
If you don't have Docker yet, follow the installation instructions.
For development without Docker, see this page.
The project includes a Makefile with a set of convenience commands to help you get started.
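To make the shortcuts concrete, here is a hypothetical excerpt showing how such targets typically map to plain docker-compose invocations (an illustrative sketch only; consult the generated Makefile for the real targets):

```makefile
# Illustrative sketch -- the generated Makefile defines the actual targets
build:
	docker-compose -f local.yml build

run:
	docker-compose -f local.yml up

verified_superuser:
	docker-compose -f local.yml run --rm django python manage.py create_verified_superuser
```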
This can take a while, especially the first time you run this particular command on your development system:
$ make build

# the command above is just a shortcut for
$ docker-compose -f local.yml build
Open a terminal at the project root and run the following for local development:

$ docker-compose -f local.yml up

# or use the shortcut
$ make run

This brings up Django, a worker node, PostgreSQL, Redis and Mailhog. The first time it is run it might take a while to get started, but subsequent runs will start quickly.
Go to http://localhost:8000 and you should see the default landing page!
The Vanty Starter Kit sign-up flow works out of the box: by default you can already register and log in a user.
However, for development it is necessary to create a superuser that will be able to access the Django admin and Control Panel.
We added a modified createsuperuser command that will also create a tenant for the new superuser.
# use the shortcut
$ make verified_superuser
The command above will prompt you for a password and will automatically create a superuser using the email that you specified when generating the project. To use another email, run the command manually like below.
$ docker-compose -f local.yml run --rm django python manage.py create_verified_superuser [email protected]
By default, Docker Compose will use a docker-compose.yml or compose.yml file in the root of your folder. You can change this by setting the COMPOSE_FILE environment variable to point to another file, for example local.yml, like this:
$ export COMPOSE_FILE=local.yml
You can now run the following and Docker Compose will use the correct file.
$ docker-compose up
To run in detached (background) mode, just:
$ docker-compose up -d
As with any shell command that we wish to run in our container, this is done using the docker-compose -f local.yml run --rm command:

$ docker-compose -f local.yml run --rm django python manage.py migrate
$ docker-compose -f local.yml run --rm django python manage.py createsuperuser
django is the target service we are executing the commands against.
The local environment brings up several containers that are essential for local development: cache, web app, database, worker and mail server. That is a lot, but Docker abstracts it away so that you don't have to manage each one manually.
We will include a few excerpts from your project's
local.yml and highlight the important bits.
Database service (excerpt from local.yml):

postgres:
  build:
    context: .
    dockerfile: ./compose/production/postgres/Dockerfile
  volumes:
    - local_postgres_data:/var/lib/postgresql/data
    - local_postgres_data_backups:/backups
  env_file:
    - ./.envs/.local/.postgres
Let's look at the env_file section listing ./.envs/.local/.postgres. Generally, the stack's behavior is governed by a number of environment variables (envs, for short) residing in .envs/. For instance, this is what is generated for you:
.envs
├── .local
│   ├── .django
│   └── .postgres
└── .production
    ├── .django
    └── .postgres
By convention, for any service sI in environment e (you know someenv is an environment when there is a someenv.yml file in the project root), given sI requires configuration, a .envs/.e/.sI service configuration file exists.
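The naming convention can be expressed as a tiny helper (a sketch for illustration; env_file_path is not part of the project):

```python
def env_file_path(environment: str, service: str) -> str:
    """Build the per-service env file path following the
    .envs/.<environment>/.<service> convention."""
    return f".envs/.{environment}/.{service}"

# The postgres service in the local environment reads:
print(env_file_path("local", "postgres"))  # → .envs/.local/.postgres
```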
Consider the aforementioned .envs/.local/.postgres:

# PostgreSQL
# ------------------------------------------------------------------------------
POSTGRES_HOST=postgres
POSTGRES_DB=<your project slug>
POSTGRES_USER=XgOWtQtJecsAbaIyslwGvFvPawftNaqO
POSTGRES_PASSWORD=jSljDz4whHuwO3aJIgVBrqEml5Ycbghorep4uVJ4xjDYQu0LfuTZdctj7y0YcCLu
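These files use plain KEY=VALUE lines. A minimal parser, for illustration only (inside the containers, Docker Compose injects these values as environment variables for you):

```python
def parse_env(text: str) -> dict:
    """Minimal parser for KEY=VALUE lines; blanks and # comments are skipped."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """# PostgreSQL
POSTGRES_HOST=postgres
POSTGRES_DB=myproject
"""
print(parse_env(sample)["POSTGRES_HOST"])  # → postgres
```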
The envs we are presented with here are POSTGRES_HOST, POSTGRES_DB, POSTGRES_USER and POSTGRES_PASSWORD (by the way, their values have also been generated for you). You might have figured out already where these definitions will end up; it's all the same with the django service container envs.
One final touch: should you ever need to merge .envs/.production/* into a single .env, run:

$ python merge_production_dotenvs_in_dotenv.py

A .env file will then be created, with all your production envs residing beside each other.
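Roughly, the merge amounts to concatenating the per-service files. A simplified sketch, not the shipped implementation (merge_dotenvs is a hypothetical name):

```python
from pathlib import Path

def merge_dotenvs(source_dir: Path, target: Path) -> None:
    """Concatenate every env file under source_dir into a single target file,
    keeping the variables from each file beside each other."""
    parts = [p.read_text() for p in sorted(source_dir.iterdir()) if p.is_file()]
    target.write_text("\n".join(parts))
```

For example, merge_dotenvs(Path(".envs/.production"), Path(".env")) would combine the production django and postgres envs into one file.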
Django and worker service
django:
  build:
    context: .
    dockerfile: ops/compose/local/django/Dockerfile
  image: advantchdemo_local_django
  restart: unless-stopped
  depends_on:
    - postgres
    - mailhog
  volumes:
    - .:/app:z
  env_file:
    - ./.envs/.local/.django
    - ./.envs/.local/.postgres
  ports:
    - "8000:8000"
  command: /start

worker:
  image: advantchdemo_local_django
  command: python manage.py run_huey -w3
  env_file:
    - ./.envs/.local/.django
    - ./.envs/.local/.postgres
  depends_on:
    - postgres
    - django
    - mailhog
  volumes:
    - .:/app:z
In addition to the web container, the compose file will also bring up a worker service for running async tasks. The worker container is essentially a copy of the django service, except that instead of running a server, it runs a worker process. If this consumes too many system resources, you can also use a Procfile-based approach to run all of this in one container.
When developing locally you can use MailHog for email testing, provided use_mailhog was set to y on setup. To proceed, make sure the mailhog container is up and running; the MailHog web interface will then be available at http://localhost:8025.