Dockerizing Django With Postgres, NGINX, and Gunicorn (PART-2)

In this series, we discuss Dockerizing your Django project for deployment.


If you haven't already, check out part 1 of this series: Dockerizing Django With Postgres, NGINX, and Gunicorn (PART-1)

In this tutorial, we'll configure Postgres for our Django application. So, let's get started.

Check out the Git repository for reference: Git Repository

Postgres

In order to set up Postgres, we will have to perform the following steps:

  • Add a new service to the docker-compose.yml file

  • Update the Django settings

  • Install the Psycopg2 package

Let's update the docker-compose.yml file:

version: '3.9'

services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - static_data:/app/static
    ports:
      - "8000:8000"
    restart: always
    env_file:
      - ./.env
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data:rw
    env_file:
      - .env
# or, set the environment variables directly:
    # environment:
    #    - POSTGRES_USER=${DB_USERNAME}
    #    - POSTGRES_PASSWORD=${DB_PASSWORD}
    #    - POSTGRES_DB=${DB_NAME}
volumes:
  static_data:
  postgres_data:

To ensure that the data is retained beyond the container's lifespan, we configure a named volume, postgres_data, mounted at the /var/lib/postgresql/data/ directory inside the container.
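A quick way to convince yourself the data survives (a sketch, assuming the default postgres superuser configured in the .env below):

$ docker-compose up -d                             # start web + db
$ docker-compose exec db psql -U postgres -c '\l'  # list databases inside the db container
$ docker-compose down                              # removes containers, keeps named volumes
$ docker-compose up -d                             # postgres_data, and your data, are still there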

To configure both services, update the .env file with additional environment variables. Note that DB_HOSTNAME must be the Compose service name (db), not localhost, since the web container reaches Postgres over the Compose network. The POSTGRES_* entries are read by the Postgres image itself, and DATABASE will be read by the entrypoint script we add later in this tutorial.

SECRET_KEY=
ALLOWED_HOSTS= localhost 127.0.0.1 [::1]
DEBUG=True
DATABASE=postgres

# Database
DB_NAME=testing
DB_USERNAME=postgres
DB_PASSWORD=36050
DB_HOSTNAME=db
DB_PORT=5432

# Read by the postgres image (required for the db container to initialize)
POSTGRES_USER=postgres
POSTGRES_PASSWORD=36050
POSTGRES_DB=testing

Update the DATABASES setting in your settings.py file with the following code:

# config() is assumed to come from python-decouple (see below)
from decouple import config

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': config('DB_NAME'),
        'USER': config('DB_USERNAME'),
        'PASSWORD': config('DB_PASSWORD'),
        'HOST': config('DB_HOSTNAME'),
        'PORT': config('DB_PORT', cast=int),
    }
}
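The config() helper above is assumed to come from python-decouple (the config('...', cast=int) signature matches its API), which reads values from the .env file. If you haven't installed it yet:

$ pip install python-decouple
$ pip freeze > requirements.txt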

Next, we will modify the Dockerfile to include the necessary packages for Psycopg2 installation.

# official base image
FROM python:3.10.9-alpine3.17

# set work directory
RUN mkdir /app
WORKDIR /app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install psycopg2 build dependencies
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev linux-headers

# install python dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt

# copy project
COPY . .

Make sure to install psycopg2 locally, create the database, and add psycopg2 to your requirements.txt file using pip freeze > requirements.txt.
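In commands, that looks something like the following. Locally, psycopg2-binary is the easier install; inside the image, the build dependencies added above let plain psycopg2 compile from source:

$ pip install psycopg2-binary
$ pip freeze > requirements.txt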

After that, build the new image and spin up the two services:

$ docker-compose up -d --build

Then run the migrations:

$ docker-compose exec web python manage.py migrate --noinput

You can check that the volume was created as well by running:

$ docker volume inspect django-on-docker_postgres_data

You should see something similar to:

[
    {
        "CreatedAt": "2021-08-23T15:49:08Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "django-on-docker",
            "com.docker.compose.version": "1.29.2",
            "com.docker.compose.volume": "postgres_data"
        },
        "Mountpoint": "/var/lib/docker/volumes/django-on-docker_postgres_data/_data",
        "Name": "django-on-docker_postgres_data",
        "Options": null,
        "Scope": "local"
    }
]

Afterward, create a new file named "entrypoint.sh" in the root directory of your project. It verifies that Postgres is up and accepting connections before the migrations run and the Django development server starts.

#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    while ! nc -z "$DB_HOSTNAME" "$DB_PORT"; do
      sleep 0.1
    done

    echo "PostgreSQL started"
fi
#python manage.py collectstatic --no-input
exec "$@"

Update the file permissions locally:

$ chmod +x entrypoint.sh

Then, update the Dockerfile to copy over the entrypoint.sh file. The command: from the compose file is passed to the entrypoint as "$@", so the final exec "$@" hands control over to the dev server:

# official base image
FROM python:3.10.9-alpine3.17

# set work directory
RUN mkdir /app
WORKDIR /app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install psycopg2 build dependencies
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev linux-headers

# install python dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt

# media and static directories
RUN mkdir -p /media
RUN mkdir -p /static

# copy entrypoint.sh (strip Windows line endings, make executable)
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh

# copy project
COPY . .

# run entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]

After that, test it out (the exact commands are sketched after the list):

  1. Re-build the images

  2. Run the containers

  3. Try http://localhost:8000/
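In commands, that's:

$ docker-compose down
$ docker-compose up -d --build
$ curl http://localhost:8000/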

Gunicorn

To prepare for production environments, we will add Gunicorn, a production-grade WSGI server, to the requirements file. So, first of all, install Gunicorn and add it to the requirements.txt file.
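For example:

$ pip install gunicorn
$ pip freeze > requirements.txt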

To keep using Django's built-in server for development, create a fresh compose file named docker-compose.prod.yml solely for production purposes.

version: '3.9'

services:
  web:
    build: .
    command: gunicorn personal.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_data:/app/static
    ports:
      - "8000:8000"
    restart: always
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data:rw
    env_file:
      - .env.prod
volumes:
  static_data:
  postgres_data:

Here, we're running Gunicorn rather than the Django development server.
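Gunicorn starts a single worker process by default, which is rarely what you want in production. A sketch of a tuned command (the worker count here is an assumption; a common rule of thumb is 2 × CPU cores + 1):

    command: gunicorn personal.wsgi:application --bind 0.0.0.0:8000 --workers 3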

Now let's create the .env.prod file for the environment variables. Note that DEBUG should be off in production:

SECRET_KEY=
ALLOWED_HOSTS= localhost 127.0.0.1 [::1]
DEBUG=False
DATABASE=postgres

# Database
DB_NAME=testing
DB_USERNAME=postgres
DB_PASSWORD=36050
DB_HOSTNAME=db
DB_PORT=5432

# Read by the postgres image
POSTGRES_USER=postgres
POSTGRES_PASSWORD=36050
POSTGRES_DB=testing

Bring down the development containers (the -v flag also removes the associated volumes, including the database data):

$ docker-compose down -v

Then, build the production images and spin up the containers:

$ docker-compose -f docker-compose.prod.yml up -d --build

Now, we need to create a production Dockerfile (Dockerfile.prod) and a production entrypoint script (entrypoint.prod.sh), both in the project root. The only change in entrypoint.prod.sh is that collectstatic now runs on startup:

#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    while ! nc -z "$DB_HOSTNAME" "$DB_PORT"; do
      sleep 0.1
    done

    echo "PostgreSQL started"
fi
python manage.py collectstatic --no-input
exec "$@"
# official base image
FROM python:3.10.9-alpine3.17

#set work directory
RUN mkdir /app
WORKDIR /app

#set environment variable
ENV PYTHONDONTWRITEBYCODE 1
ENV PYTHONUNBUFFERED 1

#install pyscopg2 dependencies
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev linux-headers

or you can use
#install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt

#media files
RUN mkdir -p /media
RUN mkdir -p /static

# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh

# copy project
COPY . .

# run entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]

Now, update the web service in docker-compose.prod.yml to point at the production Dockerfile:

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    command: gunicorn personal.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_data:/app/static
    expose:
      - 8000
    restart: always
    env_file:
      - ./.env.prod
    depends_on:
      - db

Try it out:

$ docker-compose -f docker-compose.prod.yml down -v
$ docker-compose -f docker-compose.prod.yml up -d --build
$ docker-compose -f docker-compose.prod.yml exec web python manage.py migrate --noinput

Nginx

In terms of flexibility, Nginx offers an unparalleled degree of control. By configuring it as a reverse proxy for Gunicorn, you can achieve almost anything. To accomplish this, add the Nginx service to the production docker-compose file.

version: '3.9'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    command: gunicorn personal.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_data:/app/static
      - media_data:/app/media
    expose:
      - 8000
    restart: always
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data:rw
    env_file:
      - .env.prod
  nginx:
    build: ./nginx
    volumes:
      - static_data:/app/static
      - media_data:/app/media
    ports:
      - "8008:80"
    depends_on:
      - web
volumes:
  static_data:
  media_data:
  postgres_data:

Create the following files and folders:

└── nginx
    ├── Dockerfile
    ├── nginx.conf
    └── uwsgi_params

Add this code to the nginx/Dockerfile:

FROM nginx:1.21-alpine

RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
COPY uwsgi_params /etc/nginx/uwsgi_params

Create uwsgi_params and add the following. These are the standard parameters Nginx would pass to a uWSGI backend; since Gunicorn is proxied over plain HTTP below, this file is only needed if you later swap in uWSGI:
uwsgi_param  QUERY_STRING       $query_string;
uwsgi_param  REQUEST_METHOD     $request_method;
uwsgi_param  CONTENT_TYPE       $content_type;
uwsgi_param  CONTENT_LENGTH     $content_length;

uwsgi_param  REQUEST_URI        $request_uri;
uwsgi_param  PATH_INFO          $document_uri;
uwsgi_param  DOCUMENT_ROOT      $document_root;
uwsgi_param  SERVER_PROTOCOL    $server_protocol;
uwsgi_param  REQUEST_SCHEME     $scheme;
uwsgi_param  HTTPS              $https if_not_empty;

uwsgi_param  REMOTE_ADDR        $remote_addr;
uwsgi_param  REMOTE_PORT        $remote_port;
uwsgi_param  SERVER_PORT        $server_port;
uwsgi_param  SERVER_NAME        $server_name;

Create nginx.conf and add the following. Because Gunicorn speaks plain HTTP rather than the uwsgi protocol, we proxy to the django_project upstream with proxy_pass (uwsgi_pass only works with a uwsgi-protocol backend), and the alias paths match the volumes mounted into the Nginx container:

upstream django_project {
    server web:8000;
}

server {
    listen 80;

    location /static/ {
        alias /app/static/;
    }

    location /media/ {
        alias /app/media/;
    }

    location / {
        proxy_pass http://django_project;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
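Once the production stack is up (see the commands below), you can sanity-check this config from inside the Nginx container:

$ docker-compose -f docker-compose.prod.yml exec nginx nginx -t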

For the static and media files, add the following to settings.py. The roots need to match the container paths mounted above (/app/static and /app/media; BASE_DIR resolves to /app inside the image):

# for static files
STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "static"

# for media files
MEDIA_URL = "/media/"
MEDIA_ROOT = BASE_DIR / "media"
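The production entrypoint already runs collectstatic at startup, so Nginx finds the collected files in the shared static_data volume. If you change assets while the stack is running, you can re-run it by hand:

$ docker-compose -f docker-compose.prod.yml exec web python manage.py collectstatic --no-input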

One last time, re-build, run, and try it out:

$ docker-compose -f docker-compose.prod.yml down -v
$ docker-compose -f docker-compose.prod.yml up -d --build
$ docker-compose -f docker-compose.prod.yml exec web python manage.py migrate --noinput
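Nginx is published on host port 8008, so the application now answers there instead of on port 8000:

$ curl http://localhost:8008/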

Summary:

So here's the end. We walked through each step to containerize a Django web application with Postgres for development, then built a production-oriented Docker Compose setup that adds Gunicorn and Nginx to serve the app and its static and media files. This enables local testing of a production setup.

Thank you so much, Bye.