PostgreSQL, also referred to as Postgres, is a leading object-relational database system. It’s popular because of its strong compliance with the SQL standard and its additional features that simplify working with complex datasets at scale.
PostgreSQL uses a traditional client-server architecture, so you need to run it independently of your application’s code. In this guide, you’ll deploy a PostgreSQL server instance as a Docker container. This avoids adding packages to your host machine and helps to isolate your database from the other parts of your stack. Make sure you’ve got Docker installed before you continue.
PostgreSQL has an official image on Docker Hub which is available in several different variants. Tags let you select between major PostgreSQL versions from v9 to v14 and choose the operating system used as the base image. Alpine, Debian Stretch, and Debian Bullseye are offered.
For the purposes of this tutorial, we’ll use the postgres:14 tag, which provides PostgreSQL 14 atop Debian Bullseye. You’re free to select a different version to suit your requirements.
Start a PostgreSQL container using the docker run command:
docker run -d --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=<password> -v postgres:/var/lib/postgresql/data postgres:14
You must supply a value for the POSTGRES_PASSWORD environment variable. This defines the password assigned to Postgres’ default superuser account. The username defaults to postgres but can be changed by setting the POSTGRES_USER environment variable.
The -v flag mounts a Docker volume to the PostgreSQL container’s data directory. A named volume called postgres is referenced; Docker will either create it or reattach it if it already exists. You should use a volume to store your database outside the container. Without one, you’ll lose your data when the container is removed.
PostgreSQL listens on port 5432 by default. The container port is bound to port 5432 on your Docker host by the -p flag. The -d flag starts the container in detached mode, effectively making it a background service that keeps running until stopped with docker stop.
Supplying the Password as a File
If you’re uncomfortable about supplying your superuser password as a plain-text CLI flag, you can inject it as a file via a volume instead. You should then set the POSTGRES_PASSWORD_FILE environment variable to give Postgres the path to that file:
docker run -d --name postgres -p 5432:5432 -e POSTGRES_PASSWORD_FILE=/run/secrets/postgres-password -v "$PWD/postgres-password.txt":/run/secrets/postgres-password -v postgres:/var/lib/postgresql/data postgres:14
This technique also works for POSTGRES_USER and other supported environment variables.
Connecting to Your Database
As PostgreSQL was bound to port 5432 above, you can connect to your database on localhost:5432 from any compatible client. Use the credentials you assigned as environment variables when starting the container.
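For example, a standard PostgreSQL connection URI for the published port looks like the following. The URI shown is a sketch assuming the container started above is running; replace <password> with the POSTGRES_PASSWORD value you chose, and note that most PostgreSQL clients accept this URI format:

```shell
# Connection URI for the containerized server published on localhost:5432.
# Replace <password> with the POSTGRES_PASSWORD you set when starting the container.
PGURI="postgresql://postgres:<password>@localhost:5432/postgres"
echo "$PGURI"

# With the psql client installed on your host, you could then connect with:
# psql "$PGURI"
```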
The Docker image also includes the psql binary, which you can invoke with docker exec. Use this to quickly interact with your database from a PostgreSQL shell inside the container:
docker exec -it postgres psql -U postgres
Connecting From Other Docker Containers
Creating a Docker network is the preferred way to access PostgreSQL from other containers on the same host. This avoids binding the Postgres server’s port and potentially exposing the service to your host’s wider network.
Create a Docker network:
docker network create my-app
Start your Postgres container with a connection to the network by using the --network flag with docker run:
docker run -d --name postgres --network my-app -e POSTGRES_PASSWORD=<password> -v postgres:/var/lib/postgresql/data postgres:14
Now join your application container to the same network:
docker run -d --name api --network my-app my-api:latest
The containers in the network can reach Postgres using the postgres hostname, as this is the name assigned to the Postgres container. Use port 5432 to complete the connection.
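As a sketch, your application might receive those in-network connection details through an environment variable. DATABASE_URL is an assumed variable name here, not something the Postgres image provides; use whatever name your application actually reads:

```shell
# The hostname "postgres" resolves to the database container on the my-app network.
# Replace <password> with the POSTGRES_PASSWORD you set on the database container.
DATABASE_URL="postgresql://postgres:<password>@postgres:5432/postgres"
echo "$DATABASE_URL"

# You could then pass it to the application container at startup:
# docker run -d --name api --network my-app -e DATABASE_URL="$DATABASE_URL" my-api:latest
```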
Changing PostgreSQL’s Configuration
You can pass PostgreSQL server options using -c flags after the image name in your docker run command:
docker run -d --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=<password> -v postgres:/var/lib/postgresql/data postgres:14 -c max_connections=100
Everything after the image name is passed to the command started in the container, which is the PostgreSQL server binary in the case of the Postgres image.
You can use a custom config file when you’re setting the values of several options. You’ll need to use another Docker volume to mount your file into the container, then supply a single -c flag to instruct Postgres where to look:
docker run -d --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=<password> -v "$PWD/postgres.conf":/etc/postgresql/postgresql.conf -v postgres:/var/lib/postgresql/data postgres:14 -c config_file=/etc/postgresql/postgresql.conf
This example uses a Docker bind mount to place the postgres.conf file from your working directory into the container’s /etc/postgresql directory. For a reference of the options you can set with -c flags or config file directives, refer to the PostgreSQL documentation.
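As a minimal illustration, here’s how you might create that postgres.conf before mounting it. The directives shown are standard PostgreSQL settings chosen as examples, not tuning recommendations:

```shell
# Write a small PostgreSQL config file into the working directory.
# These are illustrative values; adjust them for your workload.
cat > postgres.conf <<'EOF'
listen_addresses = '*'
max_connections = 100
shared_buffers = 256MB
EOF

cat postgres.conf
```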
Seeding the Database
The Docker image supports seed files placed into the /docker-entrypoint-initdb.d directory. Any .sql, .sql.gz, and .sql.xz files will be executed to initialize the database. This occurs after the default user account and postgres database have been created. You can also add .sh files to run arbitrary shell scripts. All scripts are executed in alphabetical order.
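For instance, a seed directory might contain numbered scripts so the schema is created before data is inserted. The table and values below are purely illustrative:

```shell
# Create example seed files; numeric prefixes control alphabetical execution order.
mkdir -p db-seed-files

# Schema runs first.
cat > db-seed-files/01-schema.sql <<'EOF'
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username TEXT NOT NULL UNIQUE
);
EOF

# Data runs second.
cat > db-seed-files/02-data.sql <<'EOF'
INSERT INTO users (username) VALUES ('demo');
EOF

ls db-seed-files
```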
This mechanism means all you need to seed your database is a set of SQL or shell scripts named in the correct sequential order. Mount these into your new container using a -v flag with docker run:
docker run -d --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=<password> -v "$PWD/db-seed-files":/docker-entrypoint-initdb.d -v postgres:/var/lib/postgresql/data postgres:14
The initialization scripts will only be used when the Postgres data directory is empty. For practical purposes, that means they’ll run the first time the container starts with a new empty volume attached.
Creating a Custom Database Image
You could choose to encapsulate your config file and initialization scripts in your own Docker image. This would let anyone with access to the image spin up a new PostgreSQL instance that’s preconfigured for your application. Here’s a simple Dockerfile which you could use:
FROM postgres:14
COPY postgres.conf /etc/postgresql/postgresql.conf
COPY db-seed-files/ /docker-entrypoint-initdb.d/
CMD ["-c", "config_file=/etc/postgresql/postgresql.conf"]
Build your custom image:
docker build -t custom-postgres:latest .
The build instructions in the Dockerfile will copy the PostgreSQL config file and initialization scripts from your working directory and embed them into the container image. Now you can start a database container without manually supplying the resources:
docker run -d --name custom-postgres -p 5432:5432 -e POSTGRES_PASSWORD=<password> -v postgres:/var/lib/postgresql/data custom-postgres:latest
Should You Containerize Your Production Database?
It can be difficult to decide whether to run a database in Docker. Containerizing PostgreSQL makes for an easier setup experience but can be more challenging to maintain. You need to take care when managing your container to avoid data loss. Docker also adds a modest performance overhead, which is worth considering if you anticipate your database will be working with very large data volumes.
Docker’s benefits are increased portability, ease of scaling, and developer efficiency. Containerizing your database lets anyone spin up a fresh instance using Docker, without manually installing and configuring PostgreSQL first. Writing a Dockerfile for your PostgreSQL database that adds your config file and SQL seed scripts is therefore a good way to help developers rapidly start new environments.
PostgreSQL is an advanced SQL-based database engine that adds object-relational capabilities. While you may choose to run a traditional deployment in production, using a containerized instance simplifies set up and helps developers quickly spin up their own infrastructure.
The most critical aspect of a Dockerized deployment is to ensure you’re using a volume to store your data. This will allow you to stop, replace, and update your container to a later image version without losing your database. Beyond storage you should assess how you’re going to connect to Postgres and avoid binding ports to your host unless necessary. When connecting from another container, it’s best to use a shared Docker network to facilitate access.
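As a sketch of the volume-backed workflow described above, the following helper function (an assumed convenience wrapper, not part of Docker) replaces the container with a fresh image while reattaching the same named volume, so the data survives:

```shell
# Hypothetical helper: replace the Postgres container without losing data.
# The named volume "postgres" survives container removal and is reattached
# by the new container. Requires a running Docker daemon when invoked.
upgrade_postgres() {
    docker stop postgres
    docker rm postgres
    docker pull postgres:14
    docker run -d --name postgres \
        -p 5432:5432 \
        -e POSTGRES_PASSWORD="$1" \
        -v postgres:/var/lib/postgresql/data \
        postgres:14
}

# Usage:
# upgrade_postgres "your-password"
```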