Create a High-Availability PostgreSQL Cluster with Patroni, Vultr Load Balancer, and Object Storage
Introduction
PostgreSQL is a powerful, open-source relational database system. Patroni is a tool to create, manage, and monitor a high-availability PostgreSQL cluster using streaming replication and automated failover. etcd is a distributed key-value store that Patroni uses to store the cluster configuration and state.
This article uses Patroni and etcd to automate and simplify the deployment and management of PostgreSQL clusters. In a few minutes, you can have a production-grade PostgreSQL cluster with high availability and disaster recovery.
Prerequisites
Before you begin, make sure you:
- Deploy a Vultr Object Storage instance.
- Deploy a Vultr Load Balancer.
Server Configuration
This section describes how to set up a cluster of two Patroni instances and three etcd instances that store the configuration. The Patroni setup contains one PostgreSQL primary and one replica. The etcd cluster contains three instances to enable high availability; it can tolerate the failure of at most one instance.
This article deploys five servers with 1 GB of memory each, but production deployments require servers with more memory. These servers should be independent of each other. For example, they can be in different locations, or in a single Vultr Virtual Private Cloud (VPC).
Create the Vultr Server Instances
- Log in to your Vultr account.
- Deploy 5 Vultr Server Instances in any location.
- Label the server instances as etcd-1, etcd-2, etcd-3, patroni-1, and patroni-2.
- Use SSH to access each of the servers as a non-root sudo user.
This article uses the following example names and addresses:
- etcd-1: etcd-server-1-ip
- etcd-2: etcd-server-2-ip
- etcd-3: etcd-server-3-ip
- patroni-1: patroni-server-1-ip
- patroni-2: patroni-server-2-ip
Configure the etcd Cluster
This section describes how to set up an etcd cluster on the etcd-1, etcd-2, and etcd-3 servers.
Install etcd
Run the following commands to download and install the etcd binaries. Visit the etcd GitHub Repository for the latest version.
$ ETCD_VER=v3.5.7
$ DOWNLOAD_URL=https://github.com/etcd-io/etcd/releases/download
$ curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
$ tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp
$ sudo mv /tmp/etcd-${ETCD_VER}-linux-amd64/etcd /usr/bin
$ sudo mv /tmp/etcd-${ETCD_VER}-linux-amd64/etcdctl /usr/bin
$ sudo mv /tmp/etcd-${ETCD_VER}-linux-amd64/etcdutl /usr/bin
$ sudo chown root:root /usr/bin/etcd
$ sudo chown root:root /usr/bin/etcdctl
$ sudo chown root:root /usr/bin/etcdutl
$ rm -rf /tmp/etcd-${ETCD_VER}-linux-amd64*
Verify the installed version.
$ etcd --version
Set Up the etcd Service
Perform the following steps to set up the etcd service on each etcd server.
Create a new service user and prepare the directory permissions as shown below.
$ sudo useradd --system --no-create-home etcd
$ sudo mkdir -p /etc/etcd /var/lib/etcd
$ sudo chown etcd:etcd /etc/etcd
$ sudo chown etcd:etcd /var/lib/etcd
Create the configuration file /etc/etcd/etcd.conf.yml with the following configurations. The enable-v2 option is required because the etcd integration Patroni uses in this article speaks the v2 client API.
name: YOUR_INSTANCE_NAME
data-dir: /var/lib/etcd
initial-cluster-state: 'new'
initial-cluster-token: 'etcd-cluster'
initial-cluster: YOUR_INITIAL_CLUSTER
initial-advertise-peer-urls: http://YOUR_INSTANCE_ADDRESS:2380
advertise-client-urls: http://YOUR_INSTANCE_ADDRESS:2379
listen-peer-urls: 'http://YOUR_INSTANCE_ADDRESS:2380,http://127.0.0.1:2380'
listen-client-urls: 'http://YOUR_INSTANCE_ADDRESS:2379,http://127.0.0.1:2379'
enable-v2: true
Open the file /etc/etcd/etcd.conf.yml with a text editor such as Vim.
$ sudo vim /etc/etcd/etcd.conf.yml
Replace YOUR_INITIAL_CLUSTER with the following line. Replace etcd-server-1-ip, etcd-server-2-ip, and etcd-server-3-ip with the corresponding IP addresses of your etcd servers. Each member name in this list must match that instance's name value, or etcd refuses to start.
etcd-1=http://etcd-server-1-ip:2380,etcd-2=http://etcd-server-2-ip:2380,etcd-3=http://etcd-server-3-ip:2380
Replace YOUR_INSTANCE_NAME and YOUR_INSTANCE_ADDRESS with your etcd server name and IP address. This article uses the following example YOUR_INSTANCE_NAME and YOUR_INSTANCE_ADDRESS values for each server:
- etcd-1: instance name is etcd-1 and instance address is etcd-server-1-ip
- etcd-2: instance name is etcd-2 and instance address is etcd-server-2-ip
- etcd-3: instance name is etcd-3 and instance address is etcd-server-3-ip
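For reference, a completed /etc/etcd/etcd.conf.yml on the etcd-1 server would look like the following, with etcd-server-1-ip and the other placeholder addresses standing in for your real IP addresses:
name: etcd-1
data-dir: /var/lib/etcd
initial-cluster-state: 'new'
initial-cluster-token: 'etcd-cluster'
initial-cluster: etcd-1=http://etcd-server-1-ip:2380,etcd-2=http://etcd-server-2-ip:2380,etcd-3=http://etcd-server-3-ip:2380
initial-advertise-peer-urls: http://etcd-server-1-ip:2380
advertise-client-urls: http://etcd-server-1-ip:2379
listen-peer-urls: 'http://etcd-server-1-ip:2380,http://127.0.0.1:2380'
listen-client-urls: 'http://etcd-server-1-ip:2379,http://127.0.0.1:2379'
enable-v2: true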
Create the /usr/lib/systemd/system/etcd.service file with the following content.
[Unit]
Description="etcd service"
After=network.target

[Service]
LimitNOFILE=65536
Restart=always
Type=notify
ExecStart=/usr/bin/etcd --config-file /etc/etcd/etcd.conf.yml
User=etcd
Group=etcd

[Install]
WantedBy=multi-user.target
Allow ports 2379 and 2380 through the firewall.
$ sudo ufw allow 2379
$ sudo ufw allow 2380
Reload systemd, then enable and start the etcd service.
$ sudo systemctl daemon-reload
$ sudo systemctl enable etcd.service
$ sudo systemctl start etcd.service
Check the etcd Service
Check the etcd service logs to make sure there are no errors.
$ sudo journalctl -u etcd.service -f
Get the list of members in the etcd cluster.
$ etcdctl member list -w table
Save a test key-value pair to the etcd cluster.
$ etcdctl put greeting "Hello World!"
Get the test key value from any etcd node.
$ etcdctl get greeting
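Because every etcd member serves the same data, you can also confirm replication by reading the key through another member's client endpoint. For example, query etcd-2 directly, using the placeholder address from earlier:
$ etcdctl --endpoints=http://etcd-server-2-ip:2379 get greeting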
Configure the Patroni Cluster
This section shows how to set up a Patroni cluster on the patroni-1 and patroni-2 servers.
Install PostgreSQL
Install PostgreSQL on both Patroni servers.
$ sudo apt install -y postgresql postgresql-contrib
Stop the PostgreSQL service.
$ sudo systemctl stop postgresql
Disable the PostgreSQL service.
$ sudo systemctl disable postgresql
Install Patroni
Install Patroni and other dependencies on both Patroni servers.
$ sudo apt install -y python3-pip python3-dev libpq-dev
$ sudo pip3 install --upgrade pip
$ sudo pip3 install patroni python-etcd psycopg2
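Confirm that the patroni binary is installed and on the system path.
$ patroni --version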
Create the configuration file /etc/patroni.yml with the following configurations.
scope: patroni_cluster
name: YOUR_INSTANCE_NAME
namespace: /service

restapi:
  listen: 'YOUR_INSTANCE_ADDRESS:8008'
  connect_address: 'YOUR_INSTANCE_ADDRESS:8008'

etcd:
  hosts: YOUR_ETCD_HOSTS

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        hot_standby: 'on'
        wal_keep_segments: 20
        max_wal_senders: 8
        max_replication_slots: 8
    slots:
      patroni_standby_leader:
        type: physical
  initdb:
    - encoding: UTF8
    - data-checksums
  pg_hba:
    - host replication replicator 0.0.0.0/0 md5
    - host all all 0.0.0.0/0 md5
  users:
    admin:
      password: admin%
      options:
        - createrole
        - createdb

postgresql:
  listen: 'YOUR_INSTANCE_ADDRESS:5432'
  connect_address: 'YOUR_INSTANCE_ADDRESS:5432'
  data_dir: /var/lib/postgresql/data
  pgpass: /tmp/pgpass0
  authentication:
    replication:
      username: replicator
      password: replicate%
    superuser:
      username: postgres
      password: postgres%
    rewind:
      username: rewind_user
      password: rewind_password%

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
Open the file /etc/patroni.yml.
$ sudo vim /etc/patroni.yml
Replace YOUR_ETCD_HOSTS with the following line. Replace etcd-server-1-ip, etcd-server-2-ip, and etcd-server-3-ip with the corresponding IP addresses of your etcd servers.
etcd-server-1-ip:2379,etcd-server-2-ip:2379,etcd-server-3-ip:2379
Replace YOUR_INSTANCE_NAME and YOUR_INSTANCE_ADDRESS with your Patroni server name and IP address. This article uses the following example YOUR_INSTANCE_NAME and YOUR_INSTANCE_ADDRESS values for each server:
- patroni-1: instance name is patroni-1 and instance address is patroni-server-1-ip
- patroni-2: instance name is patroni-2 and instance address is patroni-server-2-ip
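For reference, the top of /etc/patroni.yml on the patroni-1 server would then begin like this, with the placeholder addresses from earlier standing in for your real IP addresses:
scope: patroni_cluster
name: patroni-1
namespace: /service

restapi:
  listen: 'patroni-server-1-ip:8008'
  connect_address: 'patroni-server-1-ip:8008'

etcd:
  hosts: etcd-server-1-ip:2379,etcd-server-2-ip:2379,etcd-server-3-ip:2379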
Create the /usr/lib/systemd/system/patroni.service file with the following contents.
[Unit]
Description="patroni service"
After=syslog.target network.target

[Service]
Restart=no
Type=simple
ExecStart=/usr/local/bin/patroni /etc/patroni.yml
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=process
TimeoutSec=30
User=postgres
Group=postgres

[Install]
WantedBy=multi-user.target
Allow ports 5432 and 8008 through the firewall.
$ sudo ufw allow 5432
$ sudo ufw allow 8008
Reload systemd, then enable and start the Patroni service.
$ sudo systemctl daemon-reload
$ sudo systemctl enable patroni.service
$ sudo systemctl start patroni.service
Check the Patroni Service
Check Patroni service logs to make sure there are no errors.
$ sudo journalctl -u patroni.service -f
Get the list of members in the Patroni cluster.
$ sudo -u postgres patronictl -c /etc/patroni.yml list
On the Patroni primary node, connect to PostgreSQL with the psql command. Replace patroni-server-1-ip with the primary's address, and enter the postgres password from /etc/patroni.yml when prompted.
$ psql -h patroni-server-1-ip -U postgres
Create a table to test the data replication.
create table mydata(key text, value text);
insert into mydata values('foo', 'bar');
On the Patroni replica node, query the data from the new table.
$ psql -h patroni-server-2-ip -U postgres -c "select * from mydata;"
Deploy Vultr Load Balancer
This section shows how to deploy a Vultr Load Balancer that forwards traffic to the PostgreSQL primary. The Load Balancer performs health checks against the Patroni REST API on each Patroni instance to determine which instance is the PostgreSQL primary.
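You can observe this health check behavior directly: Patroni's REST API answers GET / with HTTP 200 only on the instance that currently holds the leader lock, and with a non-200 status (typically 503) on replicas. For example, using the placeholder addresses from earlier:
$ curl -s -o /dev/null -w "%{http_code}\n" http://patroni-server-1-ip:8008/
$ curl -s -o /dev/null -w "%{http_code}\n" http://patroni-server-2-ip:8008/
Only the current primary returns 200, which is why the Load Balancer forwards traffic to it alone.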
Create Vultr Load Balancer
- Log in to your Customer Portal and deploy a new Vultr Load Balancer.
- Label the Load Balancer Configuration as patroni-primary.
- In Forwarding Rules, set the following configuration:
  - Load Balancer: Protocol to TCP and Port to 5432
  - Instance: Protocol to TCP and Port to 5432
- In Health Checks, set the following configuration:
  - Protocol: HTTP
  - Port: 8008
  - Interval: 3
  - HTTP Path: /
- In Firewall Rules, click Add firewall rule and set the following configuration:
  - Port: 5432
  - IP type: IPv4
  - Source: 0.0.0.0/0
- Go to your Customer Portal and open your Load Balancer.
- Click Add instance and select your Patroni instances.
Connect to the PostgreSQL Cluster using the Load Balancer
Connect to PostgreSQL using psql. Replace <LOAD_BALANCER_IP> with your Load Balancer IP. Enter the postgres password from the /etc/patroni.yml file created in the previous steps.
$ psql -h <LOAD_BALANCER_IP> -U postgres
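To confirm that the Load Balancer routed you to the primary, query the built-in pg_is_in_recovery() function, which returns f (false) on a primary and t (true) on a replica:
$ psql -h <LOAD_BALANCER_IP> -U postgres -c "select pg_is_in_recovery();"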
Prepare a Vultr Object Storage
This section describes how to deploy a Vultr Object Storage to store the Write-Ahead Logging (WAL) files and daily backups of your PostgreSQL cluster.
- Create a Vultr Object Storage.
- Create a bucket patroni-demo-bucket inside the Object Storage.
- Get your Vultr Object Storage Hostname, Access Key, and Secret Key.
Backup with pgBackRest
This section describes how to set up a backup strategy with pgBackRest in the Patroni cluster on the patroni-1 and patroni-2 servers.
Install pgBackRest
Install pgBackRest on both Patroni servers.
$ sudo apt install -y pgbackrest
Prepare a directory for pgBackRest.
$ sudo mkdir -p /var/lib/pgbackrest
$ sudo chmod 0750 /var/lib/pgbackrest
$ sudo chown -R postgres:postgres /var/lib/pgbackrest
$ sudo chown -R postgres:postgres /var/log/pgbackrest
Create a new file /etc/pgbackrest.conf with the following contents.
[demo-cluster]
pg1-path=/var/lib/postgresql/data
pg1-port=5432
pg1-user=postgres

[global]
start-fast=y
process-max=4
archive-timeout=300
repo1-type=s3
repo1-s3-uri-style=path
repo1-path=/pgbackrest/patroni-demo
repo1-retention-full=4
repo1-s3-bucket=<REPO_S3_BUCKET_NAME>
repo1-s3-endpoint=<REPO_S3_ENDPOINT>
repo1-s3-key=<REPO_S3_ACCESS_KEY>
repo1-s3-key-secret=<REPO_S3_KEY_SECRET>
repo1-s3-region=default

[global:archive-push]
compress-level=3
Replace <REPO_S3_ENDPOINT> with your Vultr Object Storage hostname, <REPO_S3_BUCKET_NAME> with the bucket name, and <REPO_S3_ACCESS_KEY> and <REPO_S3_KEY_SECRET> with your Vultr Object Storage Access Key and Secret Key, respectively. Save and close the file.
Edit the /etc/patroni.yml file and add create_replica_methods, pgbackrest, and basebackup under the postgresql section as shown below.
postgresql:
  create_replica_methods:
    - pgbackrest
    - basebackup
  pgbackrest:
    command: pgbackrest --stanza=demo-cluster restore --type=none
    keep_data: True
    no_params: True
  basebackup:
    checkpoint: 'fast'
Reload the Patroni service.
$ sudo systemctl reload patroni
On the Patroni primary server, run the following command to edit the Patroni configuration.
$ sudo -u postgres patronictl -c /etc/patroni.yml edit-config
Add archive_command, archive_mode, and recovery_conf under the postgresql section as shown below.
postgresql:
  parameters:
    hot_standby: 'on'
    max_replication_slots: 8
    max_wal_senders: 8
    wal_keep_segments: 20
    archive_command: pgbackrest --stanza=demo-cluster archive-push "%p"
    archive_mode: 'on'
  recovery_conf:
    recovery_target_timeline: latest
    restore_command: pgbackrest --stanza=demo-cluster archive-get %f "%p"
  use_pg_rewind: false
  use_slots: true
retry_timeout: 10
ttl: 30
Reload the Patroni cluster.
$ sudo -u postgres patronictl -c /etc/patroni.yml reload patroni_cluster
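To verify that the new settings reached the distributed configuration store, print the cluster-wide configuration with patronictl's show-config subcommand:
$ sudo -u postgres patronictl -c /etc/patroni.yml show-config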
Perform a Manual Backup
Get the list of members of the Patroni cluster.
$ sudo -u postgres patronictl -c /etc/patroni.yml list
Connect to the Patroni Leader server over SSH.
$ ssh user@patroni-server-ip
Create the pgBackRest stanza.
$ sudo -u postgres pgbackrest --stanza=demo-cluster stanza-create
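Before taking the first backup, you can validate the stanza and the WAL archiving path end to end with pgBackRest's built-in check command:
$ sudo -u postgres pgbackrest --stanza=demo-cluster check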
Back up with pgBackRest.
$ sudo -u postgres pgbackrest --stanza=demo-cluster backup
Check the backup information.
$ sudo -u postgres pgbackrest info
Schedule Automatic Backups
Run sudo crontab -e on both Patroni servers and enter the following configuration. These jobs take a full backup every Sunday at 01:00, a differential backup Monday through Saturday at 01:00, and an incremental backup every four hours.
00 01 * * 0 sudo -u postgres pgbackrest --type=full --stanza=demo-cluster backup > /dev/null 2>&1
00 01 * * 1-6 sudo -u postgres pgbackrest --type=diff --stanza=demo-cluster backup > /dev/null 2>&1
0 */4 * * * sudo -u postgres pgbackrest --type=incr --stanza=demo-cluster backup > /dev/null 2>&1
Below are some commands to verify the backup status:
Check the backup information.
$ sudo -u postgres pgbackrest info
Check cron service logs to make sure there are pgBackRest jobs.
$ sudo journalctl -u cron.service -f
View the list of PostgreSQL Write-Ahead Logging (WAL) files.
$ sudo ls /var/lib/postgresql/data/pg_wal/
View the list of WAL archive status files.
$ sudo ls /var/lib/postgresql/data/pg_wal/archive_status/
Restore a Replica using pgBackRest
This section shows how to restore a replica from the backup, simulating a disaster by removing the replica's data directory. Because pgbackrest is the first entry in create_replica_methods, Patroni rebuilds the replica from the pgBackRest repository.
Get the list of members of the Patroni cluster.
$ sudo -u postgres patronictl -c /etc/patroni.yml list
Connect to the Patroni Replica server over SSH.
$ ssh user@patroni-replica-ip
Stop the Patroni service.
$ sudo systemctl stop patroni
Remove its data directory.
$ sudo rm -rf /var/lib/postgresql/data/*
Start the Patroni service.
$ sudo systemctl start patroni
Check Patroni service logs to make sure there are no errors.
$ sudo journalctl -u patroni.service -f
Query the test table from previous steps.
$ psql -h patroni-replica-ip -U postgres -c "select * from mydata;"
Maintenance
This section describes how to perform maintenance on each of the members of the cluster.
Perform a Manual Switchover
Check the status of the Patroni cluster. A zero value in the Lag in MB column shows that the replica has the latest data.
$ sudo -u postgres patronictl -c /etc/patroni.yml list
Perform the manual switchover.
$ sudo -u postgres patronictl -c /etc/patroni.yml switchover
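patronictl prompts interactively for the current leader, the candidate, and a scheduled time. If you prefer to script the operation, recent Patroni releases also accept these values as options; the member names below are the examples used in this article, and the command assumes patroni-1 currently holds the leader role:
$ sudo -u postgres patronictl -c /etc/patroni.yml switchover --leader patroni-1 --candidate patroni-2 --force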
Reinitialize a Patroni Node
Check the status of the Patroni cluster. A large value in the Lag in MB column shows that the replica isn't replicating normally. A common cause is that the database can't write updates to disk.
$ sudo -u postgres patronictl -c /etc/patroni.yml list
Reinitialize the affected node. Replace patroni-2 with the name of the affected member.
$ sudo -u postgres patronictl -c /etc/patroni.yml reinit patroni_cluster patroni-2
Upgrade etcd Instances
This section describes how to upgrade the plan of each member of the etcd cluster.
Check the etcd cluster status.
$ etcdctl endpoint health
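By default, etcdctl only checks the client endpoint on the local server. To check every member in one command, list all client endpoints explicitly, using the placeholder addresses from earlier:
$ etcdctl --endpoints=http://etcd-server-1-ip:2379,http://etcd-server-2-ip:2379,http://etcd-server-3-ip:2379 endpoint health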
Navigate to your Customer Portal and select one etcd instance.
Click the Settings tab on the server information page.
Select the Change Plan menu.
Select the new plan, then click Upgrade.
Check the etcd cluster status and wait until all nodes are healthy.
$ etcdctl endpoint health
Repeat the steps for the remaining nodes.
Upgrade Patroni Instances
This section describes how to upgrade the plan of each member of the Patroni cluster.
Get the list of members of the Patroni cluster.
$ sudo -u postgres patronictl -c /etc/patroni.yml list
(Optional) Perform a switchover if you want to upgrade the Patroni leader.
Navigate to your Customer Portal and select one Patroni instance.
Click the Settings tab on the server information page.
Select the Change Plan menu.
Select the new plan, then click Upgrade.
Check the Patroni cluster status and wait until all nodes are healthy.
$ sudo -u postgres patronictl -c /etc/patroni.yml list
Repeat the steps for the remaining nodes.
Conclusion
In this article, you deployed a high-availability PostgreSQL cluster with Patroni and etcd. This setup ensures the high availability of your database and protects you from potential data loss by streaming data between multiple servers and keeping backups on Vultr Object Storage.