If your site administrator has not set things up in the default
way, you might have some more work to do. For example, if the
database server machine is a remote machine, you will need to set
the PGHOST environment variable to the name of the
database server machine. The environment variable
PGPORT might also have to be set. The bottom line is
this: if you try to start an application program and it complains
that it cannot connect to the database, you should consult your
site administrator or, if that is you, the documentation to make
sure that your environment is properly set up. If you did not
understand the preceding paragraph, read the next section.
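For example, in a Unix shell the variables might be set as follows. The host name and port here are placeholders, not values from this guide; substitute whatever your site administrator tells you.

```shell
# Hypothetical values -- replace with the host and port of your
# actual database server machine.
export PGHOST=db.example.com
export PGPORT=5433
# Client programs such as psql pick these variables up automatically.
echo "clients will connect to $PGHOST:$PGPORT"
```

After this, client applications started from the same shell connect to the remote server without extra command-line options.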
Postgres Pro Shardman is composed of several software components:
PostgreSQL 18 DBMS with a set of patches.
Postgres Pro Shardman extension.
Management tools and services, including built-in shard manager to provide high availability.
BiHA extension.
Postgres Pro Shardman stores its configuration in the distributed storage.
The shardmand daemon monitors the cluster configuration and manages BiHA clusters, which are used to guarantee high availability of all shards. The common shardmand configuration is likewise stored in the distributed storage.
Currently, Postgres Pro Shardman packages are available for:
Ubuntu 22.04/24.04 (no certified edition)
Debian 12/13 (no certified edition)
Red OS 8 and 7.3 (no certified edition)
Alt 10/11 (no certified edition) and ALT SP Release 10
Astra Linux 1.7/1.8 and s390 (no certified edition)
The table below lists all the available Postgres Pro Shardman packages.
Table 1.1. Postgres Pro Shardman Packages
| Package | Description |
|---|---|
| | Top-level package that installs and configures Postgres Pro Shardman for server and client systems. Do not use this package for upgrades or migrations. Important: Installing the |
| | Top-level package that installs debug symbols for other packages. |
| | Standard client applications, such as psql or pg_dump. |
| | Debug package. |
| | shardmand daemon to manage Postgres Pro Shardman services. |
| | Command-line utilities, such as shardmanctl, to manage the daemon. |
| | Shared libraries required to deploy client applications, including libpq; runtime libraries for the ECPG processor. |
| | Debug package. |
| | Postgres Pro Shardman server and the PL/pgSQL server-side programming language. |
| | Debug package. |
| | Additional extensions and programs deployable on database servers. |
| | Debug package. |
| | Header files and libraries for developing client applications and server extensions. On Debian-based systems, this package is called |
| | Debug package for header files. |
| | Server-side programming language based on Perl (see Chapter 45). |
| | Debug package. |
| | Server-side programming language based on Python 3 (see PL/Python). |
| | Debug package. |
| | Server-side programming language based on Tcl (see Chapter 44). |
| | Debug package. |
| | Documentation (English). |
| | Documentation (Russian). |
| | Go version of bihactl that shardmanctl and other tools use to manage BiHA. |
| | Support for Just-in-Time (JIT) compilation. This package is available only for the x86_64 architecture and only for the supported Debian and Ubuntu systems, Astra Linux 1.7/1.8, and the supported ALT systems. To learn more about enabling and using JIT, see Chapter 29. |
| | Debug package. |
| | pg_probackup3 utility. It is strictly prohibited to use any pg_probackup3 version except the one included in the Postgres Pro Shardman repository. |
| | Debug package. |
| | pgpro_controldata application to display control information of a PostgreSQL/Postgres Pro Shardman database cluster and compatibility information for a cluster and/or server. |
| | pgpro_pwr extension that enables you to generate workload reports, which help to discover the most resource-intensive activities in your database. |
Additionally, Postgres Pro Shardman provides separate packages with debug information for some operating systems:
On Debian-based systems, see the postgrespro-sdm-18-dbg package.
On ALT Linux systems, all packages containing binary files have the corresponding
-debuginfo packages.
The minimal configuration described here consists of three Postgres Pro Shardman nodes.
Let’s suppose that we have the following node names and IP addresses:
192.0.1.20   sdm01 - Shardman node 1
192.0.1.21   sdm02 - Shardman node 2
192.0.1.23   sdm03 - Shardman node 3
192.0.1.100  ntp   - local time sync server (optional)
Each node has 4 GB RAM, a 20 GB HDD, 2 CPUs, and Ubuntu 22.04 installed.
Add the node names to /etc/hosts. This step must be performed on all nodes.
sudo /bin/sh -c 'cat << EOF >> /etc/hosts
192.0.1.20 sdm01
192.0.1.21 sdm02
192.0.1.23 sdm03
EOF'
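After updating /etc/hosts, it is worth verifying that every node name actually resolves before proceeding; getent is standard on Linux, so this check works the same on all the supported distributions:

```shell
# Sanity check: confirm each node name resolves (run on each node).
# A missing entry prints a warning instead of aborting the loop.
for h in sdm01 sdm02 sdm03; do
    getent hosts "$h" || echo "$h: not found in /etc/hosts or DNS"
done
```

If any node prints a warning, fix its /etc/hosts entry before moving on to the time-synchronization step.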
This step must be performed on all nodes.
Deploy and start chrony daemon on all hosts.
sudo apt install -y chrony
By default, chrony gets the time from available servers on the internet or from a local time server. You can check the available time servers as follows:
chronyc sources

MS Name/IP address           Stratum Poll Reach LastRx Last sample
===============================================================================
^? 192.0.1.100                     1   6     7     1    -98us[  -98us] +/-   11ms
^* time.cloudflare.com             3   6     7     1   +139us[ +163us] +/-   11ms
^+ tms04.deltatelesystems.ru       1   6     7     1   -381us[ -357us] +/-   17ms
It is desirable to synchronize time with your own server or a local server
dedicated to the cluster; in this example, that is the ntp server.
To do this, make changes similar to the following in the
chrony configuration:
sudo tee "/etc/chrony/chrony.conf" > /dev/null << 'EOF'
server 192.0.1.100 iburst
keyfile /etc/chrony.keys
driftfile /var/lib/chrony/chrony.drift
log tracking measurements statistics
logdir /var/log/chrony
EOF
sudo systemctl restart chrony
Check that chrony is connected to the appropriate server.
chronyc sources

MS Name/IP address           Stratum Poll Reach LastRx Last sample
===============================================================================
^? 192.0.1.100                     8   6    17    37    +14us[  +70us] +/-  161us

chronyc tracking

Reference ID    : 0A80000C (ntp.local)
Stratum         : 9
Ref time (UTC)  : Wed Nov 15 11:58:52 2023
System time     : 0.000000004 seconds slow of NTP time
Last offset     : -0.000056968 seconds
RMS offset      : 0.000056968 seconds
Frequency       : 10.252 ppm fast
Residual freq   : -2.401 ppm
Skew            : 364.419 ppm
Root delay      : 0.000455358 seconds
Root dispersion : 0.010503666 seconds
Update interval : 2.1 seconds
Leap status     : Normal
Let’s add a Postgres Pro Shardman repository on each Postgres Pro Shardman node:
Install the following packages:
sudo apt install -y vim curl jq
Run
curl -fsSL -u "<user>:<password>" https://repo.postgrespro.ru/sdm/sdm-18/keys/pgpro-repo-add.sh > pgpro-repo-add.sh
chmod +x pgpro-repo-add.sh
Specify your repository login and password in the pgpro-repo-add.sh
script.
ex -s -c "%s/REPOUSER=/REPOUSER=<user>/g" -c "wq" "pgpro-repo-add.sh"
ex -s -c "%s/PASSWORD=/PASSWORD=<password>/g" -c "wq" "pgpro-repo-add.sh"
Run the script with sudo:
sudo ./pgpro-repo-add.sh
The next step is to install the packages (on each node):
sudo apt update
sudo apt install -y postgrespro-sdm-18-server postgrespro-sdm-18-client \
    postgrespro-sdm-18-contrib postgrespro-sdm-18-libs pg-probackup3 \
    sdm-18-services sdm-18-tools biha-utils-sdm-18
Suppose we have chosen the default cluster name, cluster0.
The next step is to put the Postgres Pro Shardman environment
variables into the /etc/shardman directory (on each node):
sudo sh -c 'cat << EOF > /etc/shardman/shardmand18-cluster0.env
SDM_CLUSTER_NAME=cluster0
SDM_LOG_LEVEL=info
EOF'
The file and directory are created with sudo, but
shardmanctl is later run without sudo
and therefore cannot read the file with the environment variables. To work
around this, either add the variables to the environment with export, or
grant your user access rights to the file and the directory.
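The export option mentioned above can look like this; the values simply mirror the contents of the env file created earlier:

```shell
# Alternative to changing file permissions: export the same variables
# in the shell that will run shardmanctl.
export SDM_CLUSTER_NAME=cluster0
export SDM_LOG_LEVEL=info
```

Add these lines to your shell profile (e.g. ~/.bashrc) if you want them to persist across sessions.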
For your user, add /opt/pgpro/sdm-18/bin to the
PATH environment variable:
echo "export PATH=$PATH:/opt/pgpro/sdm-18/bin" >> ~/.bashrc
source ~/.bashrc
Let’s generate a sample configuration with the Postgres Pro Shardman utilities (on one Postgres Pro Shardman node only).
shardmanctl config generate > spec.json
In this step, you can make changes to the cluster specification (configuration),
e.g., change the password or the PostgreSQL shared_buffers parameter.
For a simple cluster, change Repfactor to 0. For more details,
see Shardman configuration file.
jq ' .Repfactor = 0 ' "spec.json" > tmp.json && mv tmp.json "spec.json"
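The same jq pattern works for any other tweak, and several edits can be chained in one pass. In the sketch below a toy file stands in for the generated spec.json, and the PgSuPassword key name is an assumption; inspect your own spec.json for the actual keys:

```shell
# Toy stand-in for the generated spec.json; "PgSuPassword" is an
# ASSUMED key name -- check the real file for the actual structure.
echo '{"Repfactor": 1, "PgSuPassword": "!!!CHANGE_ME!!!"}' > spec.json
# Chain several edits in a single jq pass, then replace the file.
jq '.Repfactor = 0 | .PgSuPassword = "S3cret"' spec.json > tmp.json && mv tmp.json spec.json
cat spec.json
```

Writing to a temporary file and then renaming it avoids truncating spec.json if jq fails mid-way.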
Now for the final steps. First, let's initialize the cluster configuration in the distributed storage (on any one Postgres Pro Shardman node).
shardmanctl init -f spec.json
The expected output is:
2023-04-18T12:30:03.043Z DEBUG cmd/common.go:100 Waiting for metadata lock...
2023-04-18T12:30:03.048Z DEBUG cluster/cluster.go:365 DataDir is not specified, setting to default /var/lib/pgpro/sdm-18/data
Enable and start the shardmand service (on each Postgres Pro Shardman node):
sudo systemctl enable --now shardmand18@cluster0
sudo systemctl status shardmand18@cluster0
● shardmand18@cluster0.service - deployment daemon for shardman
Loaded: loaded (/lib/systemd/system/shardmand18@.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2023-04-18 12:28:18 UTC; 2min 13s ago
Docs: https://github.com/postgrespro/shardman
Main PID: 618 (shardmand)
Tasks: 10 (limit: 4571)
Memory: 32.0M
CPU: 422ms
CGroup: /system.slice/system-shardmand.slice/shardmand@cluster0.service
└─618 /opt/pgpro/sdm-18/bin/shardmand --cluster-name cluster0 --system-bus --user postgres
The following command must be executed on one Postgres Pro Shardman node only.
This step assumes that all previous steps were executed successfully:
the time on all hosts is synchronized,
and the shardmand daemon is running on sdm01, sdm02, and sdm03.
The final step is executed with the shardmanctl command as follows:
shardmanctl nodes add -n sdm01,sdm02,sdm03 \
--cluster-name cluster0 \
--log-level debug
The expected output should be:
2023-04-18T12:43:11.300Z DEBUG cmd/common.go:100 Waiting for metadata lock...
2023-04-18T12:43:11.306Z INFO cluster/store.go:277 Checking if shardmand on all nodes have applied current cluster configuration
✓ Waiting for shardmand on node sdm01 to apply current configuration: success 0.000s
✓ Waiting for shardmand on node sdm02 to apply current configuration: success 0.000s
✓ Waiting for shardmand on node sdm03 to apply current configuration: success 0.000s
2023-04-18T12:43:11.307Z INFO add/case.go:112 Initting Stolon instances...
2023-04-18T12:43:11.312Z INFO add/case.go:170 Waiting for Stolon daemons to start... make sure shardmand daemons are running on the nodes
✓ Waiting for Stolon daemons of rg clover-1-sdm01: success 31.012s
✓ Waiting for Stolon daemons of rg clover-1-sdm02: success 0.012s
✓ Waiting for Stolon daemons of rg clover-1-sdm03: success 0.012s
2023-04-18T12:43:42.336Z INFO add/case.go:187 Adding repgroups...
✓ waiting rg 1 config apply: done 7.014s
2023-04-18T12:43:49.444Z DEBUG broadcaster/worker.go:33 start broadcaster worker for repgroup id=1
2023-04-18T12:43:49.453Z DEBUG broadcaster/worker.go:51 repgroup 1 connect established
2023-04-18T12:43:49.453Z DEBUG commands/addrepgroup.go:575 waiting for extension lock...
2023-04-18T12:43:49.453Z DEBUG commands/addrepgroup.go:137 Loading schema into replication group rg 1 ...
2023-04-18T12:44:25.665Z DEBUG rebalance/service.go:528 wait all tasks finish
2023-04-18T12:44:25.666Z DEBUG broadcaster/worker.go:75 finish broadcaster worker for repgroup id=1
2023-04-18T12:44:25.666Z DEBUG broadcaster/worker.go:75 finish broadcaster worker for repgroup id=2
2023-04-18T12:44:25.666Z INFO add/case.go:221 Successfully added nodes sdm01, sdm02, sdm03 to the cluster
The “Successfully added nodes sdm01, sdm02, sdm03 to the cluster” message means that
everything is fine and nodes sdm01, sdm02, and sdm03 are working properly.
Let's check the status of the cluster nodes.
shardmanctl cluster status

RESTART REQUIRED PARAMS STATUS   OK
BIHA STATUS                      OK
METADATA STATUS                  OK
SHARDMAND STATUS                 OK
REPLICATION GROUP STATUS         OK
DICTIONARY STATUS                OK
KEEPER STATUS                    OK
STORE STATUS                     OK
MASTER STATUS                    OK
To connect to the cluster, get the cluster connection string on any
cluster node (sdm01, sdm02, or sdm03):
shardmanctl getconnstr
Output example:
dbname=postgres host=sdm01,sdm02,sdm03 password=!!!CHANGE_ME!!! port=5432,5432,5432 user=postgres
And then let’s try to connect:
psql -d 'dbname=postgres host=sdm01,sdm02,sdm03 password=!!!CHANGE_ME!!! port=5432,5432,5432 user=postgres'
Output example:
psql (18.1)
Type "help" for help.