Performance tuning should be performed during application development and includes careful choice of hardware (for example, estimating the number of CPUs and the amount of memory per Shardman cluster node, and tuning your storage), OS tuning (for example, adjusting swappiness or network-related behavior), and DBMS tuning (choosing an efficient configuration). First of all, however, the application should be tested and tuned for a distributed DBMS. This includes designing a distributed schema (or translating an existing schema into a distributed one), tuning queries, using connection poolers, caching, and even checking for performance issues related to possible serialization errors or Shardman node outages. Schema design includes careful selection of the sharding key and deciding which tables should become global. Usually you select the sharding key so that:
- most queries filter out most partitions of sharded tables;
- sharded tables are colocated, and all joins of sharded tables are equi-joins on the sharding key.
These rules allow Shardman to efficiently exclude unused shards from queries and to push down joins to shards where the required data resides.
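The effect of these two rules can be illustrated with a minimal sketch. This is not Shardman's actual partitioning code: `N_SHARDS` and `shard_for` are hypothetical stand-ins for the number of replication groups and hash partitioning on the sharding key.

```python
from typing import Optional

N_SHARDS = 4  # hypothetical number of replication groups

def shard_for(sharding_key: int) -> int:
    """Route a sharding key value to a shard (hash partitioning sketch)."""
    return hash(sharding_key) % N_SHARDS

def shards_to_scan(key_filter: Optional[int]) -> set:
    """Shards a query must touch, given an optional sharding-key filter."""
    if key_filter is None:
        return set(range(N_SHARDS))   # no filter on the key: scan every shard
    return {shard_for(key_filter)}    # rule 1: all other shards are pruned

# Rule 2: colocated tables route rows by the same function of the same key,
# so an equi-join on the sharding key only ever matches rows that live on
# the same shard, and the join can be pushed down to that shard.
print(shards_to_scan(7))     # one shard to scan
print(shards_to_scan(None))  # a full scan of all shards
```

A query that does not filter on the sharding key, or a join on anything other than the sharding key, forces work on every shard; following the two rules keeps most queries on a single shard.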
Each Shardman node operates as a regular DBMS server,
so all standard recommendations for tuning PostgreSQL
for production load still apply. You should select shared_buffers,
work_mem and effective_cache_size depending
on the resources available to the DBMS. Keep in mind that if the cluster topology is
set to cross, Repfactor + 1 replication group
instances run on a single node. When all cluster nodes are online, replicas
should not utilize much CPU. However, in case of a node failure, the masters of
Repfactor replication groups can end up running on one server,
which can put significant load on it. While tuning the
max_connections parameter, note that each transaction can
initiate n-1 connections, where n is the
number of replication groups in the cluster. When Silk
is enabled, this is still true for transactions containing DML; when
Silk is disabled, it is also true for read-only transactions.
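The worst-case connection fan-out described above can be estimated with a back-of-the-envelope sketch. The function name and parameters are hypothetical, not part of Shardman's API; it only encodes the rule that a transaction may open connections to the other n-1 replication groups, depending on Silk and on whether the transaction contains DML.

```python
def worst_case_fanout(n_replication_groups: int,
                      concurrent_txns: int,
                      silk_enabled: bool,
                      txn_has_dml: bool) -> int:
    """Worst-case number of extra inter-node connections initiated
    by concurrent transactions of the given kind (hypothetical estimate)."""
    # With Silk enabled, only transactions containing DML fan out to the
    # other replication groups; with Silk disabled, read-only transactions
    # fan out as well.
    if silk_enabled and not txn_has_dml:
        return 0
    return concurrent_txns * (n_replication_groups - 1)

# 10 concurrent DML transactions in a 4-replication-group cluster may add
# up to 30 inter-node connections on top of the client connections.
print(worst_case_fanout(4, 10, silk_enabled=True, txn_has_dml=True))
```

When sizing max_connections, allow headroom for these incoming inter-node connections in addition to direct client connections.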