This document describes how to configure Postgres Pro server
parameters for maximum performance and stability when working with
1C solutions. Edit these parameters in the
postgresql.conf configuration file. Note that all given
parameter values are approximate and should be treated as a starting
point for further fine-tuning.
For fast automatic tuning of Postgres Pro, you can
also use the pgpro_tune utility. It
provides the 1c.tune preset that specifies optimal values
for 1C-specific configuration parameters. For
detailed information about the usage of this utility, see
its documentation.
Increase the maximum number of allowed concurrent connections to the database server using the max_connections configuration parameter. 1C solutions can open a large number of connections even if not all of them are used, so it is recommended to allow at least 500 connections. The higher the number of client sessions, the higher this value should be.
-- max_connections = 500..10000
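To check how close the server is to the configured limit, you can run a query like the following against the standard pg_stat_activity view (a sketch; run it during peak load):

```sql
-- Compare the current number of backends with the configured limit
SELECT count(*) AS current_connections,
       current_setting('max_connections') AS max_connections
FROM pg_stat_activity;
```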
Increase the amount of memory allocated by the database for data caching. To do this, use the shared_buffers configuration parameter. For good performance, it is recommended to set this parameter to at least 25% of the available system RAM.
-- shared_buffers = RAM/4
Increase the maximum amount of memory used for temporary buffers within each database session to ensure that all temporary tables are handled correctly. To do this, use the temp_buffers configuration parameter.
The recommended value should be between 32MB and
256MB. The exact value depends on the amount of
memory on the server and the number of concurrent connections. The
more connections and the smaller the amount of memory, the smaller
the parameter value should be.
-- temp_buffers = 32..256MB
Increase the maximum amount of memory to be used by a single query operation before writing to temporary disk files. To do this, use the work_mem configuration parameter.
When choosing the parameter value, note that a complex query might perform several sort and hash operations at the same time, and each of them can use the specified amount of memory. Also, several running sessions can perform such operations concurrently. Therefore, the total used memory can be many times greater than the specified value. You can fine-tune the parameter value by monitoring the number of temporary files created in the system.
-- work_mem = RAM/32..64 or 32MB..256MB
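To fine-tune work_mem as described above, you can monitor temporary file activity through the standard pg_stat_database view, for example:

```sql
-- Databases that spill the most data to temporary files
-- are candidates for a higher work_mem
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_size
FROM pg_stat_database
ORDER BY temp_bytes DESC;
```

A steadily growing temp_files counter under normal load suggests that work_mem is too small.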
Increase the maximum amount of memory to be used by maintenance database
operations, such as VACUUM or CREATE INDEX.
To do this, use the maintenance_work_mem
configuration parameter.
Note that only one of these operations can be executed at a time by a
database session. Therefore, you can set this value significantly
larger than work_mem. Larger values can improve
performance for vacuuming and restoring database dumps.
-- maintenance_work_mem = RAM/16..32 or work_mem * 4 or 256MB..4GB
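As an illustration only, for a hypothetical dedicated server with 64 GB of RAM, the memory-related ratios above might translate into settings like these (substitute your own values; changing shared_buffers requires a server restart):

```sql
-- Hypothetical values for a 64 GB server, derived from the ratios above
ALTER SYSTEM SET shared_buffers = '16GB';       -- RAM/4; requires a restart
ALTER SYSTEM SET temp_buffers = '128MB';        -- within the 32MB..256MB range
ALTER SYSTEM SET work_mem = '128MB';            -- tune by monitoring temp files
ALTER SYSTEM SET maintenance_work_mem = '1GB';  -- several times work_mem
SELECT pg_reload_conf();                        -- reload for non-restart settings
```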
In the case of significant memory fragmentation, specify the following
environment variable in the /etc/systemd/system/postgresql.service
file:
-- Environment=MALLOC_MMAP_THRESHOLD_=8192
Increase the maximum number of simultaneously open files allowed to each server subprocess. To do this, use the max_files_per_process configuration parameter.
The recommended value is 10000, but it can be increased depending
on the load. The maximum allowed value depends on the operating
system. If the server reaches the specified limit, it repeatedly
opens and closes files, which degrades performance. You can monitor
open files using the lsof command.
-- max_files_per_process = 10000
Decrease the delay between activity rounds for the background writer using the bgwriter_delay configuration parameter. Note that too high a value shifts the write load to the checkpointer and backend processes, while too low a value keeps one of the CPU cores fully busy.
-- bgwriter_delay = 20ms
Increase values for the bgwriter_lru_multiplier and bgwriter_lru_maxpages configuration parameters that control the number of dirty buffers written by the background writer in each round.
-- bgwriter_lru_multiplier = 4.0 -- bgwriter_lru_maxpages = 400
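Whether the background writer keeps up can be checked through the standard pg_stat_bgwriter view (column names as in PostgreSQL 16 and earlier):

```sql
-- High buffers_backend relative to buffers_clean suggests the
-- background writer is not keeping up; a growing maxwritten_clean
-- means bgwriter_lru_maxpages is too low
SELECT buffers_clean, maxwritten_clean, buffers_backend
FROM pg_stat_bgwriter;
```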
Specify the number of concurrent disk I/O operations that any individual Postgres Pro session attempts to initiate in parallel. To do this, use the effective_io_concurrency configuration parameter. Currently, this parameter affects bitmap heap scans only.
For magnetic drives, a good starting point for this parameter is the number of separate drives comprising a RAID 0 stripe or RAID 1 mirror used for the database. However, too high parameter values will keep the disk array busy, which results in extra CPU overhead.
SSD disks can often process many concurrent requests, so the best value might be in the hundreds.
-- effective_io_concurrency = 2
Set the max_parallel_workers_per_gather
configuration parameter to 0. This disables parallel queries that can
be started by Gather and Gather Merge
nodes. Parallel queries can consume significantly more resources
than non-parallel queries.
-- max_parallel_workers_per_gather = 0
Leave the fsync configuration parameter set to
on (default). In this case, the Postgres Pro
server tries to make sure that updates are physically written to disk by
issuing the fsync() system calls. This ensures that
the database cluster can recover to a consistent state after an operating
system or hardware crash.
Although disabling the fsync parameter often leads to
a performance benefit, this can result in unrecoverable data corruption in
case of a power failure or system crash.
-- fsync = on
Important note: if your RAID has a cache and works in the write-back mode, make sure that the disk controller cache has a valid battery. Otherwise, data written to the cache will be lost in the case of a power failure, and the Postgres Pro server will be unable to recover the data.
Increase values for the checkpoint_timeout, min_wal_size, and max_wal_size configuration parameters to reduce the frequency of checkpoints. Too frequent checkpoints place significant load on the disk system.
-- checkpoint_timeout = 15min -- min_wal_size = 512MB..4GB -- max_wal_size = 2 * min_wal_size
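Checkpoint frequency can be observed in the standard pg_stat_bgwriter view (in PostgreSQL 16 and earlier; newer versions expose these counters in pg_stat_checkpointer):

```sql
-- A large checkpoints_req value relative to checkpoints_timed
-- indicates that max_wal_size is too small
SELECT checkpoints_timed, checkpoints_req
FROM pg_stat_bgwriter;
```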
Specify the commit_delay configuration parameter if the average number of transactions per second (TPS) exceeds 1000. This parameter adds a time delay before a WAL flush is initiated and can improve the group commit throughput.
-- commit_delay = 1000
Specify the effective_cache_size configuration parameter to define the planner's assumption about the effective size of the disk cache that is available to a single query.
This parameter does not reserve memory; it only informs the planner. A higher value makes it more likely that index scans will be used, while a lower value makes sequential scans more likely.
-- effective_cache_size = RAM - shared_buffers
Specify the random_page_cost configuration parameter to define the planner's estimate of the cost of a non-sequentially-fetched disk page.
The parameter value is specified relative to the
seq_page_cost parameter, which is set to 1.0
by default. Reducing the random_page_cost value
will cause the system to prefer index scans to sequential scans.
The recommended value depends on the seek time of the disk system.
The smaller the seek time, the smaller the parameter value should be
(but not less than 1.0).
-- random_page_cost = 1.5-2.0 (for RAID) or 1.1-1.3 (for SSD)
Increase the value for the from_collapse_limit
configuration parameter. The planner will merge sub-queries into upper
queries if the resulting FROM list would have no more
than this many items. Note that smaller parameter values reduce planning
time but can lead to less efficient query plans.
-- from_collapse_limit = 20
Increase the value of the join_collapse_limit
configuration parameter. The planner will rewrite explicit JOIN
constructs (except FULL JOINs) into lists of
FROM items whenever a list contains no more than
this many items. Note that smaller parameter values reduce planning
time but can lead to less efficient query plans.
-- join_collapse_limit = 20
Set the enable_sorted_merge_join configuration
parameter to off to disable the use of merge joins
in plans with sorting. 1C solutions do not
work with merge joins, so the planner wastes time considering
such joins when searching for an optimal plan. Disabling this parameter
reduces planning time.
-- enable_sorted_merge_join = off
Leave the geqo configuration parameter set to
on (default). This enables the genetic query optimizer
(GEQO) that does query planning using the heuristic
search. GEQO reduces planning time for complex
queries joining many relations but sometimes may produce plans that are
less efficient than plans chosen by the regular exhaustive-search
algorithm.
To manage the use of GEQO, specify the
geqo_threshold configuration parameter.
GEQO will be used to plan queries that involve
at least the specified number of FROM items.
-- geqo = on -- geqo_threshold = 12
Set the jit configuration parameter to
off to disable Just-in-Time (JIT)
compilation. This compilation is beneficial primarily for long-running
CPU-bound queries, such as analytical queries. For short queries,
the added overhead will be higher than the time that
JIT compilation can save.
-- jit = off
Leave the autovacuum configuration parameter
set to on (default) to run the autovacuum launcher
daemon. Note that disabling the autovacuum launcher daemon will lead
to database bloat and significant performance degradation.
-- autovacuum = on
Increase the maximum number of autovacuum processes that may be running at the same time using the autovacuum_max_workers configuration parameter. The more write requests are executed in the system, the more autovacuum processes are required.
-- autovacuum_max_workers = CPU cores / 4 .. CPU cores / 2 (but at least 4)
Decrease the minimum delay between autovacuum rounds using the autovacuum_naptime configuration parameter. If the value is too high, the required tables may not be cleaned in time, which leads to database bloat and significant performance degradation. However, a too low value creates unnecessary load.
-- autovacuum_naptime = 20s
Specify the directory in which to create temporary tables and indexes on temporary tables using the temp_tablespaces configuration parameter. Typically, 1C solutions use a lot of temporary tables. To increase the performance when working with these tables, locate this directory on separate fast disks.
You must first create the tablespace using the
CREATE TABLESPACE
command. If the characteristics of the target disks differ from the main disks,
specify the corresponding random_page_cost value
in this command.
-- temp_tablespaces = 'tablespace_name'
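A sketch of such a tablespace, with an example name and path (adjust both to your environment; the directory must already exist and be owned by the postgres OS user):

```sql
-- 'fasttemp' and the path are examples only
CREATE TABLESPACE fasttemp LOCATION '/mnt/nvme/pg_temp'
  WITH (random_page_cost = 1.1);
-- Then reference it in postgresql.conf:
-- temp_tablespaces = 'fasttemp'
```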
Set the row_security configuration parameter
to off to raise an error if a query's results are
filtered by a row-level security
policy.
-- row_security = off
Set the skip_temp_rel_lock configuration
parameter to on to skip locking for temporary
relations and indexes on these relations. This improves performance
because 1C solutions frequently use such
relations.
-- skip_temp_rel_lock = on
Increase the value for the max_locks_per_transaction
configuration parameter up to 256. It specifies how many
objects per server process or prepared transaction can be locked at the
same time.
Typically, 1C solutions use a lot of temporary tables.
Every backend process usually creates multiple temporary tables. When closing
a connection, Postgres Pro tries to drop all temporary tables
in a single transaction, so this transaction may use a lot of locks. If
the number of locks exceeds the max_locks_per_transaction
value, the transaction will fail leaving multiple orphaned temporary tables.
-- max_locks_per_transaction = 256
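Note that the shared lock table holds roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) locks, so raising this parameter also increases shared memory usage. Current lock usage can be checked through the standard pg_locks view:

```sql
-- Number of locks currently held or awaited across the cluster
SELECT count(*) AS total_locks FROM pg_locks;
```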
Set the standard_conforming_strings configuration
parameter to off to enable backslash escapes (\)
in all strings.
-- standard_conforming_strings = off
Set the escape_string_warning configuration
parameter to off to switch off the warning about
using the backslash escape symbol.
-- escape_string_warning = off
pg_stat_tmp Subdirectory
Change the default path to the
PGDATA/pg_stat_tmp
subdirectory to locate it separately from the cluster directory. This
subdirectory contains temporary files for the statistics subsystem. These
files are changed intensively, which creates significant load on the disk
system. It is recommended to locate this subdirectory in
tmpfs (temporary file storage).
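In releases where the statistics subsystem writes these files to disk (PostgreSQL 14 and earlier; since version 15 the cumulative statistics live in shared memory and this setting is absent), the location is controlled by the stats_temp_directory parameter; the path below is an example only:

```
-- stats_temp_directory = '/var/run/postgresql/pg_stat_tmp'
```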