E.1. Postgres Pro Enterprise 16.9.1 #

Release Date: 2025-06-11

E.1.1. Overview #

This release is based on PostgreSQL 16.9 and Postgres Pro Enterprise 16.8.3. All changes inherited from PostgreSQL 16.9 are listed in PostgreSQL 16.9 Release Notes. As compared with Postgres Pro Enterprise 16.8.3, this version provides the following changes:

  • Added an experimental feature that allows using temporary tables in parallel queries; it can be enabled using the new enable_parallel_temptables configuration parameter. This feature cannot be used in production for now. Also added the new write_page_cost configuration parameter, which allows estimating the cost of flushing temporary table pages to disk. It takes effect only if enable_parallel_temptables is enabled.
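
    As a minimal sketch, the feature could be tried in a test session as follows (the value assigned to write_page_cost is an illustrative assumption):

        SET enable_parallel_temptables = on;
        SET write_page_cost = 4.0;   -- assumed numeric cost, by analogy with other *_cost parameters
        CREATE TEMP TABLE tmp_data AS SELECT g AS id FROM generate_series(1, 1000000) g;
        EXPLAIN SELECT count(*) FROM tmp_data;   -- the plan may now include parallel workers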

  • Added the new planner_upper_limit_estimation configuration parameter that enables the query planner to overestimate the expected number of rows in statements that contain a comparison to an unknown constant.
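
    A hedged sketch (the orders table is hypothetical): when a generic plan is built for a prepared statement, the parameter value is an unknown constant, so the planner may now use an upper-bound row estimate.

        SET planner_upper_limit_estimation = on;
        PREPARE by_amount(numeric) AS
            SELECT * FROM orders WHERE amount > $1;   -- $1 is unknown when a generic plan is built
        EXPLAIN EXECUTE by_amount(100);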

  • Added an experimental feature that allows using an in-memory system catalog for temporary tables; it can be enabled using the new enable_temp_memory_catalog configuration parameter. This feature cannot be used in production for now.
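
    A minimal sketch of enabling the feature in a test session:

        SET enable_temp_memory_catalog = on;
        CREATE TEMP TABLE session_scratch(id int);   -- catalog entries for this table are kept in memory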

  • Added the possibility for the autovacuum daemon to process indexes of a table in parallel. The parallel_autovacuum_workers storage parameter of a table controls whether the table should be processed in parallel and defines the number of additional parallel autovacuum workers that can be launched to process it. The number of autovacuum workers per cluster that can be used for parallel table processing is limited by the autovacuum_max_workers configuration parameter. Parallel autovacuum is currently experimental and intended for testing purposes only; do not deploy it in production environments.
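
    For illustration, assuming a hypothetical measurements table:

        ALTER TABLE measurements SET (parallel_autovacuum_workers = 2);   -- allow two extra workers
        ALTER TABLE measurements RESET (parallel_autovacuum_workers);     -- back to non-parallel processing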

  • Implemented the following enhancements and bug fixes for real-time query replanning:

    • Added the possibility to trigger real-time query replanning for a query that is currently running in the specified session. The replanning request is sent using the replan_signal function.
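
      A minimal sketch, assuming the function takes the PID of the target backend:

          SELECT replan_signal(12345);   -- request replanning of the query running in backend 12345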

    • Hid the confusing early terminated notice in EXPLAIN VERBOSE output; it is now shown only when replan_enable is on and query replanning is active.

    • Fixed an issue with real-time query replanning failing to optimize queries with subqueries.

  • Added the enable_alternative_sorting_cost_model configuration parameter, which allows you to enable or disable the query planner's use of an alternative model for estimating the cost of tuple sorting.

  • Added the enable_any_to_lateral_transformation configuration parameter, which allows enabling or disabling the transformation of ANY subqueries into LATERAL joins.
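
    A hedged example (tables t1 and t2 are hypothetical):

        SET enable_any_to_lateral_transformation = on;
        EXPLAIN SELECT * FROM t1
        WHERE t1.x = ANY (SELECT t2.y FROM t2 WHERE t2.k = t1.k);   -- may now be planned as a LATERAL join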

  • Implemented a performance optimization for working with table metadata: information about attributes is now obtained from the system cache instead of being read directly from the system catalog.

  • Introduced the following changes to the implementation of crash_info:

    • Added handling of SIGILL signals to crash_info processing.

    • Added details, such as the process start time and the query text at planning time, to crash_info output files.

    • Fixed incorrect function names in the first two to three lines of crash_info backtraces.

    • Fixed possible data truncation at the end of SQL query dump files. Previously, a buffer overflow during SQL query dumping could result in incomplete writes, causing truncated data at the end of files produced by crash_info.

    • Fixed handling of crash information signals (sent using the kill command) by backends. Previously, the first signal would flush crash_info information to the log, but the process would continue running without producing a core dump, even if configured, and only the second signal would terminate the backend and generate the core dump as expected.

  • Restricted superuser actions on temporary relations of other sessions. Superusers can now only use the DROP TABLE command with such relations. If the skip_temp_rel_lock configuration parameter is set to on, even dropping such relations is not allowed.
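
    For illustration, with a hypothetical temporary table scratch belonging to another session:

        SELECT * FROM pg_temp_3.scratch;   -- rejected even for superusers
        DROP TABLE pg_temp_3.scratch;      -- allowed unless skip_temp_rel_lock is on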

  • Fixed the following issues for CFS:

    • Fixed compatibility of CFS with the ALTER TABLESPACE ... SET/RESET commands.

    • Fixed an issue with memory allocation within a critical section in the implementation of VACUUM ANALYZE for compressed tables.

  • Fixed the following bugs related to autonomous transactions:

    • Fixed a segmentation fault that could occur when using postgres_fdw with autonomous transactions.

    • Fixed the loss of temporary tables from the parent transaction on autonomous transaction rollback.

    • Fixed an issue with empty xact_start columns in the pg_stat_activity view for backends executing autonomous transactions.

  • Fixed an issue with missing statistics about vacuuming when multiple index vacuum workers are used.

  • Fixed misuse of spinlocks in the automatic page repair worker.

  • Added the daterange_inclusive extension, which allows you to include the upper bound of a date range in output.

  • Added the pg_failover_slots extension as a separate pre-built package. pg_failover_slots is designed for automatic creation and synchronization of logical replication slots on physical replicas.

  • Added the pg_probackup3 solution for backup and recovery of Postgres Pro database clusters. Refer to the pg_probackup3 release notes for more details.

  • Added the pgpro_bindump module to manage backup and restore operations. This module implements specialized replication commands for an extended replication protocol, has its own format for archiving files, and does not require an SSH connection. It is designed specifically for use with the pg_probackup3 utility.

  • Added the pgpro_datactl utility to manage Postgres Pro data files, which includes a module for unpacking and analyzing CFS files.

  • Added the pgpro_result_cache extension to save query results for reuse.

  • Added the new pgpro_tune utility for automatic tuning of Postgres Pro configuration parameters.

  • Upgraded the BiHA solution to version 1.5 to provide the following features, optimizations, and bug fixes:

    • Added the biha.set_sync_standbys function that allows configuring quorum-based synchronous replication on the existing asynchronous BiHA cluster. You can also use the function to change the number of quorum-based synchronous standbys. For more information, see Configuring Quorum-Based Synchronous Replication on the Existing BiHA Cluster.
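
      A minimal sketch, assuming the function takes the desired number of quorum-based synchronous standbys:

          SELECT biha.set_sync_standbys(2);   -- hypothetical argument: two synchronous standbys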

    • Improved the verification of the biha_replication_user password during BiHA cluster initialization. If you specified the password in the password file beforehand, bihactl now gets the password automatically. If you did not, you are now prompted to enter the password twice to avoid misspelling.

    • Changed the connection database from postgres to biha_db for the automatic rewind functionality. This helps to avoid automatic rewind errors in cases when the connection string for the postgres database is not present in the password file.

    • Added the LEADER_CHANGE_STARTED callback type, which is called on all nodes when the leader is unavailable and the quorum is ready to start elections, or when you set the leader using the biha.set_leader function. You can use this callback to fence the old leader. For more information about the new callback, see Callback Types.

    • Added the --referee-with-postgres-db option of the bihactl add command, which allows copying the postgres database to the referee node. For more information, see The postgres Database on the Referee.

    • Changed the behavior of nodes in the NODE_ERROR state during elections. Such nodes can no longer participate in voting.

    • Expanded the list of cluster parameters returned by the biha.config function. The function now also returns the mode column displaying the operation mode of the node: regular, referee, or referee_with_wal.

    • Fixed an issue with multiple waiting for WAL to become available at... log messages on the referee node in the referee mode.

    • Fixed a bug that caused a segmentation fault when calling biha.set_* functions in a single-node (leader only) BiHA cluster.

    • Fixed an issue of decreasing the max_wal_senders parameter in a single-node (leader only) BiHA cluster. For more information about decreasing max_wal_senders and some other Postgres Pro configuration parameters, see Decreasing Postgres Pro Parameter Values.

    • Fixed a bug where nodes attempted to check the biha.sync_standbys_min value when updating the configuration of the asynchronous BiHA cluster.

    • Fixed a bug that allowed setting an empty password for biha_replication_user when executing the init command.

    • Fixed a bug where the old leader in the CSTATE_FORMING state could ignore elections in progress and start as the leader together with the new leader. Now the old leader does not start as the leader if it detects a node in the CANDIDATE state.

    • Fixed a bug that caused unexpected cluster behavior when the biha.sync_standbys_min value was set outside the valid range by the biha.set_sync_standbys_min function. BiHA no longer allows modifying the biha.sync_standbys_min parameter if the new value is higher than the number of quorum-based synchronous nodes in the synchronous_standby_names parameter.

  • Upgraded citus to version 12.1.7.1.

  • Upgraded dbms_lob to version 1.3 to fix incorrect handling of the amount argument in write: if the buffer is longer than amount, exactly amount bytes (for BLOB) or characters (for CLOB) are now written.

  • Upgraded the pg_hint_plan module to ignore hints of the new pgpro_result_cache extension.

  • Added the new PGPRO_TUNE environment variable for initdb, which specifies whether to use pgpro_tune without modifying command-line options directly.

  • Upgraded multimaster to version 1.3.0, which provides the following enhancements and bug fixes:

    • Improved the way nodes delete old synchronization points. Previously, every node deleted only a part of the synchronization point table and replicated changes to other nodes. This could cause synchronization issues and inability to delete old synchronization points. Now, every node deletes all synchronization points and does not replicate changes to other nodes.

    • Fixed potential node hanging during catchup when attempting to process aborted DDL transactions in a cluster of three or more nodes.

  • Upgraded oracle_fdw to increase the line size for EXPLAIN output to 3000 characters, since some Oracle catalog queries have long filter conditions.

  • Fixed an issue with updating pageinspect. In rare cases, depending on the sequence of previous updates, upgrading the database cluster and then attempting to update the pageinspect extension using ALTER EXTENSION pageinspect UPDATE TO could result in an error. To avoid such issues, it is strongly recommended to drop and recreate the extension using DROP EXTENSION followed by CREATE EXTENSION after upgrading the cluster. Since pageinspect does not create dependent objects, this approach is safe.
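
    For example:

        DROP EXTENSION pageinspect;
        CREATE EXTENSION pageinspect;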

  • Upgraded pg_proaudit to provide the following enhancements and bug fixes:

    • Added new object types: CATALOG RELATION and CATALOG FUNCTION.

    • Added new event fields: UUID, XID, and VXID. Now it is possible to identify an event by its UUID and transaction IDs (if applicable).

    • Corrected the behavior of the pg_proaudit.log_catalog_access configuration parameter to reflect the new logic of logging for system catalog objects.

    • Changed the logic of handling disconnection events. These events are now associated with the corresponding authentication events, so disconnection events are logged even when the rule is removed after authentication but before disconnection.

    • Fixed an issue where the DISCONNECT event type was not logged for a user who was a member of the role that was set in the logging rule.

    • Fixed the bug that caused a log entry to be written to an incorrect log file when log file rotation was configured.

    • Fixed an issue with pg_proaudit failing to log schema creation events.

    • Corrected the behavior of the logger process when deleting a role from a parallel session configured by the rule.

  • Upgraded pg_probackup to version 2.8.9 Enterprise, which provides the following new features, optimizations, and bug fixes:

    • Added the maintain command to resolve issues that can occur during a forced backup termination.

    • Added the --lock-lifetime option that sets the timeout for locks. This option is useful for computational environments with a slow network.

    • Stabilized retention of the initial directory permissions when running the init command.

    • Stabilized the checkdb command on a remote host.

    • Improved the stability of the point-in-time recovery (PITR) with validation.

    • Fixed the SignatureDoesNotMatch error that could occur when connecting to the VK Cloud S3 storage.

    • Fixed incorrect behavior that could occur when launching the thread that waits for WAL streaming in the ARCHIVE WAL delivery mode.

  • Upgraded pgpro_bfile to version 1.3 to add the new bfile_md5() function that calculates a 16-byte MD5 hash for the specified bfile object.
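
    A hedged example, assuming a hypothetical docs table with an attachment column of type bfile:

        SELECT bfile_md5(attachment) FROM docs;   -- returns a 16-byte MD5 hash per bfile object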

  • Upgraded pgpro_multiplan to version 1.2, which provides enhancements and bug fixes.

  • Upgraded pgpro_pwr to version 4.9, which mainly provides optimizations and bug fixes. Notable changes are as follows:

    • Added support for pgpro_stats 1.9.

    • Added the possibility to define the mode of collecting relation sizes either globally, through the pgpro_pwr.relsize_collect_mode extension parameter, or per server, through a parameter of the set_server_size_sampling function.

    • Enabled fine-tuning of server statistics collection by calling the set_server_setting function, which allows you to define which statistics should be collected.

    • Added a preview of storage parameters for tables and indexes in the Schema object statistics report section.

  • Upgraded pgpro_rp to version 1.3 to prevent related dump/restore failures. Previously, creating the pgpro_rp extension before restoring a dump could lead to a unique constraint violation during restoration of pgpro_rp.

  • Upgraded pgpro_scheduler to version 2.11.2 to fix an issue where repeated jobs could be executed at additional times if days of the week were set using the crontab format. pgpro_scheduler now checks all job time settings and runs jobs at the specified time.

  • Upgraded pgpro_sfile to version 1.5, which provides the following bug fixes and improvements:

    • Added a variant of the sf_find function that searches for an sfile object by an ID of the bigint type, and implemented the bigint::sfile type cast.
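
      For example (the object ID 42 is hypothetical):

          SELECT sf_find(42::bigint);   -- look up an sfile object by its bigint ID
          SELECT 42::bigint::sfile;     -- the new type cast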

    • Fixed an issue in pgpro_sfile that could cause errors like tuple concurrently updated during DELETE or TRUNCATE operations.

  • Upgraded pgpro_stats to version 1.9, which provides the following bug fixes and improvements:

    • Enhanced session tracing to provide more information. Specifically, the new time_info filter attribute controls inclusion of additional information in the session-tracing output, and the pgpro_stats.trace_query_text_size configuration parameter can limit the size of the query in the session-tracing output.
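
      A minimal sketch (the limit value and its unit are assumptions):

          SET pgpro_stats.trace_query_text_size = 1024;   -- truncate query text in the session-tracing output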

    • Aligned the names of the explain_* filter attributes of the session tracer with the names of session-tracing configuration parameters.

    • Changed the format of the statistics dump file and the corresponding save/load routines.

    • Implemented turning off the session tracer functionality when no session-tracing filters are specified.

    • Prohibited inclusion of both pgpro_stats and pg_stat_statements in the list of shared_preload_libraries. If both are included, the database server will not start.
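
      For example, keep only one of the two in the configuration:

          ALTER SYSTEM SET shared_preload_libraries = 'pgpro_stats';   -- do not also list pg_stat_statements; takes effect after a restart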

  • Upgraded the pg_wait_sampling extension to provide the following bug fixes:

    • Fixed an issue where GUC variables could be overridden when using parallel workers.

    • Fixed an issue with malformed samples caused by a race condition when pg_wait_sampling.sample_cpu is disabled.

  • Fixed an issue with sr_plan failing to register queries involving the INTERVAL 'const' notation.

  • Removed the --tune option from pg-setup. Use the new pgpro_tune utility instead.

  • Deprecated the experimental vops extension.

E.1.2. Migration to Version 16.9.1 #

If you are upgrading from a Postgres Pro Enterprise release based on the same PostgreSQL major version, it is enough to install the new version into your current installation directory.

Important

To upgrade your BiHA cluster from Postgres Pro Enterprise 16.8 or earlier to Postgres Pro Enterprise 16.9, see migration instructions for BiHA.

ABI versions may change between minor releases of Postgres Pro. If this is the case, and you see the ABI mismatch error when trying to run your extension, make sure to install a new version of the extension supplied with a new release of Postgres Pro, or recompile your third-party extension to be able to use it with the current version of Postgres Pro.

When upgrading your high-availability cluster from Postgres Pro Enterprise versions 16.3.x or lower, first disable automatic failover if it was enabled and upgrade all the standby servers, then upgrade the primary server, promote a standby, and restart the former primary (possibly with pg_rewind).

If you take backups using pg_probackup and you have previously upgraded it to version 2.8.0 Enterprise or 2.8.1 Enterprise, make sure to upgrade it to version 2.8.2 Enterprise or higher and retake a full backup after upgrade, since backups taken using those versions might be corrupted. If you suspect that your backups taken with versions 2.8.0 or 2.8.1 may be corrupted, you can validate them using version 2.8.2.

To migrate from PostgreSQL, as well as Postgres Pro Standard or Postgres Pro Enterprise based on a previous PostgreSQL major version, see the migration instructions for version 16.