Thread: PostgreSQL partition tables use more private memory
# PostgreSQL partition tables use more private memory
Hi, we are seeing a per-process private memory issue with partitioned tables in our production environment. We're not sure whether it's a bug or whether Pg simply works this way.
- When DML runs against a partitioned table, the backend process occupies noticeably more private memory (measured as RES-SHR in top; a small measurement snippet follows this list) than with a normal table; it can be 10x more.
- The usage grows with the number of partitions and columns: the more partitions and columns the partitioned table has, the more memory the backend occupies.
- It also grows with the number of partitioned tables referenced by the DML statements executed in the process; touching two tables roughly doubles the memory, as the valgrind logs show.
- The backend does not release this memory until it disconnects; unfortunately our applications use a connection pool that never releases its connections.
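For reference, this is roughly how we read a single backend's private memory. It is only a measurement sketch: the pid file is the one written by test.sql below, and the column positions assume top's default layout.
PID=$(cat /tmp/test.pid)                                   # backend pid recorded by test.sql
top -b -n 1 -p "$PID" | awk -v p="$PID" '$1 == p {print "RES=" $6, "SHR=" $7}'   # private memory is approximately RES - SHR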
The production server that hits this problem has about 48 GB of memory and more than one hundred PostgreSQL backends, each consuming a couple of hundred MB of private memory; at, say, 300 MB per backend, a hundred backends already account for roughly 30 GB before shared buffers are counted. The machine has recently been running out of physical memory and swap.
I reproduced this under valgrind in a test environment; the steps are below.
## 1. Environment
- RHEL 6.3 X86_64
- PostgreSQL 10.2
## 2. Non-partitioned table SQL
drop table if exists tb_part_test cascade;
create table tb_part_test
(
STATIS_DATE int NOT NULL,
ORDER_NUM int DEFAULT NULL,
CMMDTY_CODE varchar(40) default '',
RECEIVE_PLANT varchar(4) DEFAULT '',
RECEIVE_LOCAT varchar(10) DEFAULT '',
SUPPLIER_CODE varchar(20) DEFAULT '',
RECEIVE_PLANT_TYPE varchar(2) DEFAULT '',
c1 varchar(2) DEFAULT '',
c2 varchar(2) DEFAULT '',
c3 varchar(2) DEFAULT '',
c4 varchar(2) DEFAULT '',
c5 varchar(2) DEFAULT '',
c6 varchar(2) DEFAULT '',
c7 varchar(2) DEFAULT '',
c8 varchar(2) DEFAULT '',
c9 varchar(2) DEFAULT '',
c10 varchar(2) DEFAULT '',
c11 varchar(2) DEFAULT '',
c12 varchar(2) DEFAULT '',
c13 varchar(2) DEFAULT '',
c14 varchar(2) DEFAULT '',
c15 varchar(2) DEFAULT '',
c16 varchar(2) DEFAULT '',
c17 varchar(2) DEFAULT '',
c18 varchar(2) DEFAULT '',
c19 varchar(2) DEFAULT '',
c20 varchar(2) DEFAULT '',
c21 varchar(2) DEFAULT '',
c22 varchar(2) DEFAULT '',
c23 varchar(2) DEFAULT '',
c24 varchar(2) DEFAULT ''
);
## 3. Partitioned table SQL
drop table if exists tb_part_test cascade;
create table tb_part_test
(
STATIS_DATE int NOT NULL,
ORDER_NUM int DEFAULT NULL,
CMMDTY_CODE varchar(40) default '',
RECEIVE_PLANT varchar(4) DEFAULT '',
RECEIVE_LOCAT varchar(10) DEFAULT '',
SUPPLIER_CODE varchar(20) DEFAULT '',
RECEIVE_PLANT_TYPE varchar(2) DEFAULT '',
c1 varchar(2) DEFAULT '',
c2 varchar(2) DEFAULT '',
c3 varchar(2) DEFAULT '',
c4 varchar(2) DEFAULT '',
c5 varchar(2) DEFAULT '',
c6 varchar(2) DEFAULT '',
c7 varchar(2) DEFAULT '',
c8 varchar(2) DEFAULT '',
c9 varchar(2) DEFAULT '',
c10 varchar(2) DEFAULT '',
c11 varchar(2) DEFAULT '',
c12 varchar(2) DEFAULT '',
c13 varchar(2) DEFAULT '',
c14 varchar(2) DEFAULT '',
c15 varchar(2) DEFAULT '',
c16 varchar(2) DEFAULT '',
c17 varchar(2) DEFAULT '',
c18 varchar(2) DEFAULT '',
c19 varchar(2) DEFAULT '',
c20 varchar(2) DEFAULT '',
c21 varchar(2) DEFAULT '',
c22 varchar(2) DEFAULT '',
c23 varchar(2) DEFAULT '',
c24 varchar(2) DEFAULT ''
) PARTITION BY LIST (STATIS_DATE);
DO $$
DECLARE r record;
BEGIN
FOR r IN SELECT to_char(dd, 'YYYYMMDD') dt FROM generate_series( '2018-01-01'::date, '2018-12-31'::date, '1 day'::interval) dd
LOOP
EXECUTE 'CREATE TABLE P_tb_part_test_' || r.dt || ' PARTITION OF tb_part_test FOR VALUES IN (' || r.dt || ')';
END LOOP;
END$$;
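A quick sanity check (just a sketch; it assumes psql connects to the same database the DO block ran in) confirms that all 365 daily partitions were created:
psql -c "select count(*) from pg_inherits where inhparent = 'tb_part_test'::regclass;"   # expect 365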
## 4. test.sql
copy (select pg_backend_pid()) to '/tmp/test.pid';
update tb_part_test set ORDER_NUM = '6' where CMMDTY_CODE = '10558278714' AND RECEIVE_PLANT = 'DC44' AND RECEIVE_LOCAT = '974L' AND SUPPLIER_CODE = '10146741' AND STATIS_DATE = '20181219' AND RECEIVE_PLANT_TYPE = '04';
## 5. test1.sql (tb_part_test1 is a partitioned table with the same structure as tb_part_test)
copy (select pg_backend_pid()) to '/tmp/test.pid';
update tb_part_test set ORDER_NUM = '6' where CMMDTY_CODE = '10558278714' AND RECEIVE_PLANT = 'DC44' AND RECEIVE_LOCAT = '974L' AND SUPPLIER_CODE = '10146741' AND STATIS_DATE = '20181219' AND RECEIVE_PLANT_TYPE = '04';
update tb_part_test1 set ORDER_NUM = '6' where CMMDTY_CODE = '10558278714' AND RECEIVE_PLANT = 'DC44' AND RECEIVE_LOCAT = '974L' AND SUPPLIER_CODE = '10146741' AND STATIS_DATE = '20181219' AND RECEIVE_PLANT_TYPE = '04';
## 6. Valgrind command
valgrind --leak-check=full --gen-suppressions=all --time-stamp=yes --log-file=/tmp/%p.log --trace-children=yes --track-origins=yes --read-var-info=yes --show-leak-kinds=all -v postgres --log_line_prefix="%m %p " --log_statement=all --shared_buffers=4GB
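Because --log-file=/tmp/%p.log writes one log per process and --trace-children follows the forked backends, the backend pid captured by test.sql identifies which log to inspect, for example:
less /tmp/$(cat /tmp/test.pid).log   # log of the backend that ran the benchmark statements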
## 7. Test steps
1. Start PostgreSQL under valgrind, create the non-partitioned table, run pgbench for 1000 s; this produced 29201_nonpart_1000s.log.
pgbench -n -T 1000 -r -f test.sql
2. Start PostgreSQL under valgrind, create the partitioned table, run pgbench for 1000 s; this produced 27064_part_1000s.log.
pgbench -n -T 1000 -r -f test.sql
3. Start PostgreSQL under valgrind, create the partitioned table, run pgbench for 2000 s; this produced 864_part_2000s.log.
pgbench -n -T 2000 -r -f test.sql
4. Start PostgreSQL under valgrind, create the partitioned table, run pgbench for 1000 s against two tables; this produced 16507_part_2tb_1000s.log.
pgbench -n -T 1000 -r -f test1.sql
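To compare the runs we mainly looked at the leak summaries. A rough way to pull them out of the logs (filenames as above; adjust the paths to wherever the renamed logs were copied) is:
for f in 29201_nonpart_1000s.log 27064_part_1000s.log 864_part_2000s.log 16507_part_2tb_1000s.log; do
  echo "== $f"
  grep -E "definitely lost:|indirectly lost:|still reachable:" "$f" | tail -n 3
done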
The attached files are the valgrind logs. Thanks.
Sincerely,
Marcus Mo
Hi
On Thu, Dec 27, 2018 at 11:48, 大松 <dasong2410@163.com> wrote:
> # PostgreSQL partition tables use more private memory
> Hi, we are seeing a per-process private memory issue with partitioned tables in our production environment. We're not sure whether it's a bug or whether Pg simply works this way.
> - When DML runs against a partitioned table, the backend process occupies noticeably more private memory (measured as RES-SHR in top) than with a normal table; it can be 10x more.
PostgreSQL uses process memory for its catalog caches. Partitions are like tables in this respect: if you use a lot of partitions, you use a lot of tables, and you need a lot of memory for the caches. These caches are only dropped when something in the system catalog changes.
> - The usage grows with the number of partitions and columns: the more partitions and columns the partitioned table has, the more memory the backend occupies.
> - It also grows with the number of partitioned tables referenced by the DML statements executed in the process; touching two tables roughly doubles the memory, as the valgrind logs show.
> - The backend does not release this memory until it disconnects; unfortunately our applications use a connection pool that never releases its connections.
This is the expected behaviour: a) the glibc allocator keeps memory it has allocated inside the process until the process exits, and b) as long as nothing in the system catalog changes, the caches are not cleaned.
When you run into this issue, it is necessary to close the backend processes from time to time: pooling software can usually enforce a maximum connection lifetime and close a session once it is exceeded, maybe one hour, maybe twenty minutes.
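To see which pooled sessions have been alive long enough to accumulate large caches, something like this can help (a sketch only; the 20-minute threshold is just an example value):
psql -c "select pid, now() - backend_start as age, state from pg_stat_activity where now() - backend_start > interval '20 minutes' order by age desc;"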
Regards
Pavel
Hi,
On 2018/12/27 15:44, 大松 wrote:
> # PostgreSQL partition tables use more private memory
> Hi, we are seeing a per-process private memory issue with partitioned tables in our production environment. We're not sure whether it's a bug or whether Pg simply works this way.
> - When DML runs against a partitioned table, the backend process occupies noticeably more private memory (measured as RES-SHR in top) than with a normal table; it can be 10x more.
> - The usage grows with the number of partitions and columns: the more partitions and columns the partitioned table has, the more memory the backend occupies.
> - It also grows with the number of partitioned tables referenced by the DML statements executed in the process; touching two tables roughly doubles the memory, as the valgrind logs show.
> - The backend does not release this memory until it disconnects; unfortunately our applications use a connection pool that never releases its connections.
> The production server that hits this problem has about 48 GB of memory and more than one hundred PostgreSQL backends, each consuming a couple of hundred MB of private memory. It has frequently been running out of physical memory and swap recently.
Other than the problems Pavel mentioned in his email, it's a known problem that PostgreSQL will consume tons of memory if you perform an UPDATE/DELETE on a partitioned table containing many partitions, which is apparently what you're describing.
It's something we've been working on to fix. Please see if the patches posted in the following email help reduce the memory footprint in your case.
https://www.postgresql.org/message-id/55bd88c6-f311-2791-0a36-11c693c69753%40lab.ntt.co.jp
Thanks,
Amit
Thank you guys, I will test the patches you mentioned and keep you updated.
Thanks,
Marcus
Sent from my iPhone
> On Dec 27, 2018, at 19:28, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote:
> [...]