Re: Limit memory usage by postgres_fdw batches - Mailing list pgsql-hackers

From Alexander Pyhalov
Subject Re: Limit memory usage by postgres_fdw batches
Date
Msg-id f4001f334060ea68053be6940415cc6f@postgrespro.ru
In response to Re: Limit memory usage by postgres_fdw batches  (Alexander Pyhalov <a.pyhalov@postgrespro.ru>)
List pgsql-hackers
Hi.

I've looked at the third patch some more and found some evident issues.
1) While using a tuplestore we generate too much garbage from tuple 
conversion, and it was not being cleaned up properly. I tried to fix that, 
but then we run into the second problem.

2) By the time we receive tuples, the memory has already been allocated in 
libpq, so it can be too late to do anything about it. In simple examples I 
can see libpq result sets that are several GB in size (and are likely not 
limited by anything).

I've used the attached script to create some tables.

select create_partitioned_rel('t1', 128, true, 1);
select create_partitioned_rel('t2', 4, true, 100);

insert into t1 select i, pg_read_binary_file('/some/100mb/file') from 
generate_series(1,128) i;
insert into t2 select i, pg_read_binary_file('/some/100mb/file') from 
generate_series(1,128) i;
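
For reference, a setup along these lines would exercise the same code paths 
(a minimal sketch only; the loopback server, table_name, hash partitioning 
and batch_size below are illustrative assumptions, not the exact contents of 
the attached script):

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'postgres', port '5432');
CREATE USER MAPPING FOR CURRENT_USER SERVER loopback;

-- storage for the remote side of one partition
CREATE TABLE t1_p0_storage (i int, s bytea);

-- the partitioned parent and one of its foreign partitions
-- (the attached script presumably creates a number of such partitions)
CREATE TABLE t1 (i int, s bytea) PARTITION BY HASH (i);
CREATE FOREIGN TABLE t1_p0 PARTITION OF t1
    FOR VALUES WITH (MODULUS 128, REMAINDER 0)
    SERVER loopback
    OPTIONS (table_name 't1_p0_storage', batch_size '100');

With a setup like this, both the INSERT ... SELECT above and the aggregate 
query below go through postgres_fdw: batching on the insert side and cursor 
fetches on the select side.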

And with simple queries like

select sum(length(s)) from TABLE;

we can see that the backend can easily consume several GB of RAM.
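
To see where the memory goes, on PostgreSQL 14 and later one can ask the 
backend running the query to dump its memory contexts into the server log 
from another session (just an observation aid; adjust the LIKE pattern to 
whatever identifies the test query):

SELECT pg_log_backend_memory_contexts(pid)
  FROM pg_stat_activity
 WHERE pid <> pg_backend_pid()
   AND query LIKE 'select sum(length(s))%';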

For now I'm starting to think that we need some form of FETCH that stops 
fetching data based on batch size...
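
To make the idea concrete: today the scan side of postgres_fdw asks the 
remote server for a fixed number of rows per round trip, roughly like this 
(a sketch, with fetch_size at its default of 100 and placeholder cursor and 
table names):

DECLARE c1 CURSOR FOR SELECT i, s FROM public.t1_p0_storage;
FETCH 100 FROM c1;  -- always 100 rows, even when every row carries a ~100MB bytea

Plain FETCH can only stop after a fixed row count, so a single batch of wide 
rows already blows past any reasonable memory limit; the FETCH-like mechanism 
I have in mind would presumably stop once the accumulated result size reaches 
the configured limit rather than after a fixed number of rows.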
-- 
Best regards,
Alexander Pyhalov,
Postgres Professional
