Hi.
I've looked more closely at the third patch and found some evident issues.
1) While using a tuplestore we get too much garbage from tuple conversion,
which is not cleared properly. I tried to fix it (see the first sketch
after this list), but then we come to the second problem.
2) By the time we receive tuples, libpq has already allocated memory for
the whole result set, so it can be too late to do anything about it. In
simple examples I can see libpq result sets that are several GB in size
(and likely are not limited by anything); see the second sketch after
this list.
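For 1), the kind of cleanup I have in mind is roughly the following
(a minimal sketch, not the actual patch code; the function name and the
per-row context are made up for illustration, only the memory context,
tuple conversion and tuplestore calls are existing backend APIs):

#include "postgres.h"
#include "access/tupconvert.h"
#include "utils/memutils.h"
#include "utils/tuplestore.h"

/*
 * Sketch: convert each incoming tuple in a short-lived context and reset
 * that context once the converted copy has been stored in the tuplestore,
 * so conversion garbage doesn't pile up for the whole scan.
 */
static void
store_converted_row(Tuplestorestate *tupstore, TupleConversionMap *map,
                    HeapTuple orig, MemoryContext per_row_ctx)
{
    MemoryContext oldctx = MemoryContextSwitchTo(per_row_ctx);
    HeapTuple     converted = execute_attr_map_tuple(orig, map);

    /*
     * tuplestore_puttuple() copies the tuple into the tuplestore's own
     * memory, so whatever is left in per_row_ctx is garbage afterwards.
     */
    tuplestore_puttuple(tupstore, converted);

    MemoryContextSwitchTo(oldctx);
    MemoryContextReset(per_row_ctx);
}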
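And to illustrate what I mean by "too late" in 2): with a plain
PQexec()/PQgetResult() the whole result set is already buffered inside
libpq before we get to look at a single row (a sketch only, not from the
patch; the query and names are placeholders):

#include <stdio.h>
#include <libpq-fe.h>

/*
 * Sketch: PQexec() returns only after libpq has received and buffered the
 * entire result set, so by the time we could count or limit anything, the
 * multi-GB allocation has already happened on the client side.
 */
static void
scan_remote(PGconn *conn)
{
    PGresult   *res = PQexec(conn, "SELECT s FROM t1");

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("received %d rows, all already in memory\n", PQntuples(res));

    PQclear(res);
}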
I've used the attached script to create some tables.
select create_partitioned_rel('t1', 128, true, 1);
select create_partitioned_rel('t2', 4, true, 100);
insert into t1 select i, pg_read_binary_file('/some/100mb/file') from
generate_series(1,128) i;
insert into t2 select i, pg_read_binary_file('/some/100mb/file') from
generate_series(1,128) i;
And with simple queries like
select sum(length(s)) from TABLE;
we can see that the backend can easily consume several GB of RAM.
For now I'm starting to think we need some form of FETCH that stops
fetching data once a batch size limit is reached...
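Something along the lines of the cursor loop postgres_fdw already uses
with its fetch_size option, roughly (a sketch only; error handling and
connection setup omitted, names are illustrative):

#include <libpq-fe.h>

/*
 * Sketch: pull rows in bounded batches through a cursor, so neither libpq
 * nor the backend ever holds more than one batch at a time.
 */
static void
scan_remote_batched(PGconn *conn)
{
    PGresult   *res;

    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE c CURSOR FOR SELECT s FROM t1"));

    for (;;)
    {
        /* at most 100 rows are buffered in libpq at any moment */
        res = PQexec(conn, "FETCH 100 FROM c");
        if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0)
        {
            PQclear(res);
            break;
        }
        /* process/convert this batch, then free it before the next one */
        PQclear(res);
    }

    PQclear(PQexec(conn, "CLOSE c"));
    PQclear(PQexec(conn, "COMMIT"));
}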
--
Best regards,
Alexander Pyhalov,
Postgres Professional