Thread: does max_connections affect the query planner
Hi,

We are trying to diagnose why postgres might be making poor decisions
regarding query plans. One theory is that it does not assume it has
the memory suggested in effective_cache_size.

We do know that max_connections is set quite high (600) when we don't
really expect more than 100. Does the planner take
max_connections x work_mem into account when considering the memory it
has potentially available?

Regards
Bob
Bob Jolliffe <bobjolliffe@gmail.com> writes:
> We do know that max_connections is set quite high (600) when we don't
> really expect more than 100. I wonder does the planner take
> max_connections x work_mem into account when considering the memory it
> has potentially available?

No. There have been discussions to the effect that it ought to have
a more holistic view of available memory, but nothing's been done
about that, and certainly no existing release does so.

Usually the proximate cause of bad plan choices is bad rowcount
estimates --- you can spot that by comparing estimated and actual
rowcounts in EXPLAIN ANALYZE results.

			regards, tom lane
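For anyone following along, a minimal sketch of what comparing estimated and actual rowcounts looks like (table name, predicate, and numbers here are purely illustrative, not from the original poster's setup):

```sql
-- Hypothetical example: compare the planner's row estimate to reality.
EXPLAIN ANALYZE SELECT * FROM orders WHERE status = 'pending';

-- In the output, compare the two rows= figures on each plan node, e.g.:
--   Seq Scan on orders  (cost=0.00..431.00 rows=5 width=97)
--                       (actual time=0.02..8.91 rows=52000 loops=1)
-- A large gap between estimated rows and actual rows suggests stale or
-- missing statistics; running ANALYZE on the table is a common first step:
ANALYZE orders;
```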
Thanks Tom. Will check that.

On Tue, 17 Sep 2019 at 14:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> [full quote snipped]
On Tue, Sep 17, 2019 at 4:41 AM Bob Jolliffe <bobjolliffe@gmail.com> wrote:
> Hi
>
> We are trying to diagnose why postgres might be making poor decisions
> regarding query plans. One theory is that it does not assume it has
> the memory suggested in effective_cache_size.
>
> We do know that max_connections is set quite high (600) when we don't
> really expect more than 100. I wonder does the planner take
> max_connections x work_mem into account when considering the memory it
> has potentially available?
No, it doesn't try to guess how many connections might be sharing effective_cache_size. It assumes the entire thing is available to any use at any given time.

But effective_cache_size only comes into play when a single query is going to access the same blocks over and over again: the planner estimates that a block will still be in cache on subsequent visits. It doesn't model blocks visited repeatedly across different queries, whether on the same connection or different ones. There is no notion that some objects might be hotter than others, other than within one query.
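To make this concrete, a quick way to inspect the settings in question (the values in the comments are illustrative, not from the original poster's server):

```sql
-- Check the relevant settings on your own server:
SHOW max_connections;        -- e.g. 600; not consulted by the planner
SHOW work_mem;               -- per sort/hash operation, per query node
SHOW effective_cache_size;   -- planner hint only; allocates no memory

-- effective_cache_size is neither multiplied nor divided by
-- max_connections: every query plans as if the whole cache
-- were available to it alone.
```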
Cheers,
Jeff