Should work_mem be stable for a prepared statement? - Mailing list pgsql-hackers

From Jeff Davis
Subject Should work_mem be stable for a prepared statement?
Date
Msg-id 2a2b28e2ea677e3cec2850b2dd38b467bf1291fb.camel@j-davis.com
List pgsql-hackers
As a part of this discussion:

https://www.postgresql.org/message-id/CAJVSvF6s1LgXF6KB2Cz68sHzk%2Bv%2BO_vmwEkaon%3DH8O9VcOr-tQ%40mail.gmail.com

James pointed out something interesting, which is that a prepared
statement enforces the work_mem limit at execution time, which might be
different from the work_mem at the time the statement was prepared.

For instance:

   SET work_mem='1GB';
   PREPARE foo AS ...; -- plans using 1GB limit
   SET work_mem='1MB';
   EXECUTE foo; -- enforces 1MB limit
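One way to observe this in practice (a sketch, not from the original
message; the statement name and query are made up, and plan_cache_mode =
force_generic_plan is set so that EXECUTE reuses the cached generic plan
instead of replanning):

   SET plan_cache_mode = force_generic_plan;  -- reuse the generic plan

   SET work_mem = '1GB';
   PREPARE agg_demo AS
       SELECT g % 1000 AS k, count(*)
       FROM generate_series(1, 1000000) g
       GROUP BY k;
   EXPLAIN (ANALYZE) EXECUTE agg_demo;  -- plan built under the 1GB limit

   SET work_mem = '64kB';
   EXPLAIN (ANALYZE) EXECUTE agg_demo;  -- same cached plan, but execution
                                        -- now enforces 64kB and may spill

Comparing the two EXPLAIN (ANALYZE) outputs should show the same plan
shape both times, with the second run potentially spilling to disk.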

My first reaction is that it's not right because the costing for the
plan is completely bogus with a different work_mem. It would make more
sense to me if we either (a) enforced work_mem as it was at the time of
planning; or (b) replanned if executed with a different work_mem
(similar to how we replan sometimes with different parameters).

But I'm not sure whether someone might be relying on the existing
behavior?

If we were to implement (a) or (b), we couldn't use the work_mem global
directly; we'd need to save it in the plan and enforce using the plan's
saved value. But that might be a good change anyway. In theory we might
need to do something similar for hash_mem_multiplier, too.
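(As an aside, not from the original message: hash_mem_multiplier scales
the per-node memory limit for hash-based operations, so a saved
planning-time work_mem would be incomplete without the multiplier that
was in effect alongside it:

   SHOW hash_mem_multiplier;       -- 2 by default on recent releases
   SET hash_mem_multiplier = 4.0;  -- hash nodes may use 4 * work_mem

Changing it between PREPARE and EXECUTE has the same plan-vs-execution
mismatch as changing work_mem itself.)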

Regards,
    Jeff Davis
