On Mon, Sep 23, 2024 at 1:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:
> 432 bytes
> Oh, as Tomas pointed out in the referenced thread,
Thanks for working on it and for the detailed explanation. I tested setting max_parallel_workers_per_gather = 0 as suggested in the original thread and it works. We are putting that workaround into the application for our largest customers: set it to 0 before the query, then back to 2 afterwards.
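Roughly what the application does around the affected query (just a sketch; the actual query is omitted and the surrounding statements are ours, not anything from your patch):

    SET max_parallel_workers_per_gather = 0;  -- disable parallel workers for the problematic query

    -- ... run the affected query here ...

    SET max_parallel_workers_per_gather = 2;  -- restore our usual setting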
Your explanation also shows why rewriting the query works: I reduced the number of rows being processed much earlier in the query. The original query was one large set of joins that operated on millions of rows before reducing them to a handful. I broke it into a materialized CTE that forces Postgres to reduce the rows first and only then perform the joins. Rewriting the query is better regardless of this issue.
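Schematically, the rewrite looks something like this (table and column names are placeholders, not our real schema):

    WITH reduced AS MATERIALIZED (
        -- cut the millions of rows down to a handful before any joins
        SELECT t.id
        FROM big_table t
        WHERE t.some_filter = 'x'
    )
    SELECT r.id, j1.val, j2.val
    FROM reduced r
    JOIN join_table_1 j1 ON j1.id = r.id
    JOIN join_table_2 j2 ON j2.id = r.id;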
I am working on getting a stock Postgres build into our production protected enclave alongside our production database. That is probably a full day of work I need to splice in; we have a similar mechanism in our development environment. Once it is working I can help test and debug any changes. I can also work on a reproducible example.