Thread: Useless LEFT JOIN breaks MIN/MAX optimization
Hi hackers!

My colleague gave me an interesting case related to the min/max optimization. Adding a useless left join to a "select min(...) from t" query breaks the min/max read optimization from the index. What is meant is shown in the example below:

drop table if exists t1;
drop table if exists t2;

create table t1 (id int not null, mod text);
insert into t1 select id, (id % 10)::text from generate_series(1,100000) id;
create unique index on t1(id);
create index on t1(mod);

This is the best plan for this query, since we only need the single minimum value from the index. And it works perfectly:

explain select min(mod) from t1;
                                            QUERY PLAN
------------------------------------------------------------------------------------------------
 Result  (cost=0.33..0.34 rows=1 width=32)
   InitPlan 1 (returns $0)
     ->  Limit  (cost=0.29..0.33 rows=1 width=32)
           ->  Index Only Scan using t1_mod_idx on t1  (cost=0.29..3861.54 rows=99500 width=32)
                 Index Cond: (mod IS NOT NULL)
(5 rows)

create table t2 (id int not null);
insert into t2 select id from generate_series(1,100000) id;
create unique index on t2(id);

But if we add a join, we fall back to a seq scan with no alternative:

explain select min(t1.mod) from t1 left join t2 on t1.id = t2.id;
                            QUERY PLAN
-----------------------------------------------------------------
 Aggregate  (cost=1693.00..1693.01 rows=1 width=32)
   ->  Seq Scan on t1  (cost=0.00..1443.00 rows=100000 width=32)

I have implemented a patch that solves this problem by allowing join expressions to be considered for this optimization as well. I would be glad to get feedback and a review!

--
Regards,
Alena Rybakina
Postgres Professional
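For reference, join removal itself does treat this left join as useless once the aggregate is out of the picture: t2.id is unique and no t2 columns are referenced, so the plain form of the query drops t2 from the plan. A quick check on the same test tables (the expected plan shape is illustrative, not copied from a run):

explain select t1.mod from t1 left join t2 on t1.id = t2.id;
-- expected: a plain Seq Scan on t1, with t2 gone from the plan entirely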
Robert Haas <robertmhaas@gmail.com> writes:
> But that's also assuming that you're correct here about how to descend
> through a JoinExpr, which I'm not quite sure whether is true. It's
> also assuming that we should solve the problem here rather than in
> some other part of the code e.g. the join removal code, and I'm not
> sure about that either.

The actual problem here is that remove_useless_joins hasn't run yet.
It's called inside query_planner which happens only after we do
preprocess_minmax_aggregates. So I think this patch is a dead end.
It's not possible for it to correctly predict whether
remove_useless_joins will remove the join, short of repeating all that
work which we surely don't want. (I'm a bit surprised that it hasn't
visibly broken existing test cases.)

It might be possible to move preprocess_minmax_aggregates to happen
after join removal, but I fear it'd require some pretty fundamental
rethinking of how it generates indexscan paths --- recursively calling
query_planner seems dubious. (But maybe that'd work? The modified
query should no longer contain aggs, so we wouldn't recurse again.)

preprocess_minmax_aggregates is pretty much of a hack anyway. If you
read the comments, it's just full of weird stuff that it has to
duplicate from other places, or things that magically work because the
relevant stuff isn't possible in this query, etc. Maybe it's time to
think about nuking it from orbit and doing a fresh implementation in
some other place that's a better fit. I have no immediate ideas about
what that should look like, other than it'd be better if it happened
after join removal.

			regards, tom lane
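As an illustration of why the prediction is hard (my own sketch on top of the test tables upthread, not something taken from the patch): whether the join can be removed depends on the very analysis remove_useless_joins performs, e.g. that t2.id is provably unique and that nothing above the join references t2. A small variant where that fails:

explain select min(t1.mod) from t1 left join t2 on t1.id = t2.id where t2.id is null;
-- t2 is referenced above the join, so join removal cannot drop it here,
-- and fetching the first non-NULL mod from t1_mod_idx would return '0'
-- even though the correct answer is NULL (every t1.id has a match in t2).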
On 12.05.2025 14:05, Tom Lane wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> But that's also assuming that you're correct here about how to descend
>> through a JoinExpr, which I'm not quite sure whether is true. It's
>> also assuming that we should solve the problem here rather than in
>> some other part of the code e.g. the join removal code, and I'm not
>> sure about that either.
>
> The actual problem here is that remove_useless_joins hasn't run yet.
> It's called inside query_planner which happens only after we do
> preprocess_minmax_aggregates. So I think this patch is a dead end.
> It's not possible for it to correctly predict whether
> remove_useless_joins will remove the join, short of repeating all that
> work which we surely don't want. (I'm a bit surprised that it hasn't
> visibly broken existing test cases.)
To be honest, I was not completely sure about my approach at first and had no idea how to do it differently, so I submitted a request for "Advanced session feedback" to have this patch considered.
I considered another approach using late optimization and ran into a problem where the planner could not find a partitioned table.

> It might be possible to move preprocess_minmax_aggregates to happen
> after join removal, but I fear it'd require some pretty fundamental
> rethinking of how it generates indexscan paths --- recursively calling
> query_planner seems dubious. (But maybe that'd work? The modified
> query should no longer contain aggs, so we wouldn't recurse again.)
It was a long time ago, to be frank, but the problem there was that the planner stored this information at a higher level. I can try to finish this.
I attached a diff just in case.
> preprocess_minmax_aggregates is pretty much of a hack anyway. If you
> read the comments, it's just full of weird stuff that it has to
> duplicate from other places, or things that magically work because the
> relevant stuff isn't possible in this query, etc. Maybe it's time to
> think about nuking it from orbit and doing a fresh implementation in
> some other place that's a better fit. I have no immediate ideas about
> what that should look like, other than it'd be better if it happened
> after join removal.
I hadn't considered this; I'll think about it.
Thanks for the feedback!
Regards,
Alena Rybakina
Postgres Professional