Re: pg 9.3 exists subselect with limit brakes query plan - Mailing list pgsql-bugs

From Tom Lane
Subject Re: pg 9.3 exists subselect with limit brakes query plan
Msg-id 14244.1395105508@sss.pgh.pa.us
In response to pg 9.3 exists subselect with limit brakes query plan  (Kószó József <k.joe@freemail.hu>)
List pgsql-bugs
Kószó József <k.joe@freemail.hu> writes:
> We plan to upgrade from PostgreSQL 8.3 to 9.3 and experience some
> planner-related problems in our applications with subselects using
> EXISTS and LIMIT keywords.

I'd drop the LIMIT clauses if I were you.  There once were PG versions
that were too dumb to know that an EXISTS only fetches one row, but that
was a long time ago.  It's unlikely that LIMIT will make things better
except by accident.
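(For illustration only — the original queries aren't shown here, so this guesses at their shape from the `bid = b.bid` condition in the plans below:

```sql
-- the LIMIT buys nothing: EXISTS stops after the first row anyway
SELECT * FROM pgbench_branches b
WHERE EXISTS (SELECT 1 FROM pgbench_accounts a
              WHERE a.bid = b.bid LIMIT 1);

-- just write it without the LIMIT
SELECT * FROM pgbench_branches b
WHERE EXISTS (SELECT 1 FROM pgbench_accounts a
              WHERE a.bid = b.bid);
```
)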

The immediate issue here seems to be a variant of the LIMIT-is-hard-to-
predict theme.  On my machine, the subplan for the exists looks like
    ->  Limit  (cost=0.00..0.29 rows=1 width=4) (actual time=63.037..63.037 rows=1 loops=10)
          ->  Seq Scan on pgbench_accounts a  (cost=0.00..28894.00 rows=100000 width=4) (actual time=63.034..63.034 rows=1 loops=10)
                Filter: (bid = b.bid)
                Rows Removed by Filter: 360115

while if I set enable_seqscan = off I get
    ->  Limit  (cost=0.42..0.45 rows=1 width=4) (actual time=0.027..0.027 rows=1 loops=10)
          ->  Index Only Scan using pgbench_accounts_bid on pgbench_accounts a  (cost=0.42..2850.43 rows=100000 width=4) (actual time=0.026..0.026 rows=1 loops=10)
                Index Cond: (bid = b.bid)
                Heap Fetches: 0

So the seqscan+limit has a marginally lower predicted cost and gets
chosen.  But that predicted cost assumes that the bid values are uniformly
distributed in the table (implying that the seqscan only has to visit
maybe half a dozen rows to find a match).  In this rather artificial test
case, they're exactly sequential, so that for larger bid values, a lot of
rows have to be traversed.
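You can see the physical layout the planner is blind to by looking at the stats it keeps (a sketch; column names are from the standard pg_stats view):

```sql
-- correlation near 1.0 means bid values sit in table order on disk,
-- so a seqscan hunting for a large bid wades through most of the table
SELECT attname, n_distinct, correlation
FROM pg_stats
WHERE tablename = 'pgbench_accounts' AND attname = 'bid';
```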

There's been talk of incorporating some kind of risk assessment in plan
costing, which might be able to identify this type of problem and avoid
the unstable plan.  But we don't have it yet.

Another solution you might consider for this particular problem is to
reduce random_page_cost a bit to make the indexscan look cheaper.  But I
don't know if that would help for whatever your production problem is.
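Something like this, tried per-session before touching postgresql.conf (1.1 is just a common choice for mostly-cached data, not a recommendation for your workload):

```sql
SET random_page_cost = 1.1;   -- default is 4.0; lower makes index scans look cheaper
-- then re-run EXPLAIN ANALYZE on the problem query to see if the plan flips
```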
        regards, tom lane


