Re: Query much slower when run from postgres function - Mailing list pgsql-performance

From decibel
Subject Re: Query much slower when run from postgres function
Date
Msg-id E6757F02-1F73-4DDE-A5D4-0606CC517B1C@decibel.org
In response to Re: Query much slower when run from postgres function  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Query much slower when run from postgres function
List pgsql-performance
On Mar 10, 2009, at 12:20 PM, Tom Lane wrote:
> fche@redhat.com (Frank Ch. Eigler) writes:
>> For a prepared statement, could the planner produce *several* plans,
>> if it guesses great sensitivity to the parameter values?  Then it
>> could choose amongst them at run time.
>
> We've discussed that in the past.  "Choose at runtime" is a bit more
> easily said than done though --- you can't readily flip between plan
> choices part way through, if you've already emitted some result rows.

True, but what if we planned for both high and low cardinality cases,
assuming that pg_stats indicated both were a possibility? We would
have to store multiple plans for one prepared statement, which
wouldn't work well for more complex queries (if you did high and low
cardinality estimates for each table you'd end up with 2^r plans,
where r is the number of relations), so we'd need a way to cap it
somehow. Of course, whether that's easier than having the ability to
throw out a current result set and start over with a different plan
is up for debate...
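The parameter sensitivity being discussed is easy to demonstrate. As a sketch (the `orders` table and `customer_id` column here are hypothetical), a single cached plan must commit to one strategy even though the best choice depends on the bound value:

```sql
-- Hypothetical table: a few customers account for most rows.
PREPARE q(int) AS
    SELECT * FROM orders WHERE customer_id = $1;

-- For a rare customer_id an index scan wins; for a very common one a
-- sequential scan wins.  The cached generic plan can only pick one:
EXPLAIN EXECUTE q(42);
```

Planning both a "rare value" and a "common value" variant per parameterized relation is exactly what produces the 2^r blow-up described above.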

On a related note, I wish there were a way to tell plpgsql not to
pre-plan a query. Sure, you can use EXECUTE, but building the query
string by hand is a serious pain in the rear.
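For reference, the EXECUTE workaround looks something like the following sketch (the function, table, and column names are made up for illustration; `EXECUTE ... USING` requires a sufficiently recent plpgsql):

```sql
-- Sketch: forcing per-call planning in plpgsql.  EXECUTE plans the
-- statement on every call instead of caching a generic plan, at the
-- cost of assembling the query text by hand.
CREATE OR REPLACE FUNCTION orders_for(cust int)
RETURNS SETOF orders AS $$
BEGIN
    RETURN QUERY EXECUTE
        'SELECT * FROM orders WHERE customer_id = $1'
        USING cust;
END;
$$ LANGUAGE plpgsql;
```

For anything with optional filters or dynamic table names, concatenating and quoting the query text safely is where the real pain lies.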
--
Decibel!, aka Jim C. Nasby, Database Architect  decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828


