Re: Planner selects different execution plans depending on limit - Mailing list pgsql-performance

From Jesper Krogh
Subject Re: Planner selects different execution plans depending on limit
Date
Msg-id 504E2EFE.7060000@krogh.cc
In response to Planner selects different execution plans depending on limit  (bill_martin@freenet.de)
List pgsql-performance
On 10/09/12 16:24, bill_martin@freenet.de wrote:

Hi All

I've created the following index:

CREATE INDEX ft_simple_core_content_content_idx
  ON core_content
  USING gin
  (to_tsvector('simple'::regconfig, content));

 
If I'm searching for a word which is NOT in the column content, the query plan and the execution time differ with the given limit.
If I choose 3927 or any higher number, the query execution takes only a few milliseconds.

select * from core_content content where
  to_tsvector('simple', content.content) @@ tsquery(plainto_tsquery('simple', 'asdasdadas')::varchar || ':*') = true
limit 3927

"Limit  (cost=0.00..19302.23 rows=3926 width=621) (actual time=52147.149..52147.149 rows=0 loops=1)"
"  ->  Seq Scan on core_content content  (cost=0.00..98384.34 rows=20011 width=621) (actual time=52147.147..52147.147 rows=0 loops=1)"
"        Filter: (to_tsvector('simple'::regconfig, content) @@ '''asdasdadas'':*'::tsquery)"
"Total runtime: 52147.173 ms"

Is there any possibility to improve the performance even if the limit is only 10? Is it possible to make the query optimizer use the fast bitmap heap scan instead of the slow seq scan?


The big hammer is "set enable_seqscan = off", but if you tell us which PG version you're on there may be something else to do. I suggest you start by bumping the statistics target for the column to 10000 and running ANALYZE to see what that changes.
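
Concretely, something along these lines (untested; the table and column names are taken from the quoted query, and 10000 is simply the maximum statistics target):

-- Raise the per-column statistics target and re-analyze, so the planner
-- gets a better row estimate for the tsvector match:
ALTER TABLE core_content ALTER COLUMN content SET STATISTICS 10000;
ANALYZE core_content;

-- The big hammer, scoped to one transaction so other queries are unaffected:
BEGIN;
SET LOCAL enable_seqscan = off;
SELECT * FROM core_content content
WHERE to_tsvector('simple', content.content)
      @@ tsquery(plainto_tsquery('simple', 'asdasdadas')::varchar || ':*')
LIMIT 10;
COMMIT;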

--
Jesper
