Re: Query that took a lot of time in Postgresql when not using trim in order by - Mailing list pgsql-performance

From: Blas Pico
Subject: Re: Query that took a lot of time in Postgresql when not using trim in order by
Date:
Msg-id: CANL=RntC0u8H+QaD8WPkpnPPxNr0S0qe45fqkCo-O6XUd8yqYQ@mail.gmail.com
In response to: Re: Query that took a lot of time in Postgresql when not using trim in order by (Evgeniy Shishkin <itparanoia@gmail.com>)
List: pgsql-performance
My database version is 9.3, but I have tested with 9.4 too with the same result, and I have tried changing that parameter without success.
What I want to know is: what does the trim have to do with the different query plans?
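For context, the two variants presumably differ only in whether trim() appears in the ORDER BY. The actual query is not shown in this thread; the following is a hypothetical reconstruction using the table and column names visible in the EXPLAIN output below (the join condition is not shown in the plans and is left as a placeholder):

```sql
-- Hypothetical reconstruction; only the ORDER BY differs between the two.

-- Fast variant: trim() in the ORDER BY
SELECT DISTINCT c.tipocliente, c.a1_ibge, c.a1_cod, c.a1_nome, v.a3_nome
FROM dim_cliente c
JOIN dim_vendedor v ON ...  -- join condition not visible in the plans
ORDER BY trim(c.tipocliente), c.a1_ibge, c.a1_cod, c.a1_nome;

-- Slow variant: same query without trim()
SELECT DISTINCT c.tipocliente, c.a1_ibge, c.a1_cod, c.a1_nome, v.a3_nome
FROM dim_cliente c
JOIN dim_vendedor v ON ...
ORDER BY c.tipocliente, c.a1_ibge, c.a1_cod, c.a1_nome;
```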

2015-11-25 13:35 GMT-03:00 Evgeniy Shishkin <itparanoia@gmail.com>:
> What is your Postgres version?
> Do you have correct statistics on this tables?
> Please show your execution plans with buffers, i.e. explain (analyze, buffers) ...
>


Fast:

 Sort  (cost=193101.41..195369.80 rows=907357 width=129) (actual time=3828.176..3831.261 rows=43615 loops=1)
   Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome, (btrim((dim_cliente.tipocliente)::text))
   Sort Key: (btrim((dim_cliente.tipocliente)::text)), dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome
   Sort Method: quicksort  Memory: 13121kB
   ->  HashAggregate  (cost=91970.52..103312.49 rows=907357 width=129) (actual time=2462.690..2496.729 rows=43615 loops=1)
         Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome, btrim((dim_cliente.tipocliente)::text)
         ->  Hash Join  (cost=856.30..80628.56 rows=907357 width=129) (actual time=29.524..1533.880 rows=907357 loops=1)


Slow:

 Group  (cost=170417.48..184027.84 rows=907357 width=129) (actual time=36649.329..37235.158 rows=43615 loops=1)
   Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome
   ->  Sort  (cost=170417.48..172685.88 rows=907357 width=129) (actual time=36649.315..36786.760 rows=907357 loops=1)
         Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome
         Sort Key: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome
         Sort Method: quicksort  Memory: 265592kB
         ->  Hash Join  (cost=856.30..80628.56 rows=907357 width=129) (actual time=26.719..1593.693 rows=907357 loops=1)


The difference is at the top of the plans.
As we can see, the hash join time is practically the same.
But the fast plan hash-aggregates first, so only 43k rows need sorting.
The slow plan is dominated by sorting 900k rows.

I wonder if increasing cpu_tuple_cost will help,
as the cost difference between the two plans is negligible now.
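The cpu_tuple_cost experiment can be tried per-session; a minimal sketch (the 0.05 value is illustrative only, not a recommendation — the default is 0.01):

```sql
-- Raise the per-row CPU cost so the plan that sorts all 907k rows
-- costs noticeably more than the plan that sorts only 43k.
SET cpu_tuple_cost = 0.05;  -- illustrative value; the default is 0.01

-- Re-run the query and check whether the planner now picks
-- the HashAggregate plan:
-- EXPLAIN (ANALYZE, BUFFERS) SELECT ...;

RESET cpu_tuple_cost;
```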
