Re: Performance of a Query - Mailing list pgsql-performance

From Scott Marlowe
Subject Re: Performance of a Query
Date
Msg-id CAOR=d=0cPhUjHhX4wRtBGX6Fu3NRKfMDi4YdTs1nh6DVKeE+Xw@mail.gmail.com
In response to Performance of a Query  ("Kumar, Virendra" <Virendra.Kumar@guycarp.com>)
List pgsql-performance
On Tue, Jan 9, 2018 at 2:18 PM, Kumar, Virendra
<Virendra.Kumar@guycarp.com> wrote:
> Hello Gurus,
>
> I am struggling to tune a query which is doing join on top of aggregate for
> around 3 million rows. The plan and SQL is attached to the email.
>
> Below is system Details:
>
> PGSQL version – 10.1
>
> OS – RHEL 3.10.0-693.5.2.el7.x86_64
>
> Binary – Downloaded from postgres.org, compiled and installed.
>
> Hardware – Virtual Machine with 8vCPU and 32GB of RAM, on XFS filesystem.

I uploaded your query plan here: https://explain.depesz.com/s/14r6

The most expensive part is the merge join at the end.

Lines like this one: "Buffers: shared hit=676 read=306596, temp
read=135840 written=135972"

tell me that your sorts etc. are spilling to disk, so the first thing
to try is upping work_mem a bit. Don't go crazy, as it can run your
machine out of memory if you do, but doubling or tripling it and
seeing the effect on query performance is a good place to start.
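
A session-level sketch of that experiment (the 256MB value is just a
placeholder, not a recommendation — scale up from whatever your current
setting is):

```sql
-- Check the current setting first (the stock default is only 4MB).
SHOW work_mem;

-- Raise it for this session only, so nothing else on the box is affected.
SET work_mem = '256MB';

-- Re-run the query under EXPLAIN ANALYZE and look for two things:
-- the temp read/written buffer counts disappearing, and the join
-- strategy possibly changing.
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- your original query here

-- Back to the configured default when done experimenting.
RESET work_mem;
```

Keep in mind work_mem is allocated per sort/hash operation per backend,
so a session-level SET for the one problem query is much safer than
raising it globally.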

The good news is that most of your row estimates are about right, so
the query planner is doing what it can to make the query fast, but I'm
guessing that if you get work_mem high enough it will switch from a
merge join to a hash join or something more efficient for large
numbers of rows.

