Re: Removing Functionally Dependent GROUP BY Columns - Mailing list pgsql-hackers
From: David Rowley
Subject: Re: Removing Functionally Dependent GROUP BY Columns
Date:
Msg-id: CAKJS1f8r3uBQYGVv4MfN-VzKvjmdYVpNiohNJaY0UqzXJ+i7yA@mail.gmail.com
In response to: Re: Removing Functionally Dependent GROUP BY Columns (Marko Tiikkaja <marko@joh.to>)
Responses: Re: Removing Functionally Dependent GROUP BY Columns
List: pgsql-hackers
On 1 December 2015 at 17:09, Marko Tiikkaja <marko@joh.to> wrote:
On 2015-12-01 05:00, David Rowley wrote:
We already allow a SELECT's target list to contain non-aggregated columns
in a GROUP BY query in cases where the non-aggregated column is
functionally dependent on the GROUP BY clause.
For example, a query such as:
SELECT p.product_id,p.description, SUM(s.quantity)
FROM product p
INNER JOIN sale s ON p.product_id = s.product_id
GROUP BY p.product_id;
is perfectly fine in PostgreSQL, as p.description is functionally dependent
on p.product_id (assuming product_id is the PRIMARY KEY of product).
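The patch being discussed works in the other direction: when the user writes the dependent column into the GROUP BY explicitly, the planner can prune it. A hypothetical illustration, using the tables from the example above:

```sql
-- A user may list the dependent column in GROUP BY explicitly:
SELECT p.product_id, p.description, SUM(s.quantity)
FROM product p
INNER JOIN sale s ON p.product_id = s.product_id
GROUP BY p.product_id, p.description;

-- With the patch, the planner could internally group on p.product_id
-- alone, since p.description is functionally dependent on the
-- PRIMARY KEY p.product_id; fewer grouping columns means cheaper
-- sorting/hashing at execution time.
```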
This has come up before (on other forums, at least), and my main concern has been that unlike the case where we go from throwing an error to allowing a query, this has a chance to make the planning of currently legal queries slower. Have you tried to measure the impact of this on queries where there's no runtime gains to be had?
I've performed a series of benchmarks on the following queries:
Test1: explain select id1,id2 from t1 group by id1,id2;
Test2: explain select id from t2 group by id;
Test3: explain select t1.id1,t1.id2 from t2 inner join t1 on t1.id1=t2.id group by t1.id1,t1.id2;
I ran each of these with pgbench for 60 seconds, 3 runs per query. In each case below I've converted the TPS into seconds using the average TPS over the 3 runs.
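As a sanity check on that conversion: the average per-query time is simply the reciprocal of the average TPS. A trivial sketch (the number below is illustrative, not one of the benchmark figures):

```python
# Average per-query time in seconds is the reciprocal of average TPS.
# The TPS value here is illustrative, not taken from the benchmark runs.
def tps_to_seconds(avg_tps):
    return 1.0 / avg_tps

# A workload sustaining 10,000 TPS averages 0.0001 s (100 us) per query.
print(tps_to_seconds(10000.0))  # 0.0001
```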
In summary:
Test1 is the worst case test. It's a very simple query, so the planning overhead of join searching is non-existent. The fact that there are 2 columns in the GROUP BY means that the fast path cannot be used. I added this test because, if there's only 1 column in the GROUP BY, there's no point in searching for something to remove.
Average (Sec)
Master 0.0001043117
Patched 0.0001118961
Performance 93.22%
Microseconds of planning overhead 7.5844326722
Test2 is a simple query with a GROUP BY which can fast path due to there being only 1 GROUP BY column.
Average (Sec)
Master 0.000099374448
Patched 0.000099670124
Performance 99.70%
Microseconds of planning overhead 0.2956763193
Test3:
Average (Sec)
Master 0.0001797165
Patched 0.0001798406
Performance 99.93%
Microseconds of planning overhead 0.1240776236
Test3's results seem a bit strange; I would have expected more of a slowdown. I ran the test again to make sure, and it came back with the same results the 2nd time.
I've attached the spreadsheet that I used to collect the results, and also the raw pgbench output.
It seems that the worst case test adds about 7.6 microseconds onto planning time. To get this worst-case result I had to add two GROUP BY columns, as having only 1 triggers a fast path: the code knows it can't remove any columns, since there's only 1. A similar fast path also exists which will only look up the PRIMARY KEY details if there's more than 1 column per relation in the GROUP BY, so for example GROUP BY rel1.col1, rel2.col1 won't look up any PRIMARY KEY constraint.
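The pruning rule and both fast paths described above can be sketched in a few lines of Python. This is a toy model, not the actual C implementation in the planner; all names and data structures here are invented for illustration:

```python
from collections import defaultdict

def remove_functionally_dependent(group_by, pk_by_rel):
    """Toy sketch of the GROUP BY pruning described above.

    group_by:  list of (relation, column) pairs, in GROUP BY order.
    pk_by_rel: dict mapping relation -> set of its PRIMARY KEY columns.
    Returns a reduced GROUP BY list.
    """
    # Fast path 1: with a single GROUP BY column there's nothing to remove.
    if len(group_by) <= 1:
        return group_by

    # Count the GROUP BY columns contributed by each relation.
    cols_per_rel = defaultdict(set)
    for rel, col in group_by:
        cols_per_rel[rel].add(col)

    keep = []
    for rel, col in group_by:
        pk = pk_by_rel.get(rel)
        # Fast path 2: only consult PRIMARY KEY details when a relation
        # contributes more than one GROUP BY column. Prune a non-key
        # column only when the relation's whole PK is being grouped on,
        # making the column functionally dependent on the remaining keys.
        if (pk and len(cols_per_rel[rel]) > 1
                and pk <= cols_per_rel[rel] and col not in pk):
            continue
        keep.append((rel, col))
    return keep

# GROUP BY p.product_id, p.description  ->  GROUP BY p.product_id
print(remove_functionally_dependent(
    [("product", "product_id"), ("product", "description")],
    {"product": {"product_id"}}))
# [('product', 'product_id')]
```

Note that, as with the fast paths in the patch, `GROUP BY rel1.col1, rel2.col1` is left untouched here: each relation contributes only one column, so no PK lookup happens.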
Given that the extra code really only does anything if the GROUP BY has 2 or more expressions, are you still worried that this will negatively affect too many short, fast-to-execute queries?
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services