Thread: BUG #4264: Optimizer fails to use hash_aggregate when appropriate.
The following bug has been logged online:

Bug reference:      4264
Logged by:          Scott Carey
Email address:      scott@richrelevance.com
PostgreSQL version: 8.3.3
Operating system:   Linux (CentOS)
Description:        Optimizer fails to use hash_aggregate when appropriate.
Details:

The query optimizer fails to use a hash aggregate most of the time. This is
inconsistent behavior. On one particular table it is especially painful. The
table has 24 million rows, and when aggregating on a column for which the
optimizer expects only a few unique values, it chooses a full sort of those
24 million rows before a group aggregate, rather than using a hash aggregate
that would be 2 to 3 orders of magnitude faster and use less memory.

The simplest statement of this bug is the following EXPLAIN output and the
corresponding output from the statistics tables. The actual query used has a
more complicated GROUP BY and aggregation (and joins, etc). The condition
occurs for any column used to group by, even one that has only two unique
values in a 25 million row table.

rr=# explain SELECT count(distinct v_guid) as view_count, p_type
     FROM p_log.creative_display_logs_012_2008_06_15 GROUP BY p_type;
                                              QUERY PLAN
--------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=5201495.80..5395385.38 rows=7 width=47)
   ->  Sort  (cost=5201495.80..5266125.63 rows=25851932 width=47)
         Sort Key: p_type
         ->  Seq Scan on creative_display_logs_012_2008_06_15  (cost=0.00..1223383.32 rows=25851932 width=47)

rr=# select attname, null_frac, avg_width, n_distinct, correlation
     from pg_stats
     where tablename='creative_display_logs_012_2008_06_15'
       and attname in ('g_id', 'p_type', 'strat', 'datetime', 'ext_s_id', 't_id');
    attname     | null_frac | avg_width | n_distinct |  correlation
----------------+-----------+-----------+------------+--------------
 g_id           |         0 |         8 |         14 |     0.221548
 p_type         |         0 |         4 |          7 |     0.350718
 datetime       |         0 |         8 |      12584 |     0.977156
 ext_s_id       |     0.001 |        38 |      11444 | -0.000842848
 strat          |         0 |        13 |         11 |     0.147418
 t_id           |         0 |         8 |          2 |     0.998711
(6 rows)

I have dumped, dropped, and restored this table twice recently, both times
followed by a full vacuum analyze, and in both cases the query optimizer
behaves differently. Additionally, this table is a partition table, and
using the inheritance facade instead of the table produces consistently
worse plans -- at first the direct-to-table query used the hash aggregate
but not the one through inheritance, and I thought this was a partitioning
bug. But it definitely occurs in general; its reproducibility is affected by
partitioning but not dependent on it.

The database is tuned with the default optimizer settings for 8.3.3, plus
constraint exclusion for the partition tables. Yes, hash_agg is ON. It
happens sometimes on some tables. The configuration has ample RAM and all
the memory tuning parameters are generous (shared_mem 7g, temp space 200m,
sort/agg space 500m -- I've tried various settings here with no effect on
the plan, just on the execution of it w.r.t. disk-based sort or memory-based
sort).
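As a sanity check, the settings in question can be confirmed and the plan
re-tested with sorts penalized. This is a minimal sketch using standard
PostgreSQL GUCs (enable_hashagg, enable_sort, work_mem); the table name is
the one from the plan above:

    SHOW enable_hashagg;          -- "on", per the report
    SHOW work_mem;                -- the "sort/agg space" referred to above

    BEGIN;
    SET LOCAL enable_sort = off;  -- cost experiment only, never for production
    EXPLAIN
    SELECT count(distinct v_guid) AS view_count, p_type
    FROM p_log.creative_display_logs_012_2008_06_15
    GROUP BY p_type;
    ROLLBACK;

If the sort is genuinely forced (as it turns out to be for DISTINCT
aggregates; see below), it will still appear in the plan, just with a hugely
inflated cost.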
The table definition is:

       Column       |            Type             | Modifiers
--------------------+-----------------------------+-----------
 v_guid             | character varying(255)     |
 site_id            | bigint                      |
 c_id               | bigint                      |
 item_id            | bigint                      |
 creative_id        | bigint                      |
 camp_id            | bigint                      |
 p_type             | integer                     |
 datetime           | timestamp without time zone |
 date               | date                        |
 ext_u_id           | character varying(50)       |
 ext_s_id           | character varying(50)       |
 u_guid             | character varying(50)       |
 strat              | character varying(50)       |
 sub_p_type         | character varying(32)       |
 exp_id             | bigint                      |
 t_id               | bigint                      |
 htmlpi_id          | bigint                      |
 p_score            | double precision            |

Of course DB hints would solve this. So would some sort of tuning parameter
that lets you dial up or down the tendency to do a hash aggregate rather
than a full sort followed by a group aggregate.

This is broken rather severely, especially in combination with partitions
(where it is about 3x as likely to fail to use a hash_aggregate where
appropriate, in limited experiments so far -- there are a few thousand
partition tables). All I want is for it to stop being brain-dead and sorting
large tables to produce aggregates. In fact, given how rarely a sort is
preferred over a hash_agg with large tables, I'd turn off the group
aggregate if I could!

I have yet to look for a work-around by changing the statistics targets for
the table, but I consider this a bug because even with the default sample
size, the statistics given should PLAINLY lead to use of a hash_aggregate
rather than a full sort followed by a group aggregate. The optimizer clearly
expects a small number of unique buckets after aggregation, and the number
of unique items in the result would have to approach the number of rows in
the table for a full sort to make any sense whatsoever.

Thanks!
-Scott
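For reference, the statistics-target workaround mentioned above would look
like the following sketch (standard PostgreSQL syntax; the target of 1000 is
an arbitrary illustrative value, and whether it changes this particular plan
is untested):

    ALTER TABLE p_log.creative_display_logs_012_2008_06_15
      ALTER COLUMN p_type SET STATISTICS 1000;
    ANALYZE p_log.creative_display_logs_012_2008_06_15;

    -- re-check what the planner now believes about the column
    SELECT attname, n_distinct, correlation
    FROM pg_stats
    WHERE tablename = 'creative_display_logs_012_2008_06_15'
      AND attname = 'p_type';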
"Scott Carey" <scott@richrelevance.com> writes: > The query optimizer fails to use a hash aggregate most of the time. This is > an inconsistent behavior. Hash aggregation is not used when a DISTINCT aggregate is specified. DISTINCT forces a sort, so there wouldn't be any advantage. > Of course DB hints would solve this. No, they wouldn't. regards, tom lane
Tom,

Thank you for the quick reply. This information was very useful. I
apologize; I was not aware of the internal limitations of the hash aggregate
implementation and the DISTINCT keyword. I also seem to have
mischaracterized this issue in my attempt to reduce the problem to one less
specific than my original encounter. One half of this may yet be a bug (or
enhancement request).

First, the original issue that got me on this track. I'll keep it to the
point, since it is likely already known (though I could not find it by
searching the archives):

2. The query planner does not estimate the number of returned rows
appropriately when table inheritance / partitioning is involved, leading to
poor choices for aggregation. Here are two explains that demonstrate this.
The parent table has 0 rows, estimated as 1 row; ideally it should not
affect the query plan.

rr=> explain SELECT g_id, p_type, strat from log.creative_display_logs
     WHERE date='2008-06-15' and s_id=12 GROUP BY g_id, p_type, strat;
                                                               QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------
 Group  (cost=6488615.49..6747134.82 rows=2585194 width=130)
   ->  Sort  (cost=6488615.49..6553245.32 rows=25851933 width=130)
         Sort Key: log.creative_display_logs.g_id, log.creative_display_logs.p_type, log.creative_display_logs.strat
         ->  Append  (cost=0.00..1450170.88 rows=25851933 width=130)
               ->  Seq Scan on creative_display_logs  (cost=0.00..10.90 rows=1 width=130)
                     Filter: ((date = '2008-06-15'::date) AND (s_id = 12))
               ->  Seq Scan on creative_display_logs_12_2008_6_15 creative_display_logs  (cost=0.00..1450159.98 rows=25851932 width=28)
                     Filter: ((date = '2008-06-15'::date) AND (s_id = 12))
(8 rows)

rr=> explain SELECT g_id, p_type, strat
     from p_log.creative_display_logs_12_2008_6_15
     GROUP BY g_id, p_type, strat;
                                               QUERY PLAN
------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=1514789.81..1514801.57 rows=1176 width=28)
   ->  Seq Scan on creative_display_logs_12_2008_6_15  (cost=0.00..1320900.32 rows=25851932 width=28)
(2 rows)

As you can see, something about the inheritance changes the expected number
of rows dramatically and affects the plan inappropriately. The addition of
one scan that returns 0 or 1 rows cannot increase the actual output by a
couple million.

Next: thanks to your insight, I ran some tests and demonstrated the vast
difference in query plans resulting from the two queries (on a large table,
on a column with few values) of the form:

SELECT column FROM table GROUP BY column;   (uses hash aggregate)
SELECT DISTINCT column FROM table;          (full sort, then aggregate)

Thus, a query of the form:

SELECT count(distinct v_guid) as v_count, p_type FROM table GROUP BY p_type;

can be rewritten as below to work around the query planner limitation and
coax out a more efficient plan. (Thanks again for highlighting exactly where
this limitation is.)
explain SELECT count(temp.v_guid) as v_count, temp.p_type
from (SELECT v_guid, p_type FROM table GROUP BY v_guid, p_type) as temp
group by p_type;
                                                  QUERY PLAN
------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=1451905.60..1451908.10 rows=200 width=520)
   ->  HashAggregate  (cost=1450159.98..1450858.23 rows=69825 width=50)
         ->  Seq Scan on creative_display_logs_12_2008_6_15  (cost=0.00..1320900.32 rows=25851932 width=50)

This obfuscates the original intention of the query and is not an ideal
solution, but given the limitations of the planner, and at 20% of the query
runtime, it is a pragmatic necessity. If the sort implied in the DISTINCT
were actually required, it could be applied separately and still
significantly improve the query performance.

> DISTINCT forces a sort, so there wouldn't be any advantage.

I contend that this is false. Example: hash-aggregate from 25 million rows
down to 10,000 unique rows, then sort the result. This is faster than
sorting 25 million records and then group-aggregating, by about a factor of
5 on the hardware I am using. In fact, it is faster even if the aggregate
only reduces the intermediate result by about a factor of 10 (for 25 million
rows of the size I'm using -- more rows makes it even more favorable). I
contend that

SELECT DISTINCT column FROM table;
and
SELECT column FROM table GROUP BY column ORDER BY column;

should consider such plans. Currently, both fail to produce optimal plans on
large datasets when the aggregation reduces the result size enough.
Additionally, the implied sort can be skipped entirely when the columns in a
DISTINCT are fed to an aggregate, as in the first example, since aggregates
are insensitive to input order.

Lastly, a more sophisticated hash aggregate could be multi-tiered and apply
to my first example: hashing by the GROUP BY clause and then sub-hashing the
column's distinct values. No intermediate table would be required in that
case, unlike my query butchering above. This, combined with the other query
plan options, would allow for clear, natural-looking queries without hints
or hacking the query into uglier and less flexible forms as I am required to
do above (both are mere work-arounds, but work-arounds are necessary
stopgaps).

Thanks for the clarification on the limitations of the query planner, Tom,
and the reminder that DISTINCT forces a sort -- I forgot all about that.

-Scott Carey

On Wed, Jun 25, 2008 at 1:55 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Scott Carey" <scott@richrelevance.com> writes:
> > The query optimizer fails to use a hash aggregate most of the time.
> > This is an inconsistent behavior.
>
> Hash aggregation is not used when a DISTINCT aggregate is specified.
> DISTINCT forces a sort, so there wouldn't be any advantage.
>
> > Of course DB hints would solve this.
>
> No, they wouldn't.
>
>			regards, tom lane
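The rewrite generalizes to any DISTINCT-aggregate query. With hypothetical
table and column names, the pattern is (a sketch, not a guaranteed win -- it
pays off when the inner GROUP BY collapses the row count substantially):

    -- original, which 8.3 will plan with a full sort:
    --   SELECT count(distinct d_col) AS d_count, g_col
    --   FROM big_table GROUP BY g_col;

    -- rewritten: pre-aggregate the (d_col, g_col) pairs so the planner may
    -- hash, then count the already-distinct rows per group
    SELECT count(t.d_col) AS d_count, t.g_col
    FROM (SELECT d_col, g_col
          FROM big_table
          GROUP BY d_col, g_col) AS t
    GROUP BY t.g_col;

The two forms agree even when d_col contains NULLs: the inner GROUP BY keeps
one NULL row per group, and count() skips it, just as count(distinct) does.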
"Scott Carey" <scott@richrelevance.com> writes: > 2. The query planner does not estimate the number of returned rows > appropriately when table inheritance / partitioning is involved, leading to > poor choices for aggregation. Here are two explains that demonstrate this. > The parent table has 0 rows, estimated as 1 row. Ideally it should not > affect the query plan. One extra row is certainly not going to affect that plan materially. It looks to me like the problem is the row *width* estimate --- the parent table is getting what is probably a default estimate, and IIRC the append relation just takes on the widest of the input widths. We could probably do something a bit smarter there; maybe weight the widths according to number of rows in each appended table? The reason this matters is presumably that with the overly wide width, the planner thinks the hash table wouldn't fit in work_mem. > Thanks to your insight, ran some tests and demonstrated the vast difference > in query plans resulting from the two queries (on a large table , on a > column with few values) of the form: > SELECT column from TABLE GROUP BY column; (uses hash_aggregate) > SELECT DISTINCT column from TABLE; (full sort, then aggregate) Yeah, the SELECT DISTINCT code is old and crufty and tightly intertwined with ORDER BY, which means that it's always implemented by sorting, whereas with GROUP BY we can consider both implementations. Sometime we need to rewrite all that; but it's hard to see how to do it without breaking DISTINCT ON. The latter seems like it *needs* to be intertwined with sorting ... >> DISTINCT forces a sort, so there wouldn't be any advantage. > I contend that this is false. Example: Hash_aggregate from 25 million rows > to 10000 unique rows, then sort the result. You need to go back and think again about what it means to have a DISTINCT aggregate (which is unrelated to SELECT DISTINCT, at least from an implementation standpoint) inside a GROUP BY. We could conceivably do it without any sorting if there were a separate hash aggregation table set up for each GROUP BY group, but the odds of running out of memory are much too great for my taste. Estimating the size of the hash tables would depend on guessing how many distinct values of one variable are associated with each value of another variable, and it's hard enough to know even how many distinct values there are overall let alone what the correlation is. I think the only safe way would be to assume that each per-group hash table would reach maximum size, which would probably end up with the planner never choosing the approach at all anyway. regards, tom lane