Thread: BUG #18927: Poor query plan - HashAggregate is more than twice as fast but at a higher cost

The following bug has been logged on the website:

Bug reference:      18927
Logged by:          hongjun xiao
Email address:      xiaohongjun@stu.xidian.edu.cn
PostgreSQL version: 17.4
Operating system:   Linux-Ubuntu 20.04.6 LTS, x86_64-windows10
Description:

Performance bug for the query:

EXPLAIN ANALYZE SELECT t0.c0, t2.c0, t5.c0, t4.c0
FROM ONLY t0, t4 CROSS JOIN ONLY t5 JOIN ONLY t2 ON
lower_inc(((t2.c0)*(t5.c0))) GROUP BY t0.c0, t2.c0, t5.c0, t4.c0;
QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------------------------
 Group  (cost=332.55..13010.57 rows=135000 width=53) (actual
time=31.799..917.395 rows=184320 loops=1)
   Group Key: t0.c0, t2.c0, t5.c0, t4.c0
   ->  Incremental Sort  (cost=332.55..11660.57 rows=135000 width=53)
(actual time=31.793..841.772 rows=288000 loops=1)
         Sort Key: t0.c0, t2.c0, t5.c0, t4.c0
         Presorted Key: t0.c0
         Full-sort Groups: 30  Sort Method: quicksort  Average Memory: 29kB
Peak Memory: 29kB
         Pre-sorted Groups: 30  Sort Method: quicksort  Average Memory:
1051kB  Peak Memory: 1054kB
         ->  Nested Loop  (cost=0.14..1780.87 rows=135000 width=53) (actual
time=0.038..37.052 rows=288000 loops=1)
               ->  Index Only Scan using i0 on t0  (cost=0.14..12.59 rows=30
width=13) (actual time=0.011..0.233 rows=30 loops=1)
                     Heap Fetches: 30
               ->  Materialize  (cost=0.00..92.03 rows=4500 width=40)
(actual time=0.001..0.444 rows=9600 loops=30)
                     ->  Nested Loop  (cost=0.00..69.53 rows=4500 width=40)
(actual time=0.023..0.933 rows=9600 loops=1)
                           ->  Nested Loop  (cost=0.00..11.97 rows=180
width=27) (actual time=0.019..0.165 rows=384 loops=1)
                                 Join Filter: lower_inc((t2.c0 * t5.c0))
                                 Rows Removed by Join Filter: 156
                                 ->  Seq Scan on t5  (cost=0.00..1.27
rows=27 width=13) (actual time=0.005..0.011 rows=27 loops=1)
                                 ->  Materialize  (cost=0.00..1.30 rows=20
width=14) (actual time=0.000..0.001 rows=20 loops=27)
                                       ->  Seq Scan on t2  (cost=0.00..1.20
rows=20 width=14) (actual time=0.003..0.004 rows=20 loops=1)
                           ->  Materialize  (cost=0.00..1.38 rows=25
width=13) (actual time=0.000..0.001 rows=25 loops=384)
                                 ->  Seq Scan on t4  (cost=0.00..1.25
rows=25 width=13) (actual time=0.003..0.004 rows=25 loops=1)
 Planning Time: 0.090 ms
 Execution Time: 923.298 ms
(22 rows)
database0=# SET enable_incremental_sort = off;
SET
database0=# SET enable_sort = off;
SET
database0=# SET enable_material = off;
SET
database0=# SET enable_indexonlyscan = off;
SET
database0=# EXPLAIN ANALYZE SELECT t0.c0, t2.c0, t5.c0, t4.c0 FROM ONLY t0,
t4 CROSS JOIN ONLY t5 JOIN ONLY t2 ON lower_inc(((t2.c0)*(t5.c0))) GROUP BY
t0.c0, t2.c0, t5.c0, t4.c0;
                                                       QUERY PLAN

------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=22101.58..26088.29 rows=135000 width=53) (actual
time=258.198..332.697 rows=184320 loops=1)
   Group Key: t0.c0, t2.c0, t5.c0, t4.c0
   Planned Partitions: 4  Batches: 5  Memory Usage: 8241kB  Disk Usage:
15896kB
   ->  Nested Loop  (cost=0.00..7504.70 rows=135000 width=53) (actual
time=0.056..81.589 rows=288000 loops=1)
         ->  Nested Loop  (cost=0.00..304.70 rows=4500 width=40) (actual
time=0.048..4.676 rows=9600 loops=1)
               ->  Nested Loop  (cost=0.00..34.70 rows=180 width=27) (actual
time=0.040..0.563 rows=384 loops=1)
                     Join Filter: lower_inc((t2.c0 * t5.c0))
                     Rows Removed by Join Filter: 156
                     ->  Seq Scan on t2  (cost=0.00..1.20 rows=20 width=14)
(actual time=0.020..0.025 rows=20 loops=1)
                     ->  Seq Scan on t5  (cost=0.00..1.27 rows=27 width=13)
(actual time=0.002..0.004 rows=27 loops=20)
               ->  Seq Scan on t4  (cost=0.00..1.25 rows=25 width=13)
(actual time=0.002..0.004 rows=25 loops=384)
         ->  Seq Scan on t0  (cost=0.00..1.30 rows=30 width=13) (actual
time=0.001..0.003 rows=30 loops=9600)
 Planning Time: 0.145 ms
 Execution Time: 338.528 ms
(14 rows)
How to repeat:
CREATE UNLOGGED TABLE IF NOT EXISTS t0(c0 int4range , CHECK((t0.c0) BETWEEN
SYMMETRIC (((t0.c0)-(((((t0.c0)*(t0.c0)))*(t0.c0))))) AND (t0.c0)), PRIMARY
KEY(c0), UNIQUE(c0));
CREATE TEMP TABLE t1(LIKE t0);
CREATE UNLOGGED TABLE IF NOT EXISTS t2(LIKE t1);
CREATE TEMP TABLE IF NOT EXISTS t3(LIKE t0);
CREATE TABLE t4(LIKE t0 EXCLUDING ALL);
CREATE TABLE t5(LIKE t3);
INSERT INTO t2(c0) VALUES('[-979373405,-827042418)'::int4range);
INSERT INTO t0(c0) OVERRIDING SYSTEM VALUE
VALUES('[123385260,774901140]'::int4range);
INSERT INTO t3(c0) VALUES('[-1002793727,580194427]'::int4range);
INSERT INTO t3(c0) OVERRIDING SYSTEM VALUE

VALUES((('[-2028981828,1112447372)'::int4range)*((((((('[1240709335,1860587990)'::int4range)-('(-924711386,1061791334)'::int4range)))*((('[-1495736773,404837981]'::int4range)+('[-1193994893,-231051573)'::int4range)))))-((((('(-1321951921,1203962576]'::int4range)*('(-165643299,2111176310)'::int4range)))-((('[-998035696,205291696]'::int4range)-('[-1492336635,-850603391]'::int4range)))))))));
INSERT INTO t2(c0)

VALUES((((((((('[-1198005819,-615969478]'::int4range)-('[-1798405117,-346794772)'::int4range)))+((('(-81905659,2051769723]'::int4range)-('[-1072898895,-623321637)'::int4range)))))*((('[-590166199,1256715218]'::int4range)*((('[-1812121347,-615969478]'::int4range)-('(759167924,1006926826)'::int4range)))))))+('(1182476365,1465081536]'::int4range)));
INSERT INTO t2(c0) VALUES('[-924851464,-348125579)'::int4range);
INSERT INTO t1(c0) VALUES('[-925625057,1129216791)'::int4range);
INSERT INTO t0(c0) OVERRIDING SYSTEM VALUE
VALUES('[-1045798835,-972466679)'::int4range);
INSERT INTO t5(c0)

VALUES((((((((('[-1531377520,-1058263586)'::int4range)-('(-1613890961,1309069512]'::int4range)))*((('[-1723986894,1902205635]'::int4range)+('[204409272,579032376)'::int4range)))))*((((('(-343592361,951367086]'::int4range)-('(-1233562223,-752932578)'::int4range)))-((('[-1522072191,-452866221]'::int4range)-('[-1652353003,-1226925660)'::int4range)))))))*('(-2091404965,-878781951]'::int4range)));
INSERT INTO t5(c0) VALUES('(-1372068296,-1190231996)'::int4range),
('[-623321637,221720389)'::int4range),
('[863102163,1896880107]'::int4range);
INSERT INTO t2(c0) VALUES('[-1396910396,1853698881)'::int4range);
INSERT INTO t0(c0) VALUES('(-720771717,78172127]'::int4range);
INSERT INTO t1(c0) VALUES('(-1992528256,-30979083]'::int4range);
ANALYZE t1;
INSERT INTO t0(c0) VALUES('[-1784599130,1272062840]'::int4range);
INSERT INTO t4(c0) VALUES('(-1807176872,2059369396]'::int4range);
INSERT INTO t2(c0) OVERRIDING USER VALUE
VALUES('[-1556075147,1784618240)'::int4range);
INSERT INTO t5(c0) VALUES('(134428305,1207889743)'::int4range);
ANALYZE;
INSERT INTO t1(c0) VALUES('(661987836,1873083731]'::int4range);
INSERT INTO t5(c0) VALUES('[-2097591017,1891687801)'::int4range) ON CONFLICT
DO NOTHING;
INSERT INTO t5(c0) VALUES('[-1898655715,-1163084656]'::int4range);
INSERT INTO t0(c0) VALUES('(107127094,1407743052)'::int4range) ON CONFLICT
(c0) DO NOTHING;
INSERT INTO t3(c0) VALUES('[-1121192492,1541080463]'::int4range) ON CONFLICT
DO NOTHING;
INSERT INTO t4(c0) VALUES('(-1643163364,-1134850314]'::int4range);
INSERT INTO t5(c0) VALUES('(-1898655715,-726899860)'::int4range);
INSERT INTO t0(c0) VALUES('(-1514894094,-615732102]'::int4range),
('(-1193994893,557607653)'::int4range),
('(-821947811,632313045]'::int4range);
INSERT INTO t1(c0) OVERRIDING USER VALUE
VALUES('[-1101570680,70326775]'::int4range),
('[-595405737,2103887785)'::int4range),
('[107127094,770430384]'::int4range);
SET parallel_leader_participation=DEFAULT;
INSERT INTO t4(c0) VALUES('[-372318237,1689678853)'::int4range);
INSERT INTO t3(c0)

VALUES((((((((('(-697985112,552823198]'::int4range)*('[-1643793224,-806328482]'::int4range)))*((('[-1854824306,774901140)'::int4range)+('[187401469,247670428)'::int4range)))))-((((('[-614601081,1626733011)'::int4range)*('[362069983,1630419640)'::int4range)))*((('[487070321,1436591472]'::int4range)*('[-821086664,822144273]'::int4range)))))))*('(-1759251092,-1628915594)'::int4range))),
('[-785710687,280594044]'::int4range),
('(-1659339394,1541080463]'::int4range);
INSERT INTO t1(c0) VALUES('[123385260,707223789]'::int4range);
INSERT INTO t2(c0) VALUES('(850897751,1360698350]'::int4range);
RESET ALL;
INSERT INTO t3(c0) VALUES('[-619129693,249555491)'::int4range);
DROP INDEX IF EXISTS i0;
INSERT INTO t1(c0) VALUES('[-519535341,245745133]'::int4range),
('(-979373405,963818596]'::int4range),
('[713899170,2097491924]'::int4range);
INSERT INTO t1(c0) VALUES('[-1424811150,-671959907]'::int4range);
INSERT INTO t0(c0) VALUES('(-66545269,86346114]'::int4range);
INSERT INTO t4(c0) VALUES('(-1351652516,1982739614]'::int4range);
TRUNCATE TABLE t0, t4, t2, t1 CONTINUE IDENTITY;
INSERT INTO t1(c0) VALUES('(-1937582377,2051422598)'::int4range);
INSERT INTO t0(c0) VALUES('[-2012561452,1116470720]'::int4range),
('(-1677013954,1688371627)'::int4range),
('[-568387043,2049285808)'::int4range),
('[86346114,1919126437]'::int4range), ('[556245736,2064189407)'::int4range);
INSERT INTO t2(c0) VALUES('(369749914,645387273)'::int4range);
INSERT INTO t2(c0) VALUES('[-1068162340,1856800078]'::int4range);
INSERT INTO t4(c0) VALUES('(-2077464265,1897426910)'::int4range);
DELETE FROM t3;
INSERT INTO t5(c0)

VALUES((('[665730460,1615260331)'::int4range)-((((((('(-964929443,186588294)'::int4range)-('(-1458933035,-575818858)'::int4range)))-('(488940631,816527748]'::int4range)))+((((('[-1881273824,86298748)'::int4range)+('(-1984524480,880011059)'::int4range)))-((('[396409449,1878786776]'::int4range)-('[-1840645793,632313045)'::int4range)))))))))
ON CONFLICT  DO NOTHING;
INSERT INTO t1(c0) VALUES('[361021601,557607653)'::int4range),
('[632313045,1951765410)'::int4range), ('[582787946,646266345)'::int4range);
INSERT INTO t3(c0) VALUES('[-2104185720,-1271363316]'::int4range);
INSERT INTO t1(c0) VALUES('[-1412303099,-731555305]'::int4range);
INSERT INTO t1(c0) VALUES('[-1091546986,54621731]'::int4range);
REINDEX DATABASE database0;
INSERT INTO t0(c0) VALUES('[-619129693,970052790)'::int4range),
('(-1834439952,1434375565]'::int4range),
('[-420933639,2049285808)'::int4range);
INSERT INTO t5(c0) VALUES('(-458886413,411629419)'::int4range);
INSERT INTO t1(c0)

VALUES((((((((('(-1643163364,1541080463]'::int4range)*('(-1055255073,1498846473)'::int4range)))*((('(-999097411,2049285808)'::int4range)*('(-1784599130,-433109522]'::int4range)))))*((((('[900870407,1668139740]'::int4range)*('(750010521,1568389630]'::int4range)))-((('[-210293842,1309069512]'::int4range)-('(-2077464265,557607653]'::int4range)))))))-('(614898613,829635022)'::int4range)));
INSERT INTO t3(c0) VALUES('(-1449492045,-772967942]'::int4range);
INSERT INTO t4(c0) VALUES('[639775978,1628616541]'::int4range),
('(-1495671780,768040824]'::int4range),
('[638831285,1887073408]'::int4range);
INSERT INTO t2(c0) VALUES('[-773264541,-87565823]'::int4range);
INSERT INTO t1(c0) VALUES('[-1473851785,1453046313]'::int4range);
INSERT INTO t1(c0) VALUES('(-1784599130,1024366887)'::int4range);
INSERT INTO t5(c0) VALUES('(665659522,2064005181]'::int4range);
SET plan_cache_mode=auto;
INSERT INTO t2(c0) VALUES('[-1536834341,1853698881)'::int4range);
INSERT INTO t0(c0) VALUES('(-2041334087,1103447429]'::int4range);
INSERT INTO t5(c0) VALUES('(-1306422444,-240583180)'::int4range);
INSERT INTO t3(c0) VALUES('(-2034024709,-304487723)'::int4range);
INSERT INTO t4(c0) VALUES('(-1697462059,1539856942)'::int4range);
INSERT INTO t3(c0) VALUES('[-703384608,357515583)'::int4range);
INSERT INTO t5(c0) VALUES('(-1040379979,-705381177]'::int4range) ON CONFLICT
DO NOTHING;
DISCARD PLANS;
INSERT INTO t0(c0) VALUES('[-764771491,1563132987)'::int4range);
INSERT INTO t3(c0) VALUES('(-2077464265,-1394006485)'::int4range);
INSERT INTO t2(c0) VALUES('(-1726844647,-585553883]'::int4range);
INSERT INTO t4(c0) VALUES('[582787946,1337878467)'::int4range);
INSERT INTO t2(c0) VALUES('(-1879494410,-590018628)'::int4range);
INSERT INTO t0(c0) VALUES('[1253641993,2047548555]'::int4range),
('(639775978,884423909)'::int4range),
('(-1643163364,107127094]'::int4range);
INSERT INTO t4(c0) VALUES('[-856143783,797304642]'::int4range);
INSERT INTO t2(c0) VALUES('(-511202655,863077415)'::int4range);
CLUSTER ;
INSERT INTO t1(c0) VALUES('[744731477,1212380259]'::int4range);
INSERT INTO t4(c0) VALUES('[-1784599130,-114292301]'::int4range);
INSERT INTO t1(c0) VALUES('[-2136257381,-2119440807)'::int4range);
INSERT INTO t3(c0) VALUES('(-2104185720,17747239)'::int4range);
INSERT INTO t5(c0) VALUES('(-472773019,1309069512]'::int4range);
INSERT INTO t2(c0) VALUES('(1207611868,1296626399)'::int4range);
DELETE FROM ONLY t1;
INSERT INTO t1(c0) VALUES('(-1376276092,1961419975]'::int4range);
INSERT INTO t2(c0) VALUES('[-1338754247,290933880]'::int4range);
INSERT INTO t0(c0) VALUES('(-1691603198,1853323910]'::int4range) ON CONFLICT
(c0) DO NOTHING;
INSERT INTO t5(c0) VALUES('(-1193994893,-209067131)'::int4range);
INSERT INTO t2(c0)

VALUES((((((((('[-1198045670,774901140]'::int4range)-('(-2012561452,957011524)'::int4range)))+((('(1227013922,1669640036)'::int4range)*('(-472773019,907300938]'::int4range)))))+(range_merge((('[-368388073,1984434596]'::int4range)*('(-1421536497,-764540393]'::int4range)),

(('(-1644266655,1541080463)'::int4range)+('[684917787,835580650)'::int4range))))))*('[-1132510054,1639964186]'::int4range)));
DROP INDEX IF EXISTS i0;
INSERT INTO t0(c0) VALUES('(-2055173712,-1169268409]'::int4range);
INSERT INTO t5(c0)

VALUES((((((('[711941166,2123794954]'::int4range)+((('(1582919223,1824352376)'::int4range)-('[-806839381,-752932578)'::int4range)))))-((((('(183257207,1985073341]'::int4range)*('[-1907657084,2049285808)'::int4range)))*((('(-998035696,1486609056]'::int4range)-('[285466377,1668139740)'::int4range)))))))+((((((('[-281514593,1853698881]'::int4range)*('[-1160769987,1718750531]'::int4range)))+((('(-1010266315,-197621957]'::int4range)*('[-595214810,8276637)'::int4range)))))-((((('(-247434796,741426028)'::int4range)+('(-136313886,1996301843)'::int4range)))*((('[1100742967,2125230689)'::int4range)*('(-1475209993,-197621957)'::int4range)))))))));
INSERT INTO t2(c0) VALUES('[-79458420,1439310239)'::int4range);
INSERT INTO t1(c0) VALUES('(-1784599130,184747102)'::int4range);
INSERT INTO t4(c0)

VALUES((('[187401469,1668139740]'::int4range)+(((CAST((('[-206557805,1629354392]'::int4range)-('(-273335182,1853698881)'::int4range))
AS

int4range))+((((('[-1453247481,-380266222]'::int4range)*('(-1020173192,740996274)'::int4range)))+((('[-827042418,963818596)'::int4range)+('(-971522069,1671157103]'::int4range)))))))));
UPDATE t4 SET c0='(-173499208,1298978901]'::int4range WHERE
((t4.c0)>=(((t4.c0)-(((t4.c0)+(((t4.c0)+(t4.c0))))))));
DROP INDEX IF EXISTS i0 CASCADE;
INSERT INTO t0(c0) VALUES('(-997548214,1044534606)'::int4range),
('[-1747550548,-898779535]'::int4range),
('(-1643793224,774901140)'::int4range);
SET SESSION constraint_exclusion=DEFAULT;
INSERT INTO t4(c0) VALUES('(-1126938884,1064954800)'::int4range);
INSERT INTO t4(c0) VALUES('[1123243203,1668139740]'::int4range),
('(-1794973685,1688371627)'::int4range),

((((((((('[-1698542393,-1555720788)'::int4range)-('[-1465882319,1952357874)'::int4range)))-((('(1277202660,1729138223]'::int4range)+('(1260631022,1853305921]'::int4range)))))-((((('(-1290050396,-710286768)'::int4range)*('[150351757,359715703]'::int4range)))+((('[-2059178959,563298829)'::int4range)-('(1354793611,2104747876)'::int4range)))))))*((((((('(-2104185720,-118939925]'::int4range)+('(-1636958256,1582204672)'::int4range)))+((('(815864636,1156998683)'::int4range)-('[111611630,1399702882]'::int4range)))))*((((('(-1811648533,951971787)'::int4range)+('(-1928920059,-1398109791]'::int4range)))+((('(-538621233,461837883]'::int4range)-('[-1008132656,1267307427]'::int4range)))))))));
INSERT INTO t5(c0) VALUES('(-2104185720,-1501159481)'::int4range);
INSERT INTO t0(c0) VALUES('(-1567382249,-610175188]'::int4range);
INSERT INTO t5(c0) VALUES('[-1396536328,895122086]'::int4range);
INSERT INTO t3(c0) VALUES('(-1773267451,-1495736773]'::int4range);
INSERT INTO t4(c0) VALUES('(1024366887,1151519151)'::int4range);
INSERT INTO t1(c0) VALUES('(-1723986894,1541080463)'::int4range) ON CONFLICT
DO NOTHING;
INSERT INTO t1(c0) VALUES('[664478314,1685863778]'::int4range);
INSERT INTO t4(c0) VALUES('(-1611643012,1071839807]'::int4range);
INSERT INTO t1(c0) VALUES('[1753630957,1946129876)'::int4range);
INSERT INTO t1(c0) VALUES('[556245736,1627344901)'::int4range);
INSERT INTO t2(c0) VALUES('(-145743442,1485812196]'::int4range);
INSERT INTO t3(c0) OVERRIDING USER VALUE
VALUES('(-710286768,1985548767]'::int4range);
INSERT INTO t2(c0) VALUES('[-1502467946,867735347]'::int4range);
INSERT INTO t2(c0) VALUES('[-1996401094,481217742)'::int4range);
INSERT INTO t4(c0) VALUES('(-1477950566,582787946]'::int4range);
INSERT INTO t3(c0)

VALUES((((((((('[-165643299,937263328]'::int4range)+('(-1117798284,211530234]'::int4range)))-((('(1335790936,1812582225)'::int4range)-('(398971941,963818596]'::int4range)))))-((((('(1361731962,1524571911)'::int4range)*('(235548287,976024913)'::int4range)))*((('[-830951167,187401469)'::int4range)*('[-1244649985,-1018174588]'::int4range)))))))*('[-1193994893,223134987)'::int4range)));
INSERT INTO t4(c0)

VALUES((((('[913066231,1138745091]'::int4range)+((((('[-1268674209,1649364745)'::int4range)*('[-452866221,815864636)'::int4range)))*((('(-386727812,553227045)'::int4range)-('(-1677013954,1939472994)'::int4range)))))))*((((((('[-1536834341,-710286768)'::int4range)*('(1156998683,1541080463]'::int4range)))*((('[-619702657,-326236408)'::int4range)*('(-1677013954,150351757)'::int4range)))))-((((('(-1448708895,1577211167]'::int4range)*('(-735328858,1076390912]'::int4range)))-((('[-1550402720,-947831128)'::int4range)+('(-1628915594,-814962259]'::int4range)))))))));
INSERT INTO t4(c0) VALUES('[123385260,2104747876]'::int4range);
INSERT INTO t0(c0) VALUES('[-1210942021,1953269847]'::int4range);
INSERT INTO t2(c0) OVERRIDING USER VALUE
VALUES('(-2119312129,-185292395]'::int4range);
INSERT INTO t3(c0) VALUES('(-1137853798,1535597040]'::int4range),
('[-1380685322,-998035696)'::int4range),
('[-582495079,963818596)'::int4range);
INSERT INTO t1(c0) VALUES('(-1686769172,-675118787)'::int4range);
INSERT INTO t0(c0) VALUES('[-850824745,1675189374)'::int4range);
INSERT INTO t4(c0) VALUES('(633441381,1821723840)'::int4range);
INSERT INTO t0(c0) VALUES('[-794882045,970062163)'::int4range),
('[1543390181,1786414732)'::int4range),
('[-1967163718,842681849)'::int4range);
SET SESSION wal_compression=0;
CREATE UNIQUE INDEX i0 ON t0(c0 ASC);
INSERT INTO t1(c0) VALUES('(-1982945675,-1823817600]'::int4range);
INSERT INTO t0(c0)

VALUES((((((((('(-812696819,114525750)'::int4range)-('(157132384,361021601]'::int4range)))*((('(-1244649985,-708888892]'::int4range)-('[-2077464265,2003792374)'::int4range)))))+(((range_merge('(-1177516669,-883200622]'::int4range,

'(-1076046908,779833015)'::int4range))-((('(604229906,1682555894)'::int4range)+('(-165643299,1541080463)'::int4range)))))))*('(-156469532,258122506]'::int4range)));
INSERT INTO t5(c0) VALUES('[-1171886130,357515583)'::int4range);
INSERT INTO t0(c0) VALUES('[-785640820,630434055)'::int4range);
INSERT INTO t0(c0) VALUES('[-939568875,605376690]'::int4range) ON CONFLICT
(c0) DO NOTHING;
INSERT INTO t4(c0) VALUES('(-2115902004,2074593391]'::int4range);
DELETE FROM ONLY t3;
INSERT INTO t1(c0) VALUES('[-1570280729,-591402740]'::int4range);
INSERT INTO t4(c0) VALUES('(-1898655715,-827042418]'::int4range);
CLUSTER ;
INSERT INTO t2(c0) VALUES('(-1784714629,584388455]'::int4range);
INSERT INTO t5(c0) VALUES('(82120606,556245736)'::int4range);
INSERT INTO t5(c0) VALUES('[-1794513037,911333688)'::int4range);
INSERT INTO t5(c0) OVERRIDING SYSTEM VALUE
VALUES('[634312603,1891090785]'::int4range);
INSERT INTO t1(c0) OVERRIDING USER VALUE
VALUES('[-1163084656,1668139740]'::int4range);
UNLISTEN test;
INSERT INTO t4(c0) VALUES('[-1569075228,1156998683]'::int4range);
DISCARD ALL;
INSERT INTO t2(c0) VALUES('[-1591470897,170655769]'::int4range);
INSERT INTO t2(c0) VALUES('(-1448708895,1970595653)'::int4range) ON CONFLICT
DO NOTHING;
INSERT INTO t4(c0) VALUES('(-312561184,489604911)'::int4range);
INSERT INTO t5(c0) VALUES('[-1193057409,1156998683)'::int4range);
INSERT INTO t5(c0) VALUES('[-351265378,2129590335)'::int4range);
VACUUM (FULL 1) t2;
INSERT INTO t5(c0) VALUES('(-1137853798,-466895609]'::int4range);
INSERT INTO t0(c0)

VALUES(((range_merge((((('(-666217871,-1)'::int4range)+('[-1803488029,519906456)'::int4range)))*((('(-1659339394,-1628915594]'::int4range)-('(-827042418,1083854998)'::int4range)))),

(((('[357515583,942582759)'::int4range)-('[-1656230032,1993643269]'::int4range)))*((('[-623321637,-619129693]'::int4range)*('(-1106300321,-472773019)'::int4range))))))+((((((('(-1396910396,747841599]'::int4range)-('[181851906,760705422]'::int4range)))*((('[-1547889764,-1228109412]'::int4range)-('[-1816624680,-1639803065]'::int4range)))))*((((('[-1193994893,-752932578]'::int4range)-('[-1693303383,107127094)'::int4range)))-((('[-1738172067,1435419618)'::int4range)-('[-1759413690,-563571559]'::int4range)))))))));
INSERT INTO t5(c0) VALUES('[1008238289,1309069512)'::int4range);
DROP INDEX IF EXISTS i2;
COMMENT ON TABLE t2 IS '-752932578';
INSERT INTO t2(c0) VALUES('[-1811648533,1053153100]'::int4range);
INSERT INTO t5(c0) VALUES('(-562367693,1200950607]'::int4range);
INSERT INTO t2(c0) VALUES('[-1784599130,-1144745102)'::int4range);
INSERT INTO t4(c0) VALUES('(-192232177,582787946)'::int4range);
INSERT INTO t0(c0)

VALUES((((((((('(-165643299,308834136]'::int4range)-('(-1137853798,-843363855)'::int4range)))-('[-1989648764,0]'::int4range)))*((((('(-1470253257,-477977020)'::int4range)+('[-1424811150,782052214]'::int4range)))-((('[-668690854,280441078]'::int4range)-('(-720771717,1309069512)'::int4range)))))))+('(-41805165,556245736]'::int4range)));
INSERT INTO t0(c0) VALUES('[-1759413690,-1152716765)'::int4range);
RESET ROLE;
INSERT INTO t4(c0) OVERRIDING SYSTEM VALUE
VALUES('[-171431126,999437311)'::int4range);
COMMIT;
ANALYZE t0;
ANALYZE t2;
ANALYZE t4;
ANALYZE t5;


On Wed, May 14, 2025 at 5:55 PM 萧鸿骏 <23031212454@stu.xidian.edu.cn> wrote:
>
> The following method can also trigger a better plan:
>
> database0=# set hash_mem_multiplier = 4;
> SET
> database0=# EXPLAIN ANALYZE SELECT t0.c0, t2.c0, t5.c0, t4.c0 FROM ONLY t0, t4 CROSS JOIN ONLY t5 JOIN ONLY t2 ON
> lower_inc(((t2.c0)*(t5.c0))) GROUP BY t0.c0, t2.c0, t5.c0, t4.c0;
>                                                        QUERY PLAN
>
------------------------------------------------------------------------------------------------------------------------
>  HashAggregate  (cost=8854.70..10204.70 rows=135000 width=53) (actual time=196.753..284.803 rows=184320 loops=1)
>    Group Key: t0.c0, t2.c0, t5.c0, t4.c0
>    Batches: 5  Memory Usage: 16433kB  Disk Usage: 11328kB
>    ->  Nested Loop  (cost=0.00..7504.70 rows=135000 width=53) (actual time=0.046..52.982 rows=288000 loops=1)
>          ->  Nested Loop  (cost=0.00..304.70 rows=4500 width=40) (actual time=0.039..3.358 rows=9600 loops=1)
>                ->  Nested Loop  (cost=0.00..34.70 rows=180 width=27) (actual time=0.032..0.444 rows=384 loops=1)
>                      Join Filter: lower_inc((t2.c0 * t5.c0))
>                      Rows Removed by Join Filter: 156
>                      ->  Seq Scan on t2  (cost=0.00..1.20 rows=20 width=14) (actual time=0.017..0.022 rows=20
loops=1)
>                      ->  Seq Scan on t5  (cost=0.00..1.27 rows=27 width=13) (actual time=0.002..0.004 rows=27
loops=20)
>                ->  Seq Scan on t4  (cost=0.00..1.25 rows=25 width=13) (actual time=0.001..0.004 rows=25 loops=384)
>          ->  Seq Scan on t0  (cost=0.00..1.30 rows=30 width=13) (actual time=0.001..0.002 rows=30 loops=9600)
>  Planning Time: 0.109 ms
>  Execution Time: 291.546 ms
> (14 rows)

By setting a higher value for hash_mem_multiplier, you're essentially
informing the planner that more memory is available for hash-based
operations. During cost estimation, the planner compares the costs of
HashAggregate versus GroupAggregate. With a higher memory limit for
building the hash table, the planner determines that HashAggregate is
more cost-effective and therefore selects it over GroupAggregate.
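
For reference, a minimal sketch of the knobs involved (the values are
only illustrative): the memory budget the planner assumes for a hash
table is roughly work_mem * hash_mem_multiplier, so either setting
shifts the HashAggregate vs. GroupAggregate comparison.

SHOW work_mem;
SHOW hash_mem_multiplier;      -- 2.0 by default on recent releases
SET hash_mem_multiplier = 4;   -- planner now assumes ~4x work_mem for hash tables
-- Raising work_mem has a similar effect, but it also applies to sorts:
-- SET work_mem = '16MB';
EXPLAIN ANALYZE SELECT t0.c0, t2.c0, t5.c0, t4.c0
FROM ONLY t0, t4 CROSS JOIN ONLY t5 JOIN ONLY t2 ON lower_inc(((t2.c0)*(t5.c0)))
GROUP BY t0.c0, t2.c0, t5.c0, t4.c0;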

However, during further investigation I hadn't increased
hash_mem_multiplier. Instead, I forced the use of hash aggregation by
disabling the alternatives with SET enable_incremental_sort = off; and
SET enable_sort = off;. Even without adjusting hash_mem_multiplier,
the HashAggregate was significantly faster than the GroupAggregate in
actual execution. Does that mean we are overestimating the cost of the
hash aggregate?  Not sure, it might need further digging, but one
thing to note is that the planner costs the hash aggregate at roughly
2x the group aggregate [1][2], whereas the actual execution shows that
the hash aggregate is about 4x faster than the group aggregate.
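
For anyone reproducing just the cost comparison, EXPLAIN without
ANALYZE is enough; a sketch using the stock enable_* GUCs:

SET enable_hashagg = off;            -- rule out hash aggregation (the default
                                     -- plan here is the sort-based Group anyway)
EXPLAIN SELECT t0.c0, t2.c0, t5.c0, t4.c0
FROM ONLY t0, t4 CROSS JOIN ONLY t5 JOIN ONLY t2 ON lower_inc(((t2.c0)*(t5.c0)))
GROUP BY t0.c0, t2.c0, t5.c0, t4.c0;
RESET enable_hashagg;

SET enable_sort = off;
SET enable_incremental_sort = off;   -- leave HashAggregate as the grouping choice
EXPLAIN SELECT t0.c0, t2.c0, t5.c0, t4.c0
FROM ONLY t0, t4 CROSS JOIN ONLY t5 JOIN ONLY t2 ON lower_inc(((t2.c0)*(t5.c0)))
GROUP BY t0.c0, t2.c0, t5.c0, t4.c0;
RESET enable_sort;
RESET enable_incremental_sort;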

Another point to note is that as soon as we set hash_mem_multiplier
to 4, the HashAggregate's planner cost drops from ~20k to ~4k [3]. So
I suspect this could be related to overestimating the disk access:
our default random page cost is quite high and may not be well suited
to faster disks, and as soon as I changed random_page_cost to 1 it
chose HashAggregate by default.

Summary: IMHO, this doesn't look like a bug; instead, we just need to
adjust the random page cost to a value better suited to the system
under test.
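
For example, a sketch of that adjustment (1.1 is just a common
starting point for SSD or well-cached storage, not a value measured
for this machine):

SET random_page_cost = 1.1;   -- session-level, convenient for experimenting
EXPLAIN ANALYZE SELECT t0.c0, t2.c0, t5.c0, t4.c0
FROM ONLY t0, t4 CROSS JOIN ONLY t5 JOIN ONLY t2 ON lower_inc(((t2.c0)*(t5.c0)))
GROUP BY t0.c0, t2.c0, t5.c0, t4.c0;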

[1]
EXPLAIN ANALYZE with the group (sort-based) aggregate plan:
 Group  (cost=332.55..13010.57 rows=135000 width=53) (actual
time=104.887..2426.446 rows=184320.00 loops=1)
   Group Key: t0.c0, t2.c0, t5.c0, t4.c0
   Buffers: shared hit=8
   ->  Incremental Sort  (cost=332.55..11660.57 rows=135000 width=53)
(actual time=104.884..2222.501 rows=288000.00 loops=1)
         Sort Key: t0.c0, t2.c0, t5.c0, t4.c0
         Presorted Key: t0.c0
         Full-sort Groups: 30  Sort Method: quicksort  Average Memory:
29kB  Peak Memory: 29kB
         Pre-sorted Groups: 30  Sort Method: quicksort  Average
Memory: 1051kB  Peak Memory: 1054kB
         Buffers: shared hit=8
         ->  Nested Loop  (cost=0.14..1780.87 rows=135000 width=53)
(actual time=0.223..77.066 rows=288000.00 loops=1)
               Buffers: shared hit=5
               ->  Index Only Scan using i0 on t0  (cost=0.14..12.59
rows=30 width=13) (actual time=0.075..0.217 rows=30.00 loops=1)
                     Heap Fetches: 30
                     Index Searches: 1
                     Buffers: shared hit=2
               ->  Materialize  (cost=0.00..92.03 rows=4500 width=40)
(actual time=0.005..0.819 rows=9600.00 loops=30)
                     Storage: Memory  Maximum Storage: 873kB
                     Buffers: shared hit=3
                     ->  Nested Loop  (cost=0.00..69.53 rows=4500
width=40) (actual time=0.132..4.110 rows=9600.00 loops=1)
                           Buffers: shared hit=3
                           ->  Nested Loop  (cost=0.00..11.97 rows=180
width=27) (actual time=0.108..0.812 rows=384.00 loops=1)
                                 Join Filter: lower_inc((t2.c0 * t5.c0))
                                 Rows Removed by Join Filter: 156
                                 Buffers: shared hit=2
                                 ->  Seq Scan on t5  (cost=0.00..1.27
rows=27 width=13) (actual time=0.026..0.034 rows=27.00 loops=1)
                                       Buffers: shared hit=1
                                 ->  Materialize  (cost=0.00..1.30
rows=20 width=14) (actual time=0.001..0.003 rows=20.00 loops=27)
                                       Storage: Memory  Maximum Storage: 17kB
                                       Buffers: shared hit=1
                                       ->  Seq Scan on t2
(cost=0.00..1.20 rows=20 width=14) (actual time=0.009..0.013
rows=20.00 loops=1)
                                             Buffers: shared hit=1
                           ->  Materialize  (cost=0.00..1.38 rows=25
width=13) (actual time=0.000..0.002 rows=25.00 loops=384)
                                 Storage: Memory  Maximum Storage: 18kB
                                 Buffers: shared hit=1
                                 ->  Seq Scan on t4  (cost=0.00..1.25
rows=25 width=13) (actual time=0.019..0.023 rows=25.00 loops=1)
                                       Buffers: shared hit=1
 Planning:
   Buffers: shared hit=104
 Planning Time: 2.162 ms
 Execution Time: 2449.420 ms

[2]
EXPLAIN ANALYZE with the hash aggregate plan (SET enable_incremental_sort = off;
SET enable_sort = off;):

HashAggregate  (cost=16355.28..20342.00 rows=135000 width=53) (actual
time=485.060..690.215 rows=184320.00 loops=1)
   Group Key: t0.c0, t2.c0, t5.c0, t4.c0
   Planned Partitions: 4  Batches: 5  Memory Usage: 8249kB  Disk Usage: 15568kB
   Buffers: shared hit=4, temp read=1739 written=3234
   ->  Nested Loop  (cost=0.00..1758.41 rows=135000 width=53) (actual
time=0.191..56.042 rows=288000.00 loops=1)
         Buffers: shared hit=4
         ->  Nested Loop  (cost=0.00..69.53 rows=4500 width=40)
(actual time=0.164..2.533 rows=9600.00 loops=1)
               Buffers: shared hit=3
               ->  Nested Loop  (cost=0.00..11.97 rows=180 width=27)
(actual time=0.127..0.602 rows=384.00 loops=1)
                     Join Filter: lower_inc((t2.c0 * t5.c0))
                     Rows Removed by Join Filter: 156
                     Buffers: shared hit=2
                     ->  Seq Scan on t5  (cost=0.00..1.27 rows=27
width=13) (actual time=0.035..0.056 rows=27.00 loops=1)
                           Buffers: shared hit=1
                     ->  Materialize  (cost=0.00..1.30 rows=20
width=14) (actual time=0.001..0.003 rows=20.00 loops=27)
                           Storage: Memory  Maximum Storage: 17kB
                           Buffers: shared hit=1
                           ->  Seq Scan on t2  (cost=0.00..1.20
rows=20 width=14) (actual time=0.011..0.015 rows=20.00 loops=1)
                                 Buffers: shared hit=1
               ->  Materialize  (cost=0.00..1.38 rows=25 width=13)
(actual time=0.000..0.001 rows=25.00 loops=384)
                     Storage: Memory  Maximum Storage: 18kB
                     Buffers: shared hit=1
                     ->  Seq Scan on t4  (cost=0.00..1.25 rows=25
width=13) (actual time=0.026..0.031 rows=25.00 loops=1)
                           Buffers: shared hit=1
         ->  Materialize  (cost=0.00..1.45 rows=30 width=13) (actual
time=0.000..0.001 rows=30.00 loops=9600)
               Storage: Memory  Maximum Storage: 18kB
               Buffers: shared hit=1
               ->  Seq Scan on t0  (cost=0.00..1.30 rows=30 width=13)
(actual time=0.019..0.024 rows=30.00 loops=1)
                     Buffers: shared hit=1
 Planning:
   Buffers: shared hit=103
 Planning Time: 2.129 ms
 Execution Time: 698.681 ms

[3]
EXPLAIN ANALYZE after setting hash_mem_multiplier = 4:
 HashAggregate  (cost=3108.41..4458.41 rows=135000 width=53) (actual
time=489.975..612.891 rows=184320.00 loops=1)
   Group Key: t0.c0, t2.c0, t5.c0, t4.c0
   Batches: 5  Memory Usage: 16441kB  Disk Usage: 7504kB
   Buffers: shared hit=4, temp read=790 written=1523
   ->  Nested Loop  (cost=0.00..1758.41 rows=135000 width=53) (actual
time=0.334..55.897 rows=288000.00 loops=1)
         Buffers: shared hit=4
         ->  Nested Loop  (cost=0.00..69.53 rows=4500 width=40)
(actual time=0.302..2.675 rows=9600.00 loops=1)
               Buffers: shared hit=3
               ->  Nested Loop  (cost=0.00..11.97 rows=180 width=27)
(actual time=0.254..0.699 rows=384.00 loops=1)
                     Join Filter: lower_inc((t2.c0 * t5.c0))
                     Rows Removed by Join Filter: 156
                     Buffers: shared hit=2
                     ->  Seq Scan on t5  (cost=0.00..1.27 rows=27
width=13) (actual time=0.111..0.121 rows=27.00 loops=1)
                           Buffers: shared hit=1
                     ->  Materialize  (cost=0.00..1.30 rows=20
width=14) (actual time=0.002..0.004 rows=20.00 loops=27)
                           Storage: Memory  Maximum Storage: 17kB
                           Buffers: shared hit=1
                           ->  Seq Scan on t2  (cost=0.00..1.20
rows=20 width=14) (actual time=0.033..0.039 rows=20.00 loops=1)
                                 Buffers: shared hit=1
               ->  Materialize  (cost=0.00..1.38 rows=25 width=13)
(actual time=0.000..0.001 rows=25.00 loops=384)
                     Storage: Memory  Maximum Storage: 18kB
                     Buffers: shared hit=1
                     ->  Seq Scan on t4  (cost=0.00..1.25 rows=25
width=13) (actual time=0.035..0.053 rows=25.00 loops=1)
                           Buffers: shared hit=1
         ->  Materialize  (cost=0.00..1.45 rows=30 width=13) (actual
time=0.000..0.001 rows=30.00 loops=9600)
               Storage: Memory  Maximum Storage: 18kB
               Buffers: shared hit=1
               ->  Seq Scan on t0  (cost=0.00..1.30 rows=30 width=13)
(actual time=0.023..0.030 rows=30.00 loops=1)
                     Buffers: shared hit=1
 Planning Time: 0.763 ms
 Execution Time: 622.127 ms
(31 rows)


--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com



Thank you very much for your reply! May I assume that we need to manually adjust the cost settings based on the actual
characteristics of the system and the test machine?

I have been researching SQL performance issues in PostgreSQL recently, and many of the problems I am currently studying
are related to this. I would like to ask: if the optimizer's default cost values are not set reliably, resulting in the
selection of a poor execution plan and a significant performance difference, is this considered a direction for
optimizer performance work? From the user's perspective this may be difficult to detect, leading to significant
performance losses.

--
Regards,
hongjun xiao

On Thu, May 15, 2025 at 6:42 PM 萧鸿骏 <23031212454@stu.xidian.edu.cn> wrote:
>
> Thank you very much for your reply! May I assume that we need to manually adjust the cost settings based on the
> actual characteristics of the system and the test machine?
>
> I have been researching SQL performance issues in PostgreSQL recently, and many of the problems I am currently
> studying are related to this. I would like to ask: if the optimizer's default cost values are not set reliably,
> resulting in the selection of a poor execution plan and a significant performance difference, is this considered a
> direction for optimizer performance work? From the user's perspective this may be difficult to detect, leading to
> significant performance losses.
>

Multiple factors influence the planner parameters seq_page_cost and
random_page_cost. PostgreSQL sets these to conservative default values
that aim to strike a balance, not assuming either extreme. On one end,
you might have a system where most data is on a slow spinning disk,
while on the other, you may have a machine with large amounts of RAM
such that almost all data is cached and disk I/O is rarely needed.

These cost parameters directly influence the planner’s choice between
access paths, for example, index scan vs sequential scan: An index
scan involves random I/O, since it needs to jump around the heap to
fetch matching rows by TID. A sequential scan reads the table
linearly, which is generally faster on disk due to fewer seeks.

Now, suppose your WHERE clause filters out 50% of the rows. The
planner might estimate that an index scan would involve a high cost
due to frequent random page reads, especially since the default
random_page_cost is 4 times higher than seq_page_cost. As a result, it
may choose a sequential scan as the cheaper plan.
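
To make that concrete, a rough sketch with a made-up table (purely
illustrative; which plan actually wins depends on the data, layout,
and settings):

CREATE TEMP TABLE demo AS
  SELECT g AS id, (g % 100) AS val FROM generate_series(1, 1000000) g;
CREATE INDEX ON demo (val);
ANALYZE demo;
EXPLAIN SELECT * FROM demo WHERE val < 50;  -- ~50% of rows: a seq scan is usually cheaper
EXPLAIN SELECT * FROM demo WHERE val = 7;   -- ~1% of rows: an index-based plan becomes
                                            -- attractive, more so as random_page_cost drops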

However, if most of your data is already in RAM, there is no
meaningful difference between random and sequential page reads; both
are fast. In such a case, the planner’s assumptions (based on default
cost values) can lead to a suboptimal plan, not because of a bug, but
because it's working with inaccurate cost estimates relative to your
hardware.

So while the defaults work well for many systems, if you’re noticing
suboptimal plans, especially on machines with a lot of RAM or fast
SSDs, it's worth tuning these parameters. For systems that mostly
serve static data and have high cache hit ratios, reducing both
seq_page_cost and random_page_cost (and possibly making them equal)
may help the planner make better decisions.
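
A sketch of making such a change persistent, assuming superuser
access (the numbers are placeholders to be validated against the
actual hardware):

ALTER SYSTEM SET random_page_cost = 1.1;
ALTER SYSTEM SET seq_page_cost = 1.0;   -- 1.0 is already the default; shown for completeness
SELECT pg_reload_conf();
-- or scoped to a tablespace that lives on fast storage:
ALTER TABLESPACE pg_default SET (random_page_cost = 1.1);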

This is just my opinion, and others may think differently.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com