Thread: Understanding the output of EXPLAIN ANALYSE
I am a newbie to PostgreSQL and am a little confused by the execution plan of a query we are working on. The SQL does some joins, and the resulting plan looked like this:
"Nested Loop (cost=78.63..101.14 rows=1 width=256) (actual time=0.000..16.000 rows=2818 loops=1)"
…
"Total runtime: 16.000 ms"
So the cost was 101. We put indexes on the two tables that were being joined and re-ran EXPLAIN ANALYZE. The cost went way down, but the execution time went way up. What am I missing?
"Nested Loop (cost=0.00..15.28 rows=1 width=256) (actual time=0.000..110.000 rows=2818 loops=1)"
…
"Total runtime: 110.000 ms"
On Thu, Jul 23, 2009 at 6:14 PM, Thurber, Fred <Fred.Thurber@gdit.com> wrote:
> "Nested Loop (cost=78.63..101.14 rows=1 width=256) (actual
> time=0.000..16.000 rows=2818 loops=1)"

Well, it was expecting 1 row and got 2,818, so the costs are going to be pretty wildly wrong. But since you cut out all the details, it's pretty hard to guess what happened.

--
greg
http://mit.edu/~gsstark/resume.pdf
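A quick way to check whether an estimate like "rows=1 vs. actual rows=2818" comes from stale planner statistics is to compare what the planner thinks the table sizes are against reality. A minimal sketch, using the table names from the plan above (run against your own database):

```sql
-- The planner's notion of table size; reltuples is only refreshed
-- by ANALYZE (or VACUUM / autovacuum), so it can be badly out of date.
SELECT relname, reltuples, relpages
FROM pg_class
WHERE relname IN ('mdgriddatasetdir', 'mdgridregistrtions', 'mdgridaois');

-- Actual row count, for comparison with reltuples above.
SELECT count(*) FROM mdgriddatasetdir;
```

If reltuples is far from the real count, the cost estimates built on it will be far off too.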
On Thu, Jul 23, 2009 at 12:14 PM, Thurber, Fred <Fred.Thurber@gdit.com> wrote:
> So the cost was 101. We put indexes on the two tables that were being
> joined and re-ran EXPLAIN ANALYZE. The cost went way down, but the
> execution time went way up. What am I missing?
>
> "Nested Loop (cost=0.00..15.28 rows=1 width=256) (actual
> time=0.000..110.000 rows=2818 loops=1)"
> …
> "Total runtime: 110.000 ms"

Run ANALYZE and then rerun "explain analyze your_query".

--
Regards,
Jaime Casanova
PostgreSQL support and training
Systems consulting and development
Guayaquil - Ecuador
Cel. +59387171157
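The suggestion above amounts to something like the following (a sketch; the table names are taken from the plans in this thread, and the final query stands in for the original one, which wasn't posted):

```sql
-- Refresh planner statistics for the tables involved in the join.
ANALYZE mdgriddatasetdir;
ANALYZE mdgridregistrtions;
ANALYZE mdgridaois;

-- Then re-check the plan with fresh statistics:
-- EXPLAIN ANALYZE <your original query>;
```

With up-to-date statistics the row estimates should move much closer to the actual counts, and the planner may pick a different (and genuinely cheaper) plan.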
Here is the full output:

"Nested Loop  (cost=78.63..101.14 rows=1 width=256) (actual time=0.000..16.000 rows=2818 loops=1)"
"  ->  Nested Loop  (cost=78.63..95.12 rows=1 width=148) (actual time=0.000..0.000 rows=1 loops=1)"
"        ->  Merge Join  (cost=78.63..89.24 rows=1 width=100) (actual time=0.000..0.000 rows=1 loops=1)"
"              Merge Cond: (("outer".gridid = "inner".gridid) AND ("outer".subcenterid = "inner".subcenterid) AND ("outer".centerid = "inner".centerid))"
"              ->  Sort  (cost=43.69..45.19 rows=600 width=80) (actual time=0.000..0.000 rows=1 loops=1)"
"                    Sort Key: datasetdir.gridid, datasetdir.subcenterid, datasetdir.centerid"
"                    ->  Seq Scan on mdgriddatasetdir datasetdir  (cost=0.00..16.00 rows=600 width=80) (actual time=0.000..0.000 rows=1 loops=1)"
"              ->  Sort  (cost=34.94..36.09 rows=460 width=26) (actual time=0.000..0.000 rows=50 loops=1)"
"                    Sort Key: reg.gridid, reg.subcenterid, reg.centerid"
"                    ->  Seq Scan on mdgridregistrtions reg  (cost=0.00..14.60 rows=460 width=26) (actual time=0.000..0.000 rows=50 loops=1)"
"        ->  Index Scan using mdgridaois_pkey on mdgridaois aois  (cost=0.00..5.87 rows=1 width=56) (actual time=0.000..0.000 rows=1 loops=1)"
"              Index Cond: (aois.aoiid = "outer".aoiid)"
"              Filter: ((northlat > 18.0) AND (southlat < 30.0) AND (round((((distance(((eastlon - westlon))::double precision) + distance(((116.0 - eastlon))::double precision)) + distance(12::double precision)) + distance(((westlon - 128.0))::doublepreci (..)"
"  ->  Index Scan using bobs_grid_index on g580000266058 grids  (cost=0.00..6.01 rows=1 width=116) (actual time=0.000..0.000 rows=2818 loops=1)"
"        Index Cond: (("outer".datasetid = grids.datasetid) AND (grids.validtime >= 1197867600) AND (grids.validtime <= 1197954000))"
"Total runtime: 16.000 ms"

Now we added a couple of indexes, and it seemingly slowed down, but the estimated cost was lower:

"Nested Loop  (cost=0.00..15.28 rows=1 width=256) (actual time=0.000..110.000 rows=2818 loops=1)"
"  ->  Nested Loop  (cost=0.00..9.40 rows=1 width=208) (actual time=0.000..32.000 rows=2818 loops=1)"
"        ->  Nested Loop  (cost=0.00..3.38 rows=1 width=100) (actual time=0.000..0.000 rows=1 loops=1)"
"              Join Filter: (("outer".centerid = "inner".centerid) AND ("outer".subcenterid = "inner".subcenterid) AND ("outer".gridid = "inner".gridid))"
"              ->  Seq Scan on mdgriddatasetdir datasetdir  (cost=0.00..1.01 rows=1 width=80) (actual time=0.000..0.000 rows=1 loops=1)"
"              ->  Seq Scan on mdgridregistrtions reg  (cost=0.00..1.50 rows=50 width=26) (actual time=0.000..0.000 rows=50 loops=1)"
"        ->  Index Scan using bobs_grid_index on g580000266058 grids  (cost=0.00..6.01 rows=1 width=116) (actual time=0.000..0.000 rows=2818 loops=1)"
"              Index Cond: (("outer".datasetid = grids.datasetid) AND (grids.validtime >= 1197867600) AND (grids.validtime <= 1197954000))"
"  ->  Index Scan using mdgridaois_pkey on mdgridaois aois  (cost=0.00..5.87 rows=1 width=56) (actual time=0.022..0.028 rows=1 loops=2818)"
"        Index Cond: (aois.aoiid = "outer".aoiid)"
"        Filter: ((northlat > 18.0) AND (southlat < 30.0) AND (round((((distance(((eastlon - westlon))::double precision) + distance(((116.0 - eastlon))::double precision)) + distance(12::double precision)) + distance(((westlon - 128.0))::double precision)) (..)"
"Total runtime: 110.000 ms"
Thurber, Fred wrote:
> Now we added a couple of indexes, and it seemingly slowed down, but the
> estimated cost was lower:

Which indexes did you add? As far as I can see, only bobs_grid_index is used in both plans. And just to mention it - did you run VACUUM and/or ANALYZE?

Cheers
Andy
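To answer the "which indexes are actually used" question empirically, the statistics views can help. A minimal sketch (table name taken from the plans above; requires the statistics collector, which is on by default):

```sql
-- Per-index scan counts since the counters were last reset;
-- an index with idx_scan = 0 has never been chosen by the planner.
SELECT indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
WHERE relname = 'mdgridaois';
```

If the newly added indexes show idx_scan = 0 after running the query, the plan change came from something else (e.g. the ANALYZE that CREATE INDEX may have been accompanied by, or shifted cost estimates), not from the indexes themselves.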