Re: Bad choice of query plan from PG 7.3.6 to PG 7.3.9 - Mailing list pgsql-performance
From: Christopher Kings-Lynne
Subject: Re: Bad choice of query plan from PG 7.3.6 to PG 7.3.9
Date:
Msg-id: 427B2D9E.2030804@familyhealth.com.au
In response to: Re: Bad choice of query plan from PG 7.3.6 to PG 7.3.9 (Jona <jonanews@oismail.com>)
Responses: Re: Bad choice of query plan from PG 7.3.6 to PG 7.3.9
List: pgsql-performance
You didn't do analyze.

Chris

Jona wrote:
> Results of VACUUM VERBOSE from both servers
>
> Test server:
> comm=# VACUUM VERBOSE StatCon_Tbl;
> INFO: --Relation public.statcon_tbl--
> INFO: Pages 338: Changed 338, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.
>       Total CPU 0.02s/0.00u sec elapsed 0.04 sec.
> INFO: --Relation pg_toast.pg_toast_179851--
> INFO: Pages 85680: Changed 85680, Empty 0; Tup 343321: Vac 0, Keep 0, UnUsed 0.
>       Total CPU 4.03s/0.40u sec elapsed 70.99 sec.
> VACUUM
>
> Live server:
> comm=# VACUUM VERBOSE StatCon_Tbl;
> INFO: --Relation public.statcon_tbl--
> INFO: Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.
>       Total CPU 0.01s/0.00u sec elapsed 0.60 sec.
> INFO: --Relation pg_toast.pg_toast_891830--
> INFO: Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, UnUsed 5487.
>       Total CPU 4.44s/0.34u sec elapsed 35.48 sec.
> VACUUM
>
> Cheers
> Jona
>
> Tom Lane wrote:
>
>> Jona <jonanews@oismail.com> writes:
>>
>>> I'm currently experiencing problems with long query execution times.
>>> What I believe makes these problems particularly interesting is the
>>> difference in execution plans between our test server running PostgreSQL
>>> 7.3.6 and our production server running PostgreSQL 7.3.9.
>>> The test server is an upgraded "home machine", a Pentium 4 with 1GB of
>>> memory and an IDE disk.
>>> The production server is a dual-CPU Xeon Pentium 4 with 2GB of memory and
>>> SCSI disks.
>>> One would expect the production server to be faster, but apparently it
>>> is not, as the query plans outlined below show.
>>
>> I think the plans are fine; it looks to me like the production server
>> has serious table-bloat or index-bloat problems, probably because of
>> inadequate vacuuming. For instance, compare these entries:
>>
>> -> Index Scan using ctp_statcon on statcon_tbl (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)
>>      Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))
>>
>> -> Index Scan using ctp_statcon on statcon_tbl (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)
>>      Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))
>>
>> That appears to be exactly the same task ... but the test server spent
>> 1.24 msec total while the production server spent 687.36 msec total.
>> That's more than half of your problem right there. Some of the other
>> scans seem a lot slower on the production machine too.
>>
>>> 1) How come the query plans between the 2 servers are different?
>>
>> The production server's rowcount estimates are pretty good, the test
>> server's are not. How long since you vacuumed/analyzed the test server?
>>
>> It'd be interesting to see the output of "vacuum verbose statcon_tbl"
>> on both servers ...
>>
>>             regards, tom lane
>>
>> PS: if you post any more query plans, please try to use software that
>> doesn't mangle the formatting so horribly ...
>>
>> ---------------------------(end of broadcast)---------------------------
>> TIP 4: Don't 'kill -9' the postmaster