Re: db size and VACUUM ANALYZE - Mailing list pgsql-general

From: Greg Smith
Subject: Re: db size and VACUUM ANALYZE
Msg-id: 4B765295.5050806@2ndquadrant.com
In response to: Re: db size and VACUUM ANALYZE (Amitabh Kant <amitabhkant@gmail.com>)
List: pgsql-general
Amitabh Kant wrote:
> You need to do VACUUM FULL ANALYZE to reclaim the disk space, but this
> creates an exclusive lock on the tables.
> See http://www.postgresql.org/docs/8.3/static/sql-vacuum.html

First off, you don't need the ANALYZE in there.

Second, VACUUM FULL is a terrible way to fix a table that's seriously
bloated--it will take forever to run.  Use CLUSTER to accomplish the
same thing much faster; it rewrites the table from scratch, which is
basically the same thing the dump/restore step that's restoring good
performance to the database does.
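
A minimal sketch of that approach, assuming a hypothetical table named
orders clustered on its primary key index orders_pkey (substitute your
own table and index names):

    -- Rewrite the table in index order, leaving the dead space behind.
    -- Note that CLUSTER holds an exclusive lock on the table while it runs.
    CLUSTER orders USING orders_pkey;

    -- The data has physically moved, so refresh the planner statistics.
    ANALYZE orders;

Keep in mind that CLUSTER needs roughly enough free disk space to hold a
second copy of the table while it works.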

Before doing that, I would run a VACUUM VERBOSE on the whole cluster and
see if there are any max_fsm_pages warnings at the end of its output.
That setting might be too low--for example, if large deletions are done
in batches--and that could ultimately be the true cause of this problem.
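
Checking that looks something like the following; the exact wording of
the warning varies, but on 8.3 the free space map summary shows up near
the end of the VACUUM VERBOSE output:

    -- Run from psql, ideally as a superuser so every table is covered;
    -- watch the end of the output for the free space map summary and any
    -- warning that more page slots are needed than max_fsm_pages allows.
    VACUUM VERBOSE;

    -- Current free space map settings, for comparison.
    SHOW max_fsm_pages;
    SHOW max_fsm_relations;

If the reported numbers exceed the current settings, raise max_fsm_pages
in postgresql.conf; that parameter only takes effect at server start.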

In general, the answer to most "why is my database getting too big/slow
after it's been up for a while?" questions is "you aren't vacuuming
often enough".  Is autovacuum on?  Are there any long-running
transactions that keep it from working?  There are use patterns where
that's still not good enough, but those are less common than the case
where the basics (use autovacuum and make sure the FSM parameters are
set correctly) just aren't being done.
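
A couple of quick checks for both of those, written against the 8.3-era
catalog (pg_stat_activity column names such as procpid and current_query
changed in later releases):

    -- Is autovacuum turned on at all?
    SHOW autovacuum;

    -- Look for transactions that have stayed open a long time; anything
    -- measured in hours can keep VACUUM from removing dead rows.
    SELECT procpid, now() - xact_start AS xact_age, current_query
      FROM pg_stat_activity
     WHERE xact_start IS NOT NULL
     ORDER BY xact_start
     LIMIT 10;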

--
Greg Smith    2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com  www.2ndQuadrant.com

