Thread: Very Very Slow Database Restore
I have inherited the ongoing management of our company's PostgreSQL 8.0 database, and I'm having a very serious problem with the restore process. I've been running periodic pg_dumps from our production database and then restoring the data on our development system. That had been working fine until now.

Previously, running:

    pg_dump -U username database > db.backup

would give me an 800+ MB file, and I could restore it with:

    psql -U username -d database < db.backup

which took 15 minutes or so. Now our database has grown, pg_dump gives me a 2.1 GB file, and the restore is taking 12 hours or more. Yikes!

I've tried compressing the dump file:

    pg_dump -F c -U username database > db.backup

and then running:

    pg_restore -U username -d database db.backup

but it made no difference; if anything it's probably slower, because the restore has to decompress the dump.

The database configuration is pretty much out of the box. The only thing that's been changed is the number of allowed db connections, which I don't think has much bearing on this problem.

Any thoughts about this problem would be appreciated. Thanks
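One advantage of the custom-format dump tried above is that pg_restore can list the archive's contents and selectively replay them, which a plain SQL dump cannot. A minimal sketch, reusing the file and database names from the post:

    # list the archive's table of contents
    pg_restore -l db.backup > db.list
    # edit db.list (comment out unwanted items), then restore only what remains
    pg_restore -U username -d database -L db.list db.backup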
Joseph McClintock <joe.mcclintock@antics.com> writes:
> Now our database has grown, pg_dump gives me a 2.1 GB file, and the
> restore is taking 12 hours or more. Yikes! I've tried compressing the
> dump file:

Compressing the dump file will make the restore slower, most likely, because of the extra CPU effort to decompress. It certainly won't create any great savings.

> The database configuration is pretty much out of the box.

You probably want to increase maintenance_work_mem (to speed index creation) and checkpoint_segments (a general rule for any update-intensive situation) ... and for 8.0 you'd maybe need larger shared_buffers etc. The postgresql techdocs website has lots of general advice about parameter settings --- the only thing specific to restore is that you want high maintenance_work_mem.

        regards, tom lane
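To make the parameter advice concrete, here is a sketch of the relevant postgresql.conf lines for 8.0. The numbers are illustrative only and should be tuned to the machine's RAM; shared_buffers requires a postmaster restart to take effect:

    # postgresql.conf (8.0) -- illustrative values, not recommendations
    shared_buffers = 20000            # counted in 8 kB pages, so roughly 160 MB
    maintenance_work_mem = 262144     # in kB, so 256 MB; speeds CREATE INDEX
    checkpoint_segments = 32          # fewer, larger checkpoints during bulk loads

maintenance_work_mem can also be raised for just the restore session, without editing the config file, by passing it through libpq's PGOPTIONS:

    PGOPTIONS="-c maintenance_work_mem=262144" psql -U username -d database < db.backup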
Thanks for the feedback. Doubling maintenance_work_mem from 10240 to 20480 seemed to make the process faster, but then I ran out of disk space. shared_buffers is at 2000, which I can try increasing.

I now need to move the database off the system partition to my working partition, which has more space. I hope I don't have to re-install Postgres; that would be a real bummer.

--Joe

On Thu, 2006-12-21 at 00:47 -0500, Tom Lane wrote:
> Joseph McClintock <joe.mcclintock@antics.com> writes:
> > Now our database has grown, pg_dump gives me a 2.1 GB file, and the
> > restore is taking 12 hours or more. Yikes! I've tried compressing the
> > dump file:
>
> Compressing the dump file will make the restore slower, most likely,
> because of the extra CPU effort to decompress. It certainly won't
> create any great savings.
>
> > The database configuration is pretty much out of the box.
>
> You probably want to increase maintenance_work_mem (to speed index
> creation) and checkpoint_segments (a general rule for any
> update-intensive situation) ... and for 8.0 you'd maybe need larger
> shared_buffers etc. The postgresql techdocs website has lots of
> general advice about parameter settings --- the only thing specific
> to restore is that you want high maintenance_work_mem.
>
>         regards, tom lane
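A quick way to confirm what the server is actually running with, and to see where the disk space went. The data directory path is an example; the real location varies by install:

    # confirm the live settings
    psql -U username -c "SHOW maintenance_work_mem;"
    psql -U username -c "SHOW shared_buffers;"
    # free space per partition, and the size of the cluster itself
    df -h
    du -sh /var/lib/pgsql/data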
On 12/21/06, Joseph McClintock <joe.mcclintock@antics.com> wrote:
> Thanks for the feedback. Doubling maintenance_work_mem from 10240 to
> 20480 seemed to make the process faster, but then I ran out of disk
> space. shared_buffers is at 2000, which I can try increasing.
>
> I now need to move the database off the system partition to my working
> partition, which has more space. I hope I don't have to re-install
> Postgres; that would be a real bummer.
>

Hi Joseph, moving the database cluster to another partition is very simple with PostgreSQL. Shut down the postmaster; copy your database cluster directory to the new location, preserving ownership and permissions; update the $PGDATA variable for the postgres user to the new location; then start the postmaster again. Enjoy ;)

Best Regards and Merry Christmas

Federico Campoli
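Federico's procedure as shell commands, for reference. All paths are examples, and the copy must be taken with the postmaster fully stopped; no reinstall is needed, since only the data directory moves, not the binaries:

    # 1. stop the postmaster
    pg_ctl -D /var/lib/pgsql/data stop
    # 2. copy the cluster, preserving ownership and permissions (cp -a also works)
    cp -Rp /var/lib/pgsql/data /mnt/bigdisk/pgdata
    # 3. point PGDATA at the new location, e.g. in ~postgres/.bash_profile
    export PGDATA=/mnt/bigdisk/pgdata
    # 4. start the postmaster from the new location
    pg_ctl -D $PGDATA start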