Improving pg_dump performance - Mailing list pgsql-general

Hi,

We've got an old (v8.4.17, thus no parallel backups) 2.9TB database that 
needs to be migrated to a new data center and then restored to v9.6.9.

The database has many large tables full of bytea columns containing PDF 
images, and since pg_dump writes bytea out as escaped text rather than raw 
binary, the dump file is going to be more than 2x larger than the existing 
data/base directory...


The command is:
$ pg_dump -v -Z0 -Fc $DB --file=${TARGET}/${DATE}_${DB}.dump \
      2> ${DATE}_${DB}.log

Using -Z0 because the PDF files are already compressed.

Because of an intricate web of FK constraints and partitioned tables, the 
customer doesn't trust a set of "partitioned" backups using --table= and 
regular expressions (the names of those big tables all contain the year), 
so I am stuck with a single-threaded backup. A sketch of what that 
partitioned approach might look like follows.

Are there any config file settings I can tweak (extra points for not 
having to restart postgres) to make it run faster? Or is there something 
about how pg_restore works that I could use to convince them to let me do 
the partitioned backups?
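On that last point, one fact that may help the argument: a custom-format 
(-Fc) archive records object dependencies in its TOC, so pg_restore can 
replay it with parallel jobs even though the dump itself was 
single-threaded, ordering data loads, index builds, and FK creation from 
the recorded dependencies. Something like (the job count is just an 
example):

  $ pg_restore -j 8 -d $DB ${TARGET}/${DATE}_${DB}.dump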

Lastly, is there any way to make the backups smaller (maybe by using the 
--binary-upgrade option, even though the man page says, "in-place 
upgrades only")?
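If smaller isn't achievable, one workaround I've considered (assuming the 
new data center is reachable over ssh; the hostname and path below are 
hypothetical) is to stream the dump straight to the target host, so the 
oversized file never has to be staged on local disk:

  $ pg_dump -v -Z0 -Fc $DB 2> ${DATE}_${DB}.log \
        | ssh newdc "cat > /backups/${DATE}_${DB}.dump"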

-- 
Angular momentum makes the world go 'round.

