Re: Upgrade from PG12 to PG - Mailing list pgsql-admin
From:           Jef Mortelle
Subject:        Re: Upgrade from PG12 to PG
Date:
Msg-id:         1cb1652e-286a-077e-950d-516970ecf499@gmail.com
In response to: Re: Upgrade from PG12 to PG (Scott Ribe <scott_ribe@elevated-dev.com>)
List:           pgsql-admin
Running /usr/lib/postgresql15/bin/pg_upgrade -v -p 5431 -P 5432 -k gives you in the output:

...
"/usr/lib/postgresql15/bin/pg_dump" --host /pg/PG15 --port 5431 --username postgres --schema-only --quote-all-identifiers --binary-upgrade --format=custom --verbose --file="/pg/PG15/system/pg_upgrade_output.d/20230720T184333.610/dump/pg_upgrade_dump_16385.custom" 'dbname=dbname' >> "/pg/PG15/system/pg_upgrade_output.d/20230720T184333.610/log/pg_upgrade_dump_16385.log" 2>&1
...

So yes, pg_upgrade starts a pg_dump session, and even a pg_restore at the end, which runs for about 7 hours:

"/usr/lib/postgresql15/bin/pg_restore" --host /pg/PG15 --port 5432 --username postgres --create --exit-on-error --verbose --dbname template1 "/pg/PG15/system/pg_upgrade_output.d/20230720T184333.610/dump/pg_upgrade_dump_16385.custom" >> "/pg/PG15/system/pg_upgrade_output.d/20230720T184333.610/log/pg_upgrade_dump_16385.log" 2>&1

with a lot of lines in the log file like:

pg_restore: executing BLOB 11108809

Hardware: the server is a VM with 64 GB of RAM running SUSE SLES, attached to a SAN with SSD disks (HP 3PAR).

2) vacuumlo -n => I have vacuumlo; running this: vacuumlo dbname -v

postgres@dbfluxd02:/pg/data> vacuumlo dflux -v
Connected to database "dflux"
Checking message_oid in fluxnaf.flux_message_content
Checking payload_oid in fluxnaf.bridge_payload_content
Successfully removed 0 large objects from database "dflux".

So no orphaned large objects, and again the same issue.

On 20/07/2023 16:51, Scott Ribe wrote:
>> On Jul 20, 2023, at 7:46 AM, Jef Mortelle <jefmortelle@gmail.com> wrote:
>>
>> So: not possible to have very little downtime if you have a database with a lot of rows containing text as datatype, as pg_upgrade needs 12 hr for 24 million rows in pg_largeobject.
>
> We need to get terminology straight, as at the moment your posts are very confusing. In PostgreSQL, large objects and text are not the same. Text is basically varchar without a specified length limit. A large object is a blob (but not what SQL calls a BLOB) -- it is kind of like a file stored outside the normal table mechanism, and it provides facilities for partial reads, etc.: https://www.postgresql.org/docs/15/largeobjects.html. There are a number of ways to wind up with references to large objects all deleted, but the orphaned large objects still in the database.
>
> First thing you should do: run vacuumlo -n to find out if you have orphaned large objects. If so, start cleaning those up, then see how long pg_upgrade takes.
>
> Second, what's your hardware? I really don't see dump & restore of a 1 TB database taking 6 hours.
>
>> Already tried to use --link and --jobs, but you cannot omit the "select lo_unlink ..." for every row containing datatype text in your database that the pg_* programs create in the export/dump file.
>
> Terminology again, or are you conflating two different issues? pg_upgrade --link does not create a dump file.
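PS: to see how much of that 7-hour restore is per-large-object overhead, it helps to first count the large objects on the source cluster. A minimal sketch, assuming the old cluster still listens on port 5431 and using the dflux database from the session above:

    # count large objects on the source cluster; each one should show up
    # as its own "BLOB <oid>" entry in pg_upgrade's schema dump/restore
    psql -p 5431 -d dflux -c "SELECT count(*) FROM pg_largeobject_metadata;"

    # dry run: report orphaned large objects without removing anything
    vacuumlo -n -v -p 5431 dflux

Each row in pg_largeobject_metadata should correspond to one "pg_restore: executing BLOB <oid>" line in the log, so 24 million large objects means 24 million such entries to replay.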
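On the text vs. large object distinction Scott describes, the difference is easy to reproduce in a scratch database (all table names and the database name below are made up for illustration):

    psql -p 5431 -d scratchdb <<'SQL'
    -- text: an ordinary column value, dumped together with the table
    CREATE TABLE t_text (id int, body text);
    INSERT INTO t_text VALUES (1, 'some long string');

    -- large object: a separate entity in pg_largeobject, referenced by OID
    CREATE TABLE t_lo (id int, body_oid oid);
    INSERT INTO t_lo VALUES (1, lo_from_bytea(0, 'some long string'::bytea));

    -- deleting the row does NOT delete the large object: it stays behind,
    -- orphaned, until lo_unlink(oid) or vacuumlo cleans it up
    DELETE FROM t_lo;
    SQL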
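And on --link: a hard-link upgrade avoids copying user data files, but the schema dump/restore shown above still runs. A rough sketch of the invocation, where the binary and data directory paths are assumptions and only the ports come from this thread:

    # -b/-B/-d/-D paths are guesses; adjust to the actual clusters
    /usr/lib/postgresql15/bin/pg_upgrade \
        -b /usr/lib/postgresql12/bin -B /usr/lib/postgresql15/bin \
        -d /pg/PG12/data -D /pg/PG15/data \
        -p 5431 -P 5432 --link --jobs=4

Note that --jobs parallelizes across databases and tablespaces, so with one big database it mainly speeds up file linking, not the per-BLOB schema restore.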