Re: pglogical performance for copying large table - Mailing list pgsql-admin

From srinivas oguri
Subject Re: pglogical performance for copying large table
Date
Msg-id CADfH0ysdH6A8WgNPhT9FNrJjEsGZaNqtXd60jJ-J-mCuZKkA0g@mail.gmail.com
In response to Re: pglogical performance for copying large table  (Jeff Janes <jeff.janes@gmail.com>)
List pgsql-admin
>> What does this mean in terms of parameters? Are all of them being used?

No; in practice the initial sync is restricted to a single process running the COPY command.

>> Pg_dump and pg_restore
This is our largest database, at about 20 TB. We would like to configure replication so that we can switch over with minimal downtime.

Is it possible to configure logical replication using pg_dump for the initial data copy? Could you please help me with detailed steps?
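[For context, the usual shape of this procedure with built-in logical replication is sketched below; it is not from this thread, and the publication, slot, table, and host names are all hypothetical. The key idea is to create the logical slot first so it exports a consistent snapshot, dump with that snapshot, restore, and then create the subscription with copy_data = false so streaming resumes from the slot instead of re-copying 20 TB. pglogical offers the analogous option synchronize_data := false.]

```shell
# On the publisher: publish the large table (names are examples)
psql -h publisher -d mydb -c "CREATE PUBLICATION big_pub FOR TABLE big_table;"

# Open a replication-protocol session and create the slot there.
# CREATE_REPLICATION_SLOT ... LOGICAL exports a snapshot name; the snapshot
# stays valid only while this session remains open, so keep this psql
# session running (e.g. in another terminal) until pg_dump finishes.
psql "host=publisher dbname=mydb replication=database"
#   => CREATE_REPLICATION_SLOT big_slot LOGICAL pgoutput;
#   note the snapshot_name column in the result

# Dump the table as of that exported snapshot (snapshot name is illustrative)
pg_dump -h publisher -d mydb --snapshot=00000003-0000001A-1 \
        -t big_table -Fc -f big_table.dump

# Restore on the subscriber; -j parallelizes index rebuilds
pg_restore -h subscriber -d mydb -j 8 big_table.dump

# Create the subscription reusing the slot, without the initial table copy
psql -h subscriber -d mydb -c "CREATE SUBSCRIPTION big_sub \
  CONNECTION 'host=publisher dbname=mydb' PUBLICATION big_pub \
  WITH (create_slot = false, slot_name = 'big_slot', copy_data = false);"
```

Because the slot was created before the dump's snapshot, all changes made during and after the dump are retained in WAL and streamed once the subscription starts, so no rows are lost between the restore and catch-up.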



On Tue, Feb 14, 2023, 4:07 AM Jeff Janes <jeff.janes@gmail.com> wrote:


On Mon, Feb 13, 2023 at 1:30 PM srinivas oguri <srinivasoguri7@gmail.com> wrote:

I have set the number of parallel processes for logical replication to 12.

What does this mean in terms of parameters? Are all of them being used?
 
Cheers,

Jeff
