Thread: replication slots replicated to standbys?
Someone reported that a replication slot that existed at the time a base
backup was done on the master was copied to the standby. Because they
didn't realize it, their WAL was not being recycled on the standby.

Is that possible? Is it a known behavior? I don't see it documented.

--
Bruce Momjian <bruce@momjian.us>
On Sat, Aug 20, 2016 at 1:39 PM, Bruce Momjian <bruce@momjian.us> wrote:
> Someone reported that a replication slot that existed at the time a base
> backup was done on the master was copied to the standby. Because they
> didn't realize it, their WAL was not being recycled on the standby.
>
> Is that possible? Is it a known behavior? I don't see it documented.

From backup.sgml:
    <para>
     It is often a good idea to also omit from the backup the files
     within the cluster's <filename>pg_replslot/</> directory, so that
     replication slots that exist on the master do not become part of the
     backup. Otherwise, the subsequent use of the backup to create a standby
     may result in indefinite retention of WAL files on the standby, and
     possibly bloat on the master if hot standby feedback is enabled, because
     the clients that are using those replication slots will still be
     connecting to and updating the slots on the master, not the standby.
     Even if the backup is only intended for use in creating a new master,
     copying the replication slots isn't expected to be particularly useful,
     since the contents of those slots will likely be badly out of date by
     the time the new master comes on line.
    </para>

Note as well that pg_basebackup omits the contents of pg_replslot/ and
creates an empty directory.

--
Michael
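In practice, a slot that rode along in a base backup can be spotted and
dropped on the standby like this (a minimal sketch; 'copied_slot' is a
hypothetical slot name):

    # List slots present on this node; restart_lsn shows the WAL
    # position each slot is holding back from recycling.
    psql -c "SELECT slot_name, slot_type, active, restart_lsn
             FROM pg_replication_slots;"

    # Drop a stray slot that no client actually uses on this node.
    psql -c "SELECT pg_drop_replication_slot('copied_slot');"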
On Sat, Aug 20, 2016 at 01:43:42PM +0900, Michael Paquier wrote:
> On Sat, Aug 20, 2016 at 1:39 PM, Bruce Momjian <bruce@momjian.us> wrote:
> > Someone reported that a replication slot that existed at the time a base
> > backup was done on the master was copied to the standby. Because they
> > didn't realize it, their WAL was not being recycled on the standby.
> >
> > Is that possible? Is it a known behavior? I don't see it documented.
>
> From backup.sgml:
>     <para>
>      It is often a good idea to also omit from the backup the files
>      within the cluster's <filename>pg_replslot/</> directory, so that
>      replication slots that exist on the master do not become part of the
>      backup. Otherwise, the subsequent use of the backup to create a standby
>      may result in indefinite retention of WAL files on the standby, and
>      possibly bloat on the master if hot standby feedback is enabled, because
>      the clients that are using those replication slots will still be
>      connecting to and updating the slots on the master, not the standby.
>      Even if the backup is only intended for use in creating a new master,
>      copying the replication slots isn't expected to be particularly useful,
>      since the contents of those slots will likely be badly out of date by
>      the time the new master comes on line.
>     </para>
>
> Note as well that pg_basebackup omits the contents of pg_replslot/ and
> creates an empty directory.

Seems like another good idea to use pg_basebackup rather than manually
doing base backups; Magnus has been saying this for a while.

I suppose there is no way we could remove this error-prone behavior,
because replication slots must survive server restarts. Is there no way
to know if we are starting a standby from a fresh base backup vs.
restarting a standby? In that case we could clear the replication
slots. Are there any other error-prone things copied from the master?

--
Bruce Momjian <bruce@momjian.us>
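A minimal sketch of that approach, with a hypothetical host and paths
(pg_basebackup leaves pg_replslot/ empty in the result, so no slots leak
into the new standby):

    # Stream a base backup from the master and write a recovery.conf
    # (-R) so the copy starts up as a standby.
    pg_basebackup -h master.example.com -U replication_user \
        -D /var/lib/postgresql/standby -X stream -R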
<p dir="ltr"><p dir="ltr">On 21 Aug 2016 12:36 AM, "Bruce Momjian" <<a href="mailto:bruce@momjian.us">bruce@momjian.us</a>>wrote:<br /> ><br /> > On Sat, Aug 20, 2016 at 01:43:42PM +0900,Michael Paquier wrote:<br /> > > On Sat, Aug 20, 2016 at 1:39 PM, Bruce Momjian <<a href="mailto:bruce@momjian.us">bruce@momjian.us</a>>wrote:<br /> > > > Someone reported that a replication slotthat existed at the time a base<br /> > > > backup was done on the master was copied to the standby. Becausethey<br /> > > > didn't realize it, their WAL was not being recycled on the standby.<br /> > > ><br/> > > > Is that possible? Is it a known behavior? I don't see it documented.<br /> > ><br /> >> >From backup.sgml:<br /> > > <para><br /> > > It is often a good idea to also omit fromthe backup the files<br /> > > within the cluster's <filename>pg_replslot/</> directory, so that<br/> > > replication slots that exist on the master do not become part of the<br /> > > backup. Otherwise, the subsequent use of the backup to create a standby<br /> > > may result in indefinite retentionof WAL files on the standby, and<br /> > > possibly bloat on the master if hot standby feedback is enabled,because<br /> > > the clients that are using those replication slots will still be connecting<br /> >> to and updating the slots on the master, not the standby. Even if the<br /> > > backup is only intendedfor use in creating a new master, copying the<br /> > > replication slots isn't expected to be particularlyuseful, since the<br /> > > contents of those slots will likely be badly out of date by the time<br/> > > the new master comes on line.<br /> > > </para><br /> > ><br /> > > Noteas well that pg_basebackup omits its content and creates an empty<br /> > > directory.<br /> ><br /> > Seemslike another good idea to use pg_basebackup rather than manually<br /> > doing base backups; Magnus has been sayingthis for a while.<p dir="ltr">The main time that's an issue is when you're rsync'ing to save bandwidth, using CoW volumesnapshots, etc. pg_basebackup becomes totally impractical on big systems.<p dir="ltr">> I supposed there is no waywe could remove this error-prone behavior<br /> > because replication slots must survive server restarts. Is thereno way<br /> > to know if we are starting a standby from a fresh base backup vs.<br /> > restarting a standby? In that case we could clear the replication<br /> > slots. Are there any other error-prone things copied fromthe master?<p dir="ltr">We could remove slots when we enter archive recovery. But I've recently implememted support forlogical decoding from standbys, which needs slots. Physical slot use on standby is also handy. We cannot tell whethera slot was created on the replica or created on the master and copied in the base backup and don't want to drop slotscreated on the replica.<p dir="ltr">I also have use cases for slots being retained in restore from snapshot, for re-integratingrestored nodes into an MM mesh.<p dir="ltr">I think a recovery.conf option to remove all slots during archiverecovery could be handy. But mostly it comes down to tools not copying them.
On Sun, Aug 21, 2016 at 1:24 PM, Craig Ringer <craig.ringer@2ndquadrant.com> wrote: > On 21 Aug 2016 12:36 AM, "Bruce Momjian" <bruce@momjian.us> wrote: >> Seems like another good idea to use pg_basebackup rather than manually >> doing base backups; Magnus has been saying this for a while. > > The main time that's an issue is when you're rsync'ing to save bandwidth, > using CoW volume snapshots, etc. pg_basebackup becomes totally impractical > on big systems. Yes, and that's not fun. Particularly when the backup takes so long that WAL has already been recycled... Replication slots help here but the partitions dedicated to pg_xlog have their limit as well. >> I supposed there is no way we could remove this error-prone behavior >> because replication slots must survive server restarts. Is there no way >> to know if we are starting a standby from a fresh base backup vs. >> restarting a standby? In that case we could clear the replication >> slots. Are there any other error-prone things copied from the master? > > We could remove slots when we enter archive recovery. But I've recently > implemented support for logical decoding from standbys, which needs slots. > Physical slot use on standby is also handy. We cannot tell whether a slot > was created on the replica or created on the master and copied in the base > backup and don't want to drop slots created on the replica. > > I also have use cases for slots being retained in restore from snapshot, for > re-integrating restored nodes into an MM mesh. > > I think a recovery.conf option to remove all slots during archive recovery > could be handy. But mostly it comes down to tools not copying them. Yes, I'd personally let recovery.conf out of that, as well as the removal of replication slot data when archive recovery begins to keep the configuration simple. The decision-making of the data included in any backup will be done by the tool itself anyway.. -- Michael
On Sun, Aug 21, 2016 at 1:35 AM, Bruce Momjian <bruce@momjian.us> wrote:
> On Sat, Aug 20, 2016 at 01:43:42PM +0900, Michael Paquier wrote:
>> Note as well that pg_basebackup omits the contents of pg_replslot/ and
>> creates an empty directory.
>
> Are there any other error-prone things copied from the master?

The contents of pg_snapshots/ get copied by pg_basebackup. Those are
useless in a backup, but harmless.

--
Michael
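Exported snapshots are only valid while the exporting transaction is
still alive on the original server, so any snapshot files copied into a
backup can simply be cleared before the standby is started (a sketch;
$PGDATA is the restored data directory):

    # Inspect and remove snapshot files carried over in the backup.
    ls "$PGDATA"/pg_snapshots/
    rm -f "$PGDATA"/pg_snapshots/*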
On 22 August 2016 at 10:31, Michael Paquier <michael.paquier@gmail.com> wrote:
> On Sun, Aug 21, 2016 at 1:24 PM, Craig Ringer
> <craig.ringer@2ndquadrant.com> wrote:
>> On 21 Aug 2016 12:36 AM, "Bruce Momjian" <bruce@momjian.us> wrote:
>>> Seems like another good idea to use pg_basebackup rather than manually
>>> doing base backups; Magnus has been saying this for a while.
>>
>> The main time that's an issue is when you're rsync'ing to save bandwidth,
>> using CoW volume snapshots, etc. pg_basebackup becomes totally impractical
>> on big systems.
>
> Yes, and that's not fun. Particularly when the backup takes so long
> that WAL has already been recycled... Replication slots help here, but
> the partitions dedicated to pg_xlog have their limits as well.
We can and probably should allow XLogReader to invoke restore_command to fetch WAL, read it, and discard/recycle it again. This would greatly alleviate the pain of indefinite xlog retention.
It's a pain to do so while recovery.conf is its own separate magic though, not part of postgresql.conf.
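For reference, the recovery.conf mechanics in question look roughly like
this; the idea above would let the startup process's WAL reader call the
same command on demand instead of keeping everything in pg_xlog (a sketch
with a hypothetical archive path):

    # recovery.conf -- the server expands %f to the WAL segment name
    # and %p to the destination path where the segment should land.
    standby_mode = 'on'
    restore_command = 'cp /mnt/wal_archive/%f "%p"'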
I have no plans to work on this at this time.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services