Thread: [GENERAL] Interesting streaming replication issue
Hi all,

I've got two servers (A, B) which are part of a streaming replication pair. A is the master, B is a hot standby. I'm sending archived WAL to a directory on A, and B is reading it via SCP.
This all works fine normally. I'm on Red Hat 7.3, running EDB 9.6.2 (I'm currently working to reproduce with standard 9.6).
We have recently seen a situation where B does not catch up when taken offline for maintenance.
When B is started we see the following in the logs:
2017-07-27 11:55:57 AEST [21432]: [979-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000005A000000AA" from archive
2017-07-27 11:55:58 AEST [21432]: [980-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000005A000000AB" from archive
2017-07-27 11:55:58 AEST [21432]: [981-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000005A000000AC" from archive
2017-07-27 11:55:59 AEST [21432]: [982-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000005A000000AD" from archive
2017-07-27 11:55:59 AEST [21432]: [983-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000005A000000AE" from archive
2017-07-27 11:56:00 AEST [21432]: [984-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000005A000000AF" from archive
2017-07-27 11:56:00 AEST [21432]: [985-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000005A000000B0" from archive
2017-07-27 11:56:01 AEST [21432]: [986-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000005A000000B1" from archive
2017-07-27 11:56:01 AEST [21432]: [987-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000005A000000B2" from archive
2017-07-27 11:56:02 AEST [21432]: [988-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000005A000000B3" from archive
2017-07-27 11:56:02 AEST [21432]: [989-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000005A000000B4" from archive
2017-07-27 11:56:03 AEST [21432]: [990-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000005A000000B5" from archive
scp: /archive/xlog//0000000C0000005A000000B6: No such file or directory
2017-07-27 11:56:03 AEST [46191]: [1-1] user=,db=,client= (0:00000)LOG: started streaming WAL from primary at 5A/B5000000 on timeline 12
2017-07-27 11:56:03 AEST [46191]: [2-1] user=,db=,client= (0:XX000)FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 0000000C0000005A000000B5 has already been removed
scp: /archive/xlog//0000000D.history: No such file or directory
scp: /archive/xlog//0000000C0000005A000000B6: No such file or directory
2017-07-27 11:56:04 AEST [46203]: [1-1] user=,db=,client= (0:00000)LOG: started streaming WAL from primary at 5A/B5000000 on timeline 12
2017-07-27 11:56:04 AEST [46203]: [2-1] user=,db=,client= (0:XX000)FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 0000000C0000005A000000B5 has already been removed
This will loop indefinitely. At this stage the master reports no connected standbys in pg_stat_replication, and the standby has no running WAL receiver process.
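For reference, both observations can be confirmed with something like the following (pg_stat_wal_receiver is the 9.6 view; checking for a walreceiver process with ps shows the same thing):

    -- On the master: returns no rows while the standby is stuck in this loop
    SELECT pid, state, sent_location, replay_location FROM pg_stat_replication;

    -- On the standby: returns no rows while no WAL receiver process is running
    SELECT pid, status, received_lsn FROM pg_stat_wal_receiver;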
This can be 'fixed' by running pg_switch_xlog() on the master, at which time a connection is seen from the standby and the logs show the following:
scp: /archive/xlog//0000000D.history: No such file or directory
2017-07-27 12:03:19 AEST [21432]: [1029-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000005A000000B5" from archive
scp: /archive/xlog//0000000C0000005A000000B6: No such file or directory
2017-07-27 12:03:19 AEST [63141]: [1-1] user=,db=,client= (0:00000)LOG: started streaming WAL from primary at 5A/B5000000 on timeline 12
2017-07-27 12:03:19 AEST [63141]: [2-1] user=,db=,client= (0:XX000)FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 0000000C0000005A000000B5 has already been removed
scp: /archive/xlog//0000000D.history: No such file or directory
2017-07-27 12:03:24 AEST [21432]: [1030-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000005A000000B5" from archive
2017-07-27 12:03:24 AEST [21432]: [1031-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000005A000000B6" from archive
scp: /archive/xlog//0000000C0000005A000000B7: No such file or directory
2017-07-27 12:03:25 AEST [21432]: [1032-1] user=,db=,client= (0:00000)LOG: unexpected pageaddr 56/E7000000 in log segment 0000000C0000005A000000B7, offset 0
2017-07-27 12:03:25 AEST [63196]: [1-1] user=,db=,client= (0:00000)LOG: started streaming WAL from primary
At this time replication is running as normal and all changes are streamed.
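To spell the workaround out, it is nothing more than a manual segment switch on the master followed by a check that the standby has reattached (a sketch, run as superuser):

    -- On the master: close out the current WAL segment so it gets archived
    SELECT pg_switch_xlog();

    -- The standby should now show up here again and start streaming
    SELECT application_name, state, sent_location, replay_location FROM pg_stat_replication;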
Am I missing something here? This seems very odd. One thing I have noticed is that it only seems to happen after a lot of WAL has been produced and the pg_xlog directory is sitting at max_wal_size.
James Sewell,
PostgreSQL Team Lead / Solutions Architect

(sorry for the top post, bitchy K9 Mail)
James,
are you sure you're scp'ing from the archive, not from pg_xlog?
Regards,
Gunnar "Nick" Bluth
--
This message was sent from my Android mobile phone with K-9 Mail.
are you sure you're scp'ing from the archive, not from pg_xlog?
Yes:
restore_command = 'scp -o StrictHostKeyChecking=no 10.154.19.30:/archive/xlog//%f %p'
Although you are right - that would almost make sense if I had done that!
Cheers,
James
Sounds a lot like a cleanup process on your archive directory or something getting in the way. Are the logs pg is asking for in that archive dir?
That's the strange thing: if you look at the log, not only are they there, the standby has already retrieved them.
It's then asking for the log again via the stream.
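If it helps, the stuck position is visible from SQL on the standby while it is looping; roughly:

    -- Replay sits at the end of the last segment restored from the archive;
    -- the receive location typically stays NULL here, since the walreceiver
    -- never actually receives any data before the FATAL
    SELECT pg_last_xlog_replay_location()  AS replay_loc,
           pg_last_xlog_receive_location() AS receive_loc;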
--
James Sewell,
PostgreSQL Team Lead / Solutions Architect

OK this is reproducible now.
- Stop a standby
- Write some data to the master (see the psql sketch after this list)
- Wait till the master has archived some WAL logs
- Wait till the archived logs have been removed from pg_xlog
- Start the standby.
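A rough psql sketch of the write/archive part of this, run on the master as superuser (the table name repro_wal and the row count are arbitrary; adjust until several segments have been archived and recycled out of pg_xlog):

    -- Generate enough WAL to archive and recycle several segments
    CREATE TABLE repro_wal AS
        SELECT g AS id, md5(g::text) AS payload
        FROM generate_series(1, 5000000) AS g;
    SELECT pg_switch_xlog();

    -- Watch archiving progress and what is still sitting in pg_xlog
    SELECT archived_count, last_archived_wal FROM pg_stat_archiver;
    SELECT min(f) AS oldest_segment_in_pg_xlog
    FROM pg_ls_dir('pg_xlog') AS f
    WHERE f ~ '^[0-9A-F]{24}$';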
The standby will recover all logs from the master's log archive up to log X; it will then try to get log X+1 and fail (it doesn't exist yet).
It will then try to start streaming log X (not X+1) from the master and fail (it has already been archived and removed from the master's pg_xlog). This will loop forever; an example is below.
scp: /archive/xlog//0000000D.history: No such file or directory
2017-08-03 10:26:41 AEST [578]: [1037-1] user=,db=,client= (0:00000)LOG: restored log file "0000000C0000006E000000AE" from archive
scp: /archive/xlog//0000000C0000006E000000AF: No such file or directory
2017-08-03 10:26:41 AEST [68161]: [1-1] user=,db=,client= (0:00000)LOG: started streaming WAL from primary at 6E/AE000000 on timeline 12
2017-08-03 10:26:41 AEST [68161]: [2-1] user=,db=,client= (0:XX000)FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 0000000C0000006E000000AE has already been removed
At this stage the standby has log X in pg_xlog, and this log has an identical MD5 checksum to the copy in the master's archive.
Performing a pg_switch_xlog() on the master pushes log X+1 to the archive, which is picked up by the standby, allowing streaming replication to start.
The only interesting thing I can see in log X is that it's 99% made up of FPI_FOR_HINT records.
Any ideas?
Cheers,
James
Hi,

On 2017-07-27 13:00:17 +1000, James Sewell wrote:
> 2017-07-27 11:56:03 AEST [46191]: [2-1] user=,db=,client= (0:XX000)FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 0000000C0000005A000000B5 has already been removed
>
> This will loop indefinitely. At this stage the master reports no connected standbys in pg_stat_replication, and the standby has no running WAL receiver process.

FWIW, I don't see a bug here. Archiving on its own doesn't guarantee that replication progresses in increments smaller than 16MB, unless you use archive_timeout (or, as you do, manually switch segments). Streaming replication doesn't guarantee that WAL is retained unless you use replication slots - which you don't appear to be. You can make SR retain more with approximate methods like wal_keep_segments too, but that's not a guarantee.

From what I can see you're just seeing the combination of these two limitations, because you don't use the methods to address them (archive_timeout, replication slots and/or wal_keep_segments).

Greetings,

Andres Freund
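For completeness, the knobs Andres mentions map to roughly the following on the master (a sketch; the slot name standby_b is made up, and using a slot also means setting primary_slot_name = 'standby_b' in the standby's recovery.conf):

    -- archive_timeout: bound how far the archive can lag by forcing regular segment switches
    ALTER SYSTEM SET archive_timeout = '60s';

    -- wal_keep_segments: keep extra segments in pg_xlog (approximate, not a guarantee)
    ALTER SYSTEM SET wal_keep_segments = 128;
    SELECT pg_reload_conf();

    -- Replication slot: guarantees WAL is retained until the standby has consumed it
    SELECT pg_create_physical_replication_slot('standby_b');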