Re: WAL segments removed from primary despite the fact that logical replication slot needs it. - Mailing list pgsql-bugs
From:           hubert depesz lubaczewski
Subject:        Re: WAL segments removed from primary despite the fact that logical replication slot needs it.
Msg-id:         Y3Climc7l24CD3vn@depesz.com
In response to: Re: WAL segments removed from primary despite the fact that logical replication slot needs it. (hubert depesz lubaczewski <depesz@depesz.com>)
Responses:      Re: WAL segments removed from primary despite the fact that logical replication slot needs it.
List:           pgsql-bugs
On Fri, Nov 11, 2022 at 03:50:40PM +0100, hubert depesz lubaczewski wrote:
> #v+
> 2022-11-11 12:45:26.432 UTC,,,994963,,636e43e6.f2e93,2,,2022-11-11 12:45:26 UTC,6/0,0,ERROR,08P01,"could not receive data from WAL stream: ERROR: requested WAL segment 000000010001039D00000083 has already been removed",,,,,,,,,"","logical replication worker",,0
> #v-

Sooo... plot thickens.

Without any changes, manual rebuild, or anything else, yesterday the problem seems to have solved itself?!

In logs on focal/pg14 I see:

#v+
2022-11-12 20:55:39.190 UTC,,,1897563,,6370084b.1cf45b,2,,2022-11-12 20:55:39 UTC,6/0,0,ERROR,08P01,"could not receive data from WAL stream: ERROR: requested WAL segment 000000010001039D00000083 has already been removed",,,,,,,,,"","logical replication worker",,0
#v-

And this is *the last* such message.

On bionic/pg12 we have in logs from pg_replication_slots:

#v+
timestamp                pg_current_wal_lsn  slot_name  plugin    slot_type  datoid  database  temporary  active  active_pid  xmin  catalog_xmin  restart_lsn     confirmed_flush_lsn
2022-11-12 20:51:00 UTC  1041E/D3A0E540      focal14    pgoutput  logical    16607   canvas    f          f       \N          \N    3241443528    1039D/83825958  1039D/96453F38
2022-11-12 20:51:59 UTC  1041E/D89B6000      focal14    pgoutput  logical    16607   canvas    f          f       \N          \N    3241443528    1039D/83825958  1039D/96453F38
2022-11-12 20:52:58 UTC  1041E/E0547450      focal14    pgoutput  logical    16607   canvas    f          f       \N          \N    3241443528    1039D/83825958  1039D/96453F38
2022-11-12 20:53:58 UTC  1041E/E59634F0      focal14    pgoutput  logical    16607   canvas    f          f       \N          \N    3241443528    1039D/83825958  1039D/96453F38
2022-11-12 20:54:57 UTC  1041E/EBB50DE8      focal14    pgoutput  logical    16607   canvas    f          f       \N          \N    3241443528    1039D/83825958  1039D/96453F38
2022-11-12 20:55:55 UTC  1041E/FBBC3160      focal14    pgoutput  logical    16607   canvas    f          t       18626       \N    3241450490    1039D/9170B010  1039D/9B86EAF0
2022-11-12 20:56:55 UTC  1041F/FF34000       focal14    pgoutput  logical    16607   canvas    f          t       18626       \N    3241469432    1039D/A21B4598  1039D/A928A6D0
2022-11-12 20:57:54 UTC  1041F/277FAE30      focal14    pgoutput  logical    16607   canvas    f          t       18626       \N    3241480448    1039D/AD4C7A00  1039D/BA3FC840
2022-11-12 20:58:53 UTC  1041F/319A0000      focal14    pgoutput  logical    16607   canvas    f          t       18626       \N    3241505462    1039D/C5C32398  1039D/DF899428
2022-11-12 20:59:52 UTC  1041F/3A399688      focal14    pgoutput  logical    16607   canvas    f          t       18626       \N    3241527824    1039D/F3393280  1039E/AD9A740
...
2022-11-13 08:00:44 UTC  1043F/E464B738      focal14    pgoutput  logical    16607   canvas    f          t       18626       \N    3292625412    1043F/E0F38E78  1043F/E4609628
#v-

I have no idea what has changed or why. If it helps I can provide logs, but would prefer to do it off list.

Also, while the problem with this cluster is "solved", I still have like 300 other clusters to upgrade, and at least 1 has hit the same problem today.

Best regards,

depesz
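[Editor's note: the snapshots in the table above look like periodic output of a query joining pg_current_wal_lsn() with pg_replication_slots. A minimal sketch of such a monitoring query follows; the exact query used by the poster is not shown in the thread, and the pg_wal_lsn_diff() lag column is an extra illustration, not part of the original output.]

#v+
-- Sketch: snapshot logical slot state together with the current WAL position.
-- All columns come from the standard pg_replication_slots view (PostgreSQL 10+).
SELECT now()                   AS "timestamp",
       pg_current_wal_lsn(),
       s.slot_name, s.plugin, s.slot_type, s.datoid, s.database,
       s.temporary, s.active, s.active_pid, s.xmin, s.catalog_xmin,
       s.restart_lsn, s.confirmed_flush_lsn,
       -- how much WAL the primary must keep for this slot, in bytes
       pg_wal_lsn_diff(pg_current_wal_lsn(), s.restart_lsn) AS restart_lag_bytes
FROM pg_replication_slots s
WHERE s.slot_type = 'logical';
#v-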