Thread: O(n^2) system calls in RemoveOldXlogFiles()
Hi,

I noticed that RemoveXlogFile() has this code:

        /*
         * Before deleting the file, see if it can be recycled as a future log
         * segment. Only recycle normal files, pg_standby for example can create
         * symbolic links pointing to a separate archive directory.
         */
        if (wal_recycle &&
            endlogSegNo <= recycleSegNo &&
            lstat(path, &statbuf) == 0 && S_ISREG(statbuf.st_mode) &&
            InstallXLogFileSegment(&endlogSegNo, path,
                                   true,
                                   recycleSegNo, true))
        {
            ereport(DEBUG2,
                    (errmsg("recycled write-ahead log file \"%s\"",
                            segname)));
            CheckpointStats.ckpt_segs_recycled++;
            /* Needn't recheck that slot on future iterations */
            endlogSegNo++;
        }

I didn't check the migration history of this code, but it seems that
endlogSegNo doesn't currently have the right scoping to achieve the
goal of that last comment, so checkpoints finish up repeatedly
searching for the next free slot, starting at the low end each time,
like so:

stat("pg_wal/00000001000000000000004F", {st_mode=S_IFREG|0600, st_size=16777216, ...}) = 0
...
stat("pg_wal/000000010000000000000073", 0x7fff98b9e060) = -1 ENOENT (No such file or directory)

stat("pg_wal/00000001000000000000004F", {st_mode=S_IFREG|0600, st_size=16777216, ...}) = 0
...
stat("pg_wal/000000010000000000000074", 0x7fff98b9e060) = -1 ENOENT (No such file or directory)

... and so on until we've recycled all our recyclable segments. Ouch.
Hi,

On 2021-01-11 16:35:56 +1300, Thomas Munro wrote:
> I noticed that RemoveXlogFile() has this code:
>
>         /*
>          * Before deleting the file, see if it can be recycled as a future log
>          * segment. Only recycle normal files, pg_standby for example can create
>          * symbolic links pointing to a separate archive directory.
>          */
>         if (wal_recycle &&
>             endlogSegNo <= recycleSegNo &&
>             lstat(path, &statbuf) == 0 && S_ISREG(statbuf.st_mode) &&
>             InstallXLogFileSegment(&endlogSegNo, path,
>                                    true,
>                                    recycleSegNo, true))
>         {
>             ereport(DEBUG2,
>                     (errmsg("recycled write-ahead log file \"%s\"",
>                             segname)));
>             CheckpointStats.ckpt_segs_recycled++;
>             /* Needn't recheck that slot on future iterations */
>             endlogSegNo++;
>         }
>
> I didn't check the migration history of this code, but it seems that
> endlogSegNo doesn't currently have the right scoping to achieve the
> goal of that last comment, so checkpoints finish up repeatedly
> searching for the next free slot, starting at the low end each time,
> like so:
>
> stat("pg_wal/00000001000000000000004F", {st_mode=S_IFREG|0600, st_size=16777216, ...}) = 0
> ...
> stat("pg_wal/000000010000000000000073", 0x7fff98b9e060) = -1 ENOENT (No such file or directory)
>
> stat("pg_wal/00000001000000000000004F", {st_mode=S_IFREG|0600, st_size=16777216, ...}) = 0
> ...
> stat("pg_wal/000000010000000000000074", 0x7fff98b9e060) = -1 ENOENT (No such file or directory)
>
> ... and so on until we've recycled all our recyclable segments. Ouch.

I found this before as well:
https://postgr.es/m/CAB7nPqTB3VcKSSrW2Qj59tYYR2H4+n=5pZbdWou+X9iqVNMCag@mail.gmail.com

I did put a hastily rebased version of that commit in my aio branch
during development:
https://github.com/anarazel/postgres/commit/b3cc8adacf7860add8cc62ec373ac955d9d12992

Greetings,

Andres Freund
On Tue, Jan 12, 2021 at 6:55 PM Andres Freund <andres@anarazel.de> wrote:
> I found this before as well:
> https://postgr.es/m/CAB7nPqTB3VcKSSrW2Qj59tYYR2H4+n=5pZbdWou+X9iqVNMCag@mail.gmail.com

Hah, I even knew that, apparently, but forgot.  Adding Michael, who
wrote a patch.  It'd be nice to fix this, at least in 14.
On Tue, Jan 12, 2021 at 07:15:21PM +1300, Thomas Munro wrote:
> Hah, I even knew that, apparently, but forgot.  Adding Michael, who
> wrote a patch.  It'd be nice to fix this, at least in 14.

Yeah, this rings a bell.  I never went back to it, even if the thing
looks rather clean at a quick glance (not tested), but I may be able
to spend some cycles on that.  I don't think that's critical enough
for a backpatch, so doing something only on HEAD is fine by me.
What's your take?
--
Michael
On Tue, Jan 12, 2021 at 8:27 PM Michael Paquier <michael@paquier.xyz> wrote:
> Yeah, this rings a bell.  I never went back to it, even if the thing
> looks rather clean at a quick glance (not tested), but I may be able
> to spend some cycles on that.  I don't think that's critical enough
> for a backpatch, so doing something only on HEAD is fine by me.
> What's your take?

I haven't heard any user complaints, and I'd personally be happy with
a fix on master only.
On 2021-Jan-11, Thomas Munro wrote:
> I didn't check the migration history of this code, but it seems that
> endlogSegNo doesn't currently have the right scoping to achieve the
> goal of that last comment, so checkpoints finish up repeatedly
> searching for the next free slot, starting at the low end each time,

Apparently b2a5545bd63f changed this; before that commit, that code
(including the quoted comment) was all in RemoveOldXlogFiles, and
endlogSegNo was calculated only once.  But ISTM that even with that
formulation it had the problem you point out.  The real problem is the
loop hidden inside InstallXLogFileSegment().

--
Álvaro Herrera
On Tue, Jan 12, 2021 at 11:30:13PM +1300, Thomas Munro wrote:
> I haven't heard any user complaints, and I'd personally be happy with
> a fix on master only.

I have been looking again at that, and the rebased version that Andres
has provided would take care of that.  Any thoughts?
--
Michael
On Tue, Jan 12, 2021 at 12:10:24PM -0300, Alvaro Herrera wrote:
> Apparently b2a5545bd63f changed this; before that commit, that code
> (including the quoted comment) was all in RemoveOldXlogFiles, and
> endlogSegNo was calculated only once.  But ISTM that even with that
> formulation it had the problem you point out.  The real problem is
> the loop hidden inside InstallXLogFileSegment().

I am not sure I get your point here.  The purposes of the two
increments in InstallXLogFileSegment() and RemoveXlogFile() are
different, but the former increment makes the recycling done by the
latter faster.
--
Michael
On Wed, Jan 13, 2021 at 04:27:25PM +0900, Michael Paquier wrote:
> I have been looking again at that, and the rebased version that Andres
> has provided would take care of that.  Any thoughts?

Hearing nothing, I have applied the thing on HEAD after more tests and
more reads of the code.
--
Michael
On Fri, Jan 15, 2021 at 3:07 PM Michael Paquier <michael@paquier.xyz> wrote:
> On Wed, Jan 13, 2021 at 04:27:25PM +0900, Michael Paquier wrote:
> > I have been looking again at that, and the rebased version that Andres
> > has provided would take care of that.  Any thoughts?
>
> Hearing nothing, I have applied the thing on HEAD after more tests and
> more reads of the code.

Thanks Michael!  Another notch for the unnecessary system call
hitlist: https://wiki.postgresql.org/wiki/Syscall_Reduction
On Fri, Jan 15, 2021 at 03:25:24PM +1300, Thomas Munro wrote:
> Thanks Michael!  Another notch for the unnecessary system call
> hitlist: https://wiki.postgresql.org/wiki/Syscall_Reduction

A quick question.  How much does this matter in terms of
micro-performance for this code path, depending on max/min_wal_size?
Andres has mentioned his aio work, without giving any numbers.
--
Michael
Hi,

Thanks for pushing!  Sorry for not getting around to a review before...

On 2021-01-15 11:42:50 +0900, Michael Paquier wrote:
> On Fri, Jan 15, 2021 at 03:25:24PM +1300, Thomas Munro wrote:
> > Thanks Michael!  Another notch for the unnecessary system call
> > hitlist: https://wiki.postgresql.org/wiki/Syscall_Reduction
>
> A quick question.  How much does this matter in terms of
> micro-performance for this code path, depending on max/min_wal_size?

I can't see it being a problem here.  Division by a non-constant is
expensive, but syscalls are more expensive.  And journalled filesystem
operations like renames are *much* more expensive.

> Andres has mentioned his aio work, without giving any numbers.

I only found it while working on the AIO stuff; it's not really
dependent on it.  I saw significant performance drops in the last part
of a checkpoint in both aio / master, but additional debugging output
I had in the aio branch made it obvious that it's not actually the
buffer sync where the time is spent.

> "No backpatch is done per the lack of field complaints."

Because there's no real way to attribute the slowdown to WAL file
recycling in production workloads on master, I don't think we could
really expect field complaints.  Everyone will just attribute the
slowdown to BufferSync() or file sync.

I think the way we currently emit WAL timings is quite unhelpful.  The
fact that we attribute CheckpointWriteDelay() to the write time makes
it nearly useless until you're at the point the checkpoint can't be
completed in time.  Similarly, the "sync time" covers many things that
aren't syncing...

Greetings,

Andres Freund