Re: Bug: Buffer cache is not scan resistant - Mailing list pgsql-hackers
From | Tom Lane |
---|---|
Subject | Re: Bug: Buffer cache is not scan resistant |
Msg-id | 20614.1173119085@sss.pgh.pa.us |
In response to | Re: Bug: Buffer cache is not scan resistant (Tom Lane <tgl@sss.pgh.pa.us>) |
List | pgsql-hackers |
I wrote:
> "Pavan Deolasee" <pavan@enterprisedb.com> writes:
>> Isn't the size of the shared buffer pool itself acting as a performance
>> penalty in this case ? May be StrategyGetBuffer() needs to make multiple
>> passes over the buffers before the usage_count of any buffer is reduced
>> to zero and the buffer is chosen as replacement victim.

> I read that and thought you were onto something, but it's not acting
> quite the way I expect.  I made a quick hack in StrategyGetBuffer() to
> count the number of buffers it looks at before finding a victim.
> ...
> Yes, autovacuum is off, and bgwriter shouldn't have anything useful to
> do either, so I'm a bit at a loss what's going on --- but in any case,
> it doesn't look like we are cycling through the entire buffer space
> for each fetch.

Nope, Pavan's nailed it: the problem is that after using a buffer, the
seqscan leaves it with usage_count = 1, which means it has to be passed
over once by the clock sweep before it can be re-used.  I was misled in
the 32-buffer case because catalog accesses during startup had left the
buffer state pretty confused, so that there was no long stretch before
hitting something available.

With a large number of buffers, the behavior is that the seqscan fills
all of shared memory with buffers having usage_count 1.  Once the clock
sweep returns to the first of these buffers, it will have to pass over
all of them, reducing all of their counts to 0, before it returns to
the first one and finds it now usable.  Subsequent tries find a buffer
immediately, of course, until we have again filled shared_buffers with
usage_count 1 everywhere.  So the problem is not so much the clock
sweep overhead as that it's paid in a very nonuniform fashion: with N
buffers you pay O(N) once every N reads and O(1) the rest of the time.
This is no doubt slowing things down enough to delay that one read,
instead of leaving it nicely I/O bound all the time.  Mark, can you
detect "hiccups" in the read rate using your setup?

I seem to recall that we've previously discussed the idea of letting
the clock sweep decrement the usage_count before testing for 0, so that
a buffer could be reused on the first sweep after it was initially
used, but that we rejected it as being a bad idea.  But at least with
large shared_buffers it doesn't sound like such a bad idea.

Another issue nearby to this is whether to avoid selecting buffers that
are dirty --- IIRC someone brought that up again recently.  Maybe
predecrement for clean buffers, postdecrement for dirty ones would be a
cute compromise.

			regards, tom lane
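To make the proposed compromise concrete, here is a minimal sketch
(assuming simplified stand-ins: Buffer, NBUFFERS, the dirty flag, and
GetVictimBuffer are illustrative, not the real freelist.c structures,
and pinning, locking, and the free list are ignored). Clean buffers are
decremented before the zero test; dirty ones are tested before being
decremented.

```c
/*
 * Sketch of "predecrement for clean buffers, postdecrement for dirty
 * ones" --- NOT the real StrategyGetBuffer(); all names here are
 * simplified stand-ins for illustration only.
 */
#include <stdbool.h>

#define NBUFFERS 1024

typedef struct
{
	int			usage_count;	/* bumped on access, decayed by the sweep */
	bool		dirty;			/* page must be written out before reuse */
} Buffer;

static Buffer buffers[NBUFFERS];
static int	nextVictim = 0;		/* the clock hand */

/*
 * Pick a replacement victim.  A clean buffer left at usage_count 1 by
 * a seqscan is decremented *before* the zero test, so it is reusable
 * on the first sweep after use.  A dirty buffer is tested *before*
 * being decremented, so it survives one extra sweep, giving the
 * bgwriter a chance to clean it first.
 */
int
GetVictimBuffer(void)
{
	for (;;)
	{
		int			idx = nextVictim;
		Buffer	   *buf = &buffers[idx];

		nextVictim = (nextVictim + 1) % NBUFFERS;

		if (!buf->dirty)
		{
			/* predecrement, then test */
			if (buf->usage_count > 0)
				buf->usage_count--;
			if (buf->usage_count == 0)
				return idx;
		}
		else
		{
			/* test, then postdecrement */
			if (buf->usage_count == 0)
				return idx;
			buf->usage_count--;
		}
	}
}
```

Under this policy the seqscan's usage_count-1 clean buffers are
reclaimed on the clock hand's first visit, so the sweep cost stays O(1)
per read instead of O(N) once every N reads, while dirty buffers still
get one sweep's grace before being chosen.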