Re: Move PinBuffer and UnpinBuffer to atomics - Mailing list pgsql-hackers
From | Alexander Korotkov |
---|---|
Subject | Re: Move PinBuffer and UnpinBuffer to atomics |
Date | |
Msg-id | CAPpHfdvvoiSbPUprF+XdR8E5mrz+gep4ugPg+X75giGiZgp6QQ@mail.gmail.com |
In response to | Re: Move PinBuffer and UnpinBuffer to atomics (Amit Kapila <amit.kapila16@gmail.com>) |
Responses | Re: Move PinBuffer and UnpinBuffer to atomics |
List | pgsql-hackers |
On Sun, Apr 10, 2016 at 2:24 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Sun, Apr 10, 2016 at 11:33 AM, Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:
>> On Sun, Apr 10, 2016 at 8:36 AM, Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:
>>> On Sat, Apr 9, 2016 at 10:49 PM, Andres Freund <andres@anarazel.de> wrote:
>>>> On April 9, 2016 12:43:03 PM PDT, Andres Freund <andres@anarazel.de> wrote:
>>>>> On 2016-04-09 22:38:31 +0300, Alexander Korotkov wrote:
>>>>>> There are results with 5364b357 reverted.
>>>>>
>>>>> Crazy that this has such a negative impact. Amit, can you reproduce
>>>>> that? Alexander, I guess for r/w workload 5364b357 is a benefit on that
>>>>> machine as well?
>>>>
>>>> How sure are you about these measurements?
>>>
>>> I'm pretty sure. I've retried it multiple times by hand before re-running the script.
>>>
>>>> Because there really shouldn't be clog lookups once a steady state is reached...
>>>
>>> Hm... I'm also surprised. There shouldn't be clog lookups once hint bits are set.
>>
>> I also tried to run perf top during pgbench and got some interesting results.
>>
>> Without 5364b357:
>>    5,69%  postgres       [.] GetSnapshotData
>>    4,47%  postgres       [.] LWLockAttemptLock
>>    3,81%  postgres       [.] _bt_compare
>>    3,42%  postgres       [.] hash_search_with_hash_value
>>    3,08%  postgres       [.] LWLockRelease
>>    2,49%  postgres       [.] PinBuffer.isra.3
>>    1,58%  postgres       [.] AllocSetAlloc
>>    1,17%  [kernel]       [k] __schedule
>>    1,15%  postgres       [.] PostgresMain
>>    1,13%  libc-2.17.so   [.] vfprintf
>>    1,01%  libc-2.17.so   [.] __memcpy_ssse3_back
>>
>> With 5364b357:
>>   18,54%  postgres       [.] GetSnapshotData
>>    3,45%  postgres       [.] LWLockRelease
>>    3,27%  postgres       [.] LWLockAttemptLock
>>    3,21%  postgres       [.] _bt_compare
>>    2,93%  postgres       [.] hash_search_with_hash_value
>>    2,00%  postgres       [.] PinBuffer.isra.3
>>    1,32%  postgres       [.] AllocSetAlloc
>>    1,10%  libc-2.17.so   [.] vfprintf
>>
>> Very surprising. It appears that after 5364b357, GetSnapshotData consumes more time. But I can't see anything depending on clog buffers in the GetSnapshotData code...
>
> There is a related fact presented by Mithun C Y as well [1] which suggests that Andres's idea of reducing the cost of snapshots shows a noticeable gain after increasing the clog buffers. If you read that thread, you will notice that initially we didn't see much gain from that idea, but with increased clog buffers it started showing a noticeable gain. If by any chance you can apply that patch and see the results (the latest patch is at [2]).
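To make the clog remark in the quote above concrete: the reason a steady-state read-only run should not touch the clog is that the first reader of a tuple caches the commit status in a hint bit, and later readers short-circuit on it. Below is a minimal, self-contained sketch of that pattern, with illustrative names only; it is not the actual PostgreSQL visibility code (the real logic lives in the heapam visibility checks and transam.c, with flags such as HEAP_XMIN_COMMITTED) and it deliberately ignores locking, multixacts, and WAL.

```c
/*
 * Illustrative sketch of the hint-bit shortcut -- not PostgreSQL source.
 * Once the "inserter committed" hint is cached on the tuple, readers never
 * reach the clog lookup again, which is why clog buffer sizing is not
 * expected to matter for a warmed-up read-only workload.
 */
#include <stdbool.h>
#include <stdint.h>

#define HINT_XMIN_COMMITTED 0x01        /* illustrative flag (cf. HEAP_XMIN_COMMITTED) */

typedef struct TupleHeader
{
    uint32_t xmin;                      /* inserting transaction id */
    uint32_t hint_bits;                 /* cached visibility knowledge */
} TupleHeader;

/* Stand-in for the real clog (SLRU) lookup: shared buffers, locks, maybe I/O. */
static bool
clog_transaction_committed(uint32_t xid)
{
    (void) xid;
    return true;                        /* dummy answer for the sketch */
}

static bool
tuple_inserter_committed(TupleHeader *tup)
{
    /* Fast path: hint already set, no clog access at all. */
    if (tup->hint_bits & HINT_XMIN_COMMITTED)
        return true;

    /* Slow path: consult the clog once ... */
    if (clog_transaction_committed(tup->xmin))
    {
        /* ... and cache the answer so later readers skip the lookup. */
        tup->hint_bits |= HINT_XMIN_COMMITTED;
        return true;
    }
    return false;
}
```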
I took a look at this thread, but I still didn't get why the number of clog buffers affects a read-only benchmark.
Could you please explain it to me in more detail?
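For readers skimming the profiles above: GetSnapshotData tops both of them, so a deliberately simplified model of what it does may help frame the comparison. This is an illustrative sketch only, not the procarray.c source: each snapshot walks every backend's slot while a shared lock is held, so its cost grows with connection count and with traffic on that shared array; note that nothing in this path reads the clog, which is consistent with the question above.

```c
/*
 * Simplified model of snapshot acquisition -- illustrative only, not the
 * real GetSnapshotData().  Every call scans all backend slots under a
 * shared lock, so the per-query cost grows with the number of connections
 * and with contention on the shared array.
 */
#include <stdint.h>

#define MAX_BACKENDS 128                /* illustrative limit */
#define INVALID_XID  0

typedef struct ProcSlot
{
    volatile uint32_t xid;              /* running transaction, or INVALID_XID */
} ProcSlot;

typedef struct Snapshot
{
    uint32_t xip[MAX_BACKENDS];         /* in-progress xids seen at snapshot time */
    int      xcnt;
} Snapshot;

static ProcSlot proc_array[MAX_BACKENDS];

/* Stand-ins for LWLockAcquire(ProcArrayLock, LW_SHARED) / LWLockRelease(). */
static void shared_lock_acquire(void) { }
static void shared_lock_release(void) { }

static void
get_snapshot(Snapshot *snap)
{
    snap->xcnt = 0;

    shared_lock_acquire();
    /* O(number of backends) scan on every snapshot, i.e. on every query. */
    for (int i = 0; i < MAX_BACKENDS; i++)
    {
        uint32_t xid = proc_array[i].xid;

        if (xid != INVALID_XID)
            snap->xip[snap->xcnt++] = xid;
    }
    shared_lock_release();
}
```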
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company