Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers - Mailing list pgsql-hackers
From: Ashutosh Sharma
Subject: Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers
Msg-id: CAE9k0PkdmKwpdZG9FX_5pZafYCetS814a3WoXA2ng1hzjvWueg@mail.gmail.com
In response to: Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers (Amit Kapila <amit.kapila16@gmail.com>)
Responses: Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers
List: pgsql-hackers
Hi All,
I have tried to test 'group_update_clog_v11.1.patch', shared upthread by Amit, on a high-end machine, varying the number of savepoints in my test script. The machine details, test script, and test results are shown below.
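For context, the patch under test makes concurrently committing backends queue their CLOG status updates on a shared list, with one backend acting as group leader and applying the whole group's updates under a single CLogControlLock acquisition. The following is a self-contained toy sketch of that group-leader pattern in plain C11 atomics plus pthreads; it is an illustration of the idea, not the actual patch code, and all names here are made up.

/*
 * Toy model of "group update": threads that all need to flip a status
 * bit push themselves onto a lock-free pending list; whoever finds the
 * list empty becomes the leader, takes the lock once, and applies every
 * queued update.  Illustrative only, not the actual patch code.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define NTHREADS 8
#define NOBODY   (-1)

static int clog_status[NTHREADS];          /* stand-in for the CLOG */
static int wanted[NTHREADS];               /* status each thread wants to set */
static int next_member[NTHREADS];          /* pending-list links */
static atomic_bool applied[NTHREADS];      /* set once our update is applied */
static _Atomic int group_head = NOBODY;    /* head of the pending list */
static pthread_mutex_t clog_lock = PTHREAD_MUTEX_INITIALIZER;

static void
group_set_status(int me)
{
    int head = atomic_load(&group_head);

    /* Push ourselves onto the pending list with a CAS loop. */
    do {
        next_member[me] = head;
    } while (!atomic_compare_exchange_weak(&group_head, &head, me));

    if (head != NOBODY)
    {
        /*
         * Somebody else is (or will become) the leader; wait for it to
         * apply our update.  The real patch sleeps on a semaphore here
         * rather than spinning.
         */
        while (!atomic_load(&applied[me]))
            ;
        return;
    }

    /* We are the leader: one lock acquisition covers the whole group. */
    pthread_mutex_lock(&clog_lock);
    int member = atomic_exchange(&group_head, NOBODY);
    while (member != NOBODY)
    {
        int next = next_member[member];    /* read the link before waking */
        clog_status[member] = wanted[member];
        atomic_store(&applied[member], true);
        member = next;
    }
    pthread_mutex_unlock(&clog_lock);
}

static void *
worker(void *arg)
{
    int me = (int) (intptr_t) arg;

    wanted[me] = 1;                        /* pretend 1 means "committed" */
    group_set_status(me);
    return NULL;
}

int
main(void)
{
    pthread_t tid[NTHREADS];

    for (intptr_t i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *) i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    for (int i = 0; i < NTHREADS; i++)
        printf("xact %d -> status %d\n", i, clog_status[i]);
    return 0;
}

Compiled with cc -std=c11 -pthread, this prints one "committed" status per worker; the point is that the lock is taken roughly once per batch rather than once per thread, which is what relieves CLogControlLock contention at high client counts.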
Machine details:
============
24 sockets, 192 CPUs
RAM: 500 GB
Test script:
=========
(Each SAVEPOINT below opens a subtransaction, and the FOR UPDATE that follows assigns that subtransaction its own XID; the savepoint count therefore controls how many subtransaction statuses each commit must record in the CLOG, which is the path the patch changes.)
\set aid random(1, 30000000)
\set tid random(1, 3000)
BEGIN;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid FOR UPDATE;
SAVEPOINT s1;
SELECT tbalance FROM pgbench_tellers WHERE tid = :tid FOR UPDATE;
SAVEPOINT s2;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid FOR UPDATE;
SAVEPOINT s3;
SELECT tbalance FROM pgbench_tellers WHERE tid = :tid FOR UPDATE;
SAVEPOINT s4;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid FOR UPDATE;
SAVEPOINT s5;
SELECT tbalance FROM pgbench_tellers WHERE tid = :tid FOR UPDATE;
END;
Non-default parameters
==================
max_connections = 200
shared_buffers = 8GB
min_wal_size = 10GB
max_wal_size = 15GB
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
checkpoint_timeout = 900    # seconds, i.e. 15min
synchronous_commit = off
pgbench -M prepared -c $thread -j $thread -T $time_for_reading postgres -f ~/test_script.sql
where $time_for_reading is 10 minutes (600, since pgbench -T takes seconds) and $thread is the client count under test.
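Each configuration was presumably run once per client count; a tiny driver like the following (a hypothetical harness, not from the original post) captures that loop:

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    const int clients[] = {8, 64, 128};    /* client counts from the tables */
    const int duration = 600;              /* -T takes seconds: 10 minutes */
    char cmd[256];

    for (size_t i = 0; i < sizeof(clients) / sizeof(clients[0]); i++)
    {
        snprintf(cmd, sizeof(cmd),
                 "pgbench -M prepared -c %d -j %d -T %d postgres "
                 "-f ~/test_script.sql",
                 clients[i], clients[i], duration);
        if (system(cmd) != 0)              /* one 10-minute pgbench run */
            fprintf(stderr, "pgbench failed at %d clients\n", clients[i]);
    }
    return 0;
}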
Test Results:
=========
(% IMPROVEMENT = 100 * (TPS PATCH - TPS HEAD) / TPS HEAD, rounded to two decimals.)
With 3 savepoints
=============
CLIENT COUNT | TPS (HEAD) | TPS (PATCH) | % IMPROVEMENT
-------------+------------+-------------+--------------
         128 |      50275 |       53704 |         +6.82
          64 |      62860 |       66561 |         +5.89
           8 |      18464 |       18752 |         +1.56
With 5 savepoints
=============
CLIENT COUNT | TPS (HEAD) | TPS (PATCH) | % IMPROVEMENT
-------------+------------+-------------+--------------
         128 |      46559 |       47715 |         +2.48
          64 |      52306 |       52082 |         -0.43
           8 |      12289 |       12852 |         +4.58
With 7 savepoints
=============
CLIENT COUNT | TPS (HEAD) | TPS (PATCH) | % IMPROVEMENT
-------------+------------+-------------+--------------
         128 |      41367 |       41500 |         +0.32
          64 |      42996 |       41473 |         -3.54
           8 |       9665 |        9657 |         -0.08
With 10 savepoints
==============
CLIENT COUNT | TPS (HEAD) | TPS (PATCH) | % IMPROVEMENT
-------------+------------+-------------+--------------
         128 |      34513 |       34597 |         +0.24
          64 |      32581 |       32035 |         -1.68
           8 |       7293 |        7622 |         +4.51
On Tue, Mar 21, 2017 at 6:19 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
On Mon, Mar 20, 2017 at 8:27 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, Mar 17, 2017 at 2:30 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> I was wondering about doing an explicit test: if the XID being
>>> committed matches the one in the PGPROC, and nsubxids matches, and the
>>> actual list of XIDs matches, then apply the optimization. That could
>>> replace the logic that you've proposed to exclude non-commit cases,
>>> gxact cases, etc. and it seems fundamentally safer. But it might be a
>>> more expensive test, too, so I'm not sure.
>>
>> I think if the number of subxids is very small let us say under 5 or
>> so, then such a check might not matter, but otherwise it could be
>> expensive.
>
> We could find out by testing it. We could also restrict the
> optimization to cases with just a few subxids, because if you've got a
> large number of subxids this optimization probably isn't buying much
> anyway.
>
Yes, and I have modified the patch to compare xids and subxids for the
group update. In initial short tests (with a few client counts), it
looks like we win up to 3 savepoints, while from 10 savepoints onwards
there is some regression, or at the very least no apparent benefit. We
need more tests to identify the safe number, but I thought it better to
share the patch now and see whether we agree on the changes, because if
not, the whole round of testing would need to be repeated. Let me know
what you think of the attached.
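To make the proposal above concrete, here is a minimal sketch of the match test being discussed, assuming a PGPROC-like structure that advertises the top-level XID and the cached subxid list; the type and field names below are hypothetical stand-ins, not the actual patch code.

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Hypothetical stand-in for the relevant pieces of a backend's PGPROC. */
typedef struct ProcLike
{
    TransactionId  xid;        /* advertised top-level XID */
    int            nsubxids;   /* number of cached subxids */
    TransactionId *subxids;    /* cached subxid list, in order */
} ProcLike;

/*
 * Use the group-update fast path only when the XID set being committed
 * exactly matches what the backend advertises: same top-level XID, same
 * subxid count, same subxid list.  Everything else (non-commit cases,
 * prepared transactions, overflowed subxid caches, ...) falls back to
 * the regular CLOG update path.
 */
static bool
group_update_is_safe(const ProcLike *proc, TransactionId xid,
                     int nsubxids, const TransactionId *subxids)
{
    if (proc->xid != xid)
        return false;
    if (proc->nsubxids != nsubxids)
        return false;
    for (int i = 0; i < nsubxids; i++)
    {
        if (proc->subxids[i] != subxids[i])
            return false;
    }
    return true;
}

The list comparison is O(nsubxids), which is why capping the optimization at a small number of subxids, as discussed above, keeps the check cheap; the benchmark results above likewise suggest the benefit tails off somewhere around 3 to 5 savepoints.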