Re: Perf Benchmarking and regression. - Mailing list pgsql-hackers

From: Ashutosh Sharma
Subject: Re: Perf Benchmarking and regression.
Msg-id: CAE9k0PkFEhVq-Zg4MH0bZ-zt_oE5PAS6dAuxRCXwX9kEVWceag@mail.gmail.com
In response to: Re: Perf Benchmarking and regression. (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
Hi,
Please find the test results for the following set of combinations taken at 128 client counts:
1) Unpatched master, default *_flush_after : TPS = 10925.882396
2) Unpatched master, *_flush_after=0 : TPS = 18613.343529
3) That line removed with #if 0, default *_flush_after : TPS = 9856.809278
4) That line removed with #if 0, *_flush_after=0 : TPS = 18158.648023
Here, "that line" refers to AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL); in pq_init().
Please note that earlier I had taken the readings with the data directory and pg_xlog at the same location on an HDD. This time I moved pg_xlog to an SSD before taking the readings. With pg_xlog and the data directory at the same location on the HDD, performance was much lower; for example, for the "that line removed with #if 0, *_flush_after=0" case I was getting 7367.709378 tps.
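Reading the four numbers side by side, the default *_flush_after settings cost roughly 40-45% in TPS relative to *_flush_after=0 on this setup, with or without that line. A quick sketch of the arithmetic (Python; the TPS values are copied from above, the percentages are just derived from them):

# Back-of-the-envelope comparison of the four TPS figures reported above.
# The TPS values are copied from this mail; the percentages are derived.
results = {
    "unpatched, default *_flush_after":    10925.882396,
    "unpatched, *_flush_after=0":          18613.343529,
    "#if 0'd line, default *_flush_after":  9856.809278,
    "#if 0'd line, *_flush_after=0":       18158.648023,
}

baseline = results["unpatched, *_flush_after=0"]
for label, tps in results.items():
    delta = (tps - baseline) / baseline * 100  # relative to the fastest case
    print(f"{label:40s} {tps:12.2f} tps  ({delta:+.1f}%)")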
Also, the commit id on which I took the above readings, along with the pgbench commands used, is mentioned below:
commit 8a13d5e6d1bb9ff9460c72992657077e57e30c32
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Wed May 11 17:06:53 2016 -0400
Fix infer_arbiter_indexes() to not barf on system columns.
Non-default settings and test:
./postgres -c shared_buffers=8GB -N 200 -c min_wal_size=15GB -c max_wal_size=20GB -c checkpoint_timeout=900 -c maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 &
./pgbench -i -s 1000 postgres
./pgbench -c 128 -j 128 -T 1800 -M prepared postgres
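As a side note, all four combinations could be driven from one script so the readings stay directly comparable. Below is a rough sketch in Python; PGBIN, PGDATA and the use of pg_ctl are my assumptions, while the server and pgbench options are the ones listed above.

# Rough driver sketch: take all four readings with identical settings in one run.
# PGBIN, PGDATA and the use of pg_ctl are assumptions; the server and pgbench
# options below are the ones from this mail.
import subprocess

PGBIN = "/usr/local/pgsql/bin"   # assumed install location of the build under test
PGDATA = "/mnt/data/pgdata"      # assumed data directory
SERVER_OPTS = ("-c shared_buffers=8GB -N 200 -c min_wal_size=15GB "
               "-c max_wal_size=20GB -c checkpoint_timeout=900 "
               "-c maintenance_work_mem=1GB -c checkpoint_completion_target=0.9")
FLUSH_OFF = (" -c bgwriter_flush_after=0 -c checkpointer_flush_after=0"
             " -c backend_flush_after=0")

def run_case(label, extra_opts=""):
    # Start the server with the given options, run the pgbench test, stop it again.
    subprocess.run([f"{PGBIN}/pg_ctl", "-D", PGDATA, "-o", SERVER_OPTS + extra_opts,
                    "-w", "start"], check=True)
    try:
        subprocess.run([f"{PGBIN}/pgbench", "-i", "-s", "1000", "postgres"], check=True)
        out = subprocess.run([f"{PGBIN}/pgbench", "-c", "128", "-j", "128",
                              "-T", "1800", "-M", "prepared", "postgres"],
                             check=True, capture_output=True, text=True).stdout
        tps_lines = [l for l in out.splitlines() if l.startswith("tps")]
        print(label, tps_lines)
    finally:
        subprocess.run([f"{PGBIN}/pg_ctl", "-D", PGDATA, "-w", "stop"], check=True)

# Point PGBIN at the unpatched build or the "#if 0" build and run both cases for each.
run_case("default *_flush_after")
run_case("*_flush_after=0", FLUSH_OFF)

With something like this, the unpatched and patched binaries differ only by which build PGBIN points at, so all four numbers come from the same script on the same machine.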
On Thu, May 12, 2016 at 9:22 AM, Robert Haas <robertmhaas@gmail.com> wrote:
On Wed, May 11, 2016 at 12:51 AM, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:
> I am extremely sorry for the delayed response. As suggested by you, I have
> taken the performance readings at 128 client counts after making the
> following two changes:
>
> 1). Removed AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL,
> NULL); from pq_init(). Below is the git diff for the same.
>
> diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
> index 8d6eb0b..399d54b 100644
> --- a/src/backend/libpq/pqcomm.c
> +++ b/src/backend/libpq/pqcomm.c
> @@ -206,7 +206,9 @@ pq_init(void)
> AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE,
> MyProcPort->sock,
> NULL, NULL);
> AddWaitEventToSet(FeBeWaitSet, WL_LATCH_SET, -1, MyLatch, NULL);
> +#if 0
> AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
> +#endif
>
> 2). Disabled the guc vars "bgwriter_flush_after", "checkpointer_flush_after"
> and "backend_flush_after" by setting them to zero.
>
> After doing the above two changes, below are the readings I got for 128
> client counts:
>
> CASE : Read-Write Tests when data exceeds shared buffers.
>
> Non Default settings and test
> ./postgres -c shared_buffers=8GB -N 200 -c min_wal_size=15GB -c
> max_wal_size=20GB -c checkpoint_timeout=900 -c maintenance_work_mem=1GB -c
> checkpoint_completion_target=0.9 &
>
> ./pgbench -i -s 1000 postgres
>
> ./pgbench -c 128 -j 128 -T 1800 -M prepared postgres
>
> Run1 : tps = 9690.678225
> Run2 : tps = 9904.320645
> Run3 : tps = 9943.547176
>
> Please let me know if I need to take readings with other client counts as
> well.
Can you please take four new sets of readings, like this:
- Unpatched master, default *_flush_after
- Unpatched master, *_flush_after=0
- That line removed with #if 0, default *_flush_after
- That line removed with #if 0, *_flush_after=0
128 clients is fine. But I want to see four sets of numbers that were
all taken by the same person at the same time using the same script.
Thanks,
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company