Re: Performance Improvement by reducing WAL for Update Operation - Mailing list pgsql-hackers
From | Amit Kapila
---|---
Subject | Re: Performance Improvement by reducing WAL for Update Operation
Date |
Msg-id | 003801ce03b8$e824ba70$b86e2f50$@kapila@huawei.com
In response to | Re: Performance Improvement by reducing WAL for Update Operation (Amit Kapila <amit.kapila@huawei.com>)
Responses | Re: Performance Improvement by reducing WAL for Update Operation
List | pgsql-hackers
On Friday, February 01, 2013 6:37 PM Amit Kapila wrote:
> On Thursday, January 31, 2013 6:44 PM Amit Kapila wrote:
> > On Wednesday, January 30, 2013 8:32 PM Amit Kapila wrote:
> > > On Tuesday, January 29, 2013 7:42 PM Amit Kapila wrote:
> > > > On Tuesday, January 29, 2013 3:53 PM Heikki Linnakangas wrote:
> > > > > On 29.01.2013 11:58, Amit Kapila wrote:
> > > > > > Can there be another way in which the current patch code can be
> > > > > > made better, so that we don't need to change the encoding
> > > > > > approach? I have a feeling that this might not be equally good
> > > > > > performance-wise.
> > > > >
> > > > > The point is that I don't want heap_delta_encode() to know the
> > > > > internals of pglz compression. You could probably make my patch
> > > > > more like yours in behavior by also passing an array of offsets
> > > > > in the new tuple to check, and only checking for matches at
> > > > > those offsets.
> > > >
> > > > I think it makes sense, because if we have offsets of both the new
> > > > and old tuple, we can internally use memcmp to compare columns and
> > > > use the same algorithm for encoding.
> > > > I will change the patch according to this suggestion.
> > >
> > > I have modified the patch as per the above suggestion.
> > > Apart from passing the new and old tuple offsets, I have also passed
> > > the bitmap length, as we need to copy the bitmap of the new tuple
> > > as-is into the encoded WAL tuple.
> > >
> > > Please see if such an API design is okay.
> > >
> > > I shall update the README and send the performance/WAL reduction
> > > data for the modified patch tomorrow.
> >
> > The updated patch, including comments and README, is attached with this
> > mail. This patch contains exactly the same design and behavior as the
> > previous one. It takes care of Heikki's API design suggestion.
> >
> > The performance data is similar; as it is not complete, I shall send
> > it tomorrow.
>
> Performance data for the patch is attached with this mail.
> Conclusions from the readings (these are the same as for my previous
> patch):
>
> 1. With original pgbench there is a max 7% WAL reduction with not much
> performance difference.
> 2. With a 250 record size in pgbench there is a max WAL reduction of 35%
> with not much performance difference.
> 3. With a record size of 500 and above in pgbench there is an improvement
> in both performance and WAL reduction.
>
> As the record size increases there is a gain in performance and the WAL
> size is reduced as well.
>
> Performance data for synchronous_commit = on is in progress; I shall
> post it once it is done.
> I expect it to be the same as before.

Please find the performance readings for synchronous_commit = on.
Each run was taken for 20 min.

Conclusions from the readings with synchronous_commit = on:

1. With original pgbench there is a max 2% WAL reduction with not much
performance difference.
2. With a 500 record size in pgbench there is a max WAL reduction of 3%
with not much performance difference.
3. With a 1800 record size in pgbench there is both a performance
improvement (approx. 3%) and a WAL reduction (44%).

As the record size increases there is a very good reduction in WAL size.

Please provide your feedback.

With Regards,
Amit Kapila.
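[Editor's note: the following is a minimal, self-contained sketch of the offset-based
column comparison discussed above, not code from the patch itself. The names
(tuple_delta_encode, DeltaBuf, the 'L'/'M' tag format) and the fixed-width-column
layout are illustrative assumptions; the real patch emits a pglz-compatible stream
via heap_delta_encode() rather than this toy instruction format.]

#include <stdio.h>
#include <string.h>

#define DELTA_MAX 1024

typedef struct DeltaBuf
{
    unsigned char data[DELTA_MAX];
    int           len;
} DeltaBuf;

/* append a "literal" chunk taken verbatim from the new tuple */
static void
delta_append_literal(DeltaBuf *buf, const void *src, int len)
{
    buf->data[buf->len++] = 'L';                 /* tag: literal bytes follow */
    buf->data[buf->len++] = (unsigned char) len;
    memcpy(buf->data + buf->len, src, len);
    buf->len += len;
}

/* append a "copy len bytes from offset off of the old tuple" instruction */
static void
delta_append_match(DeltaBuf *buf, int off, int len)
{
    buf->data[buf->len++] = 'M';                 /* tag: match against old tuple */
    buf->data[buf->len++] = (unsigned char) off;
    buf->data[buf->len++] = (unsigned char) len;
}

/*
 * Compare the old and new tuple column by column using the caller-supplied
 * offset arrays (natts + 1 entries each, so offs[i+1] - offs[i] is the column
 * length) and emit either a match instruction or literal new data.  The new
 * tuple's bitmap (first bitmaplen bytes) is copied as-is, as described in the
 * mail above.
 */
static void
tuple_delta_encode(const char *oldtup, const int *old_offs,
                   const char *newtup, const int *new_offs,
                   int natts, int bitmaplen, DeltaBuf *out)
{
    int i;

    out->len = 0;
    delta_append_literal(out, newtup, bitmaplen);

    for (i = 0; i < natts; i++)
    {
        int old_len = old_offs[i + 1] - old_offs[i];
        int new_len = new_offs[i + 1] - new_offs[i];

        if (old_len == new_len &&
            memcmp(oldtup + old_offs[i], newtup + new_offs[i], new_len) == 0)
            delta_append_match(out, old_offs[i], old_len);      /* unchanged column */
        else
            delta_append_literal(out, newtup + new_offs[i], new_len); /* changed column */
    }
}

int
main(void)
{
    /* two toy "tuples": a 1-byte bitmap plus three 12-byte columns, only the
     * last column differs, so most of the row can reference the old tuple */
    const char old_tup[] = "\x01" "aaaaaaaaaaaa" "bbbbbbbbbbbb" "cccccccccccc";
    const char new_tup[] = "\x01" "aaaaaaaaaaaa" "bbbbbbbbbbbb" "dddddddddddd";
    int offs[] = {1, 13, 25, 37};   /* column boundaries after the 1-byte bitmap */
    DeltaBuf d;

    tuple_delta_encode(old_tup, offs, new_tup, offs, 3, 1, &d);
    printf("encoded delta: %d bytes (raw tuple: %d bytes)\n",
           d.len, (int) sizeof(new_tup) - 1);
    return 0;
}

The sketch only illustrates why passing both offset arrays helps: unchanged
columns are detected with a plain memcmp and replaced by back-references, and
only the modified columns (plus the bitmap) are stored literally, which is
where the WAL reduction for wide records comes from.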