Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL usage calculation patch) - Mailing list pgsql-hackers
From: Dilip Kumar
Subject: Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL usage calculation patch)
Msg-id: CAFiTN-tBcyzpCPs4hMeF_MnNeMszboPeooD0Z9-Txtvzjh=jLA@mail.gmail.com
In response to: Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL usage calculation patch) (Masahiko Sawada <masahiko.sawada@2ndquadrant.com>)
Responses:
  Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL usage calculation patch)
  Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL usage calculation patch)
List: pgsql-hackers
On Wed, Apr 1, 2020 at 8:26 AM Masahiko Sawada
<masahiko.sawada@2ndquadrant.com> wrote:
>
> On Wed, 1 Apr 2020 at 11:46, Amit Kapila <amit.kapila16@gmail.com> wrote:
> >
> > On Tue, Mar 31, 2020 at 7:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> > >
> > > While testing I have found one issue.  Basically, during a parallel
> > > vacuum, it was showing more number of
> > > shared_blk_hits+shared_blks_read.  After, some investigation, I found
> > > that during the cleanup phase nworkers are -1, and because of this we
> > > didn't try to launch worker but "lps->pcxt->nworkers_launched" had the
> > > old launched worker count and shared memory also had old buffer read
> > > data which was never updated as we did not try to launch the worker.
> > >
> > > diff --git a/src/backend/access/heap/vacuumlazy.c
> > > b/src/backend/access/heap/vacuumlazy.c
> > > index b97b678..5dfaf4d 100644
> > > --- a/src/backend/access/heap/vacuumlazy.c
> > > +++ b/src/backend/access/heap/vacuumlazy.c
> > > @@ -2150,7 +2150,8 @@ lazy_parallel_vacuum_indexes(Relation *Irel,
> > > IndexBulkDeleteResult **stats,
> > >      * Next, accumulate buffer usage.  (This must wait for the workers to
> > >      * finish, or we might get incomplete data.)
> > >      */
> > > -   for (i = 0; i < lps->pcxt->nworkers_launched; i++)
> > > +   nworkers = Min(nworkers, lps->pcxt->nworkers_launched);
> > > +   for (i = 0; i < nworkers; i++)
> > >         InstrAccumParallelQuery(&lps->buffer_usage[i]);
> > >
> > > It worked after the above fix.
> > >
> >
> > Good catch.  I think we should not even call
> > WaitForParallelWorkersToFinish for such a case.  So, I guess the fix
> > could be,
> >
> > if (workers > 0)
> > {
> >     WaitForParallelWorkersToFinish();
> >     for (i = 0; i < lps->pcxt->nworkers_launched; i++)
> >         InstrAccumParallelQuery(&lps->buffer_usage[i]);
> > }
> >
>
> Agreed. I've attached the updated patch.
>
> Thank you for testing, Dilip!

Thanks!  One hunk is failing on the latest head.
And, I have rebased the patch for my testing so posting the same.  I
have done some more testing to test multi-pass vacuum.

postgres[114321]=# show maintenance_work_mem ;
 maintenance_work_mem
----------------------
 1MB
(1 row)

--Test case
select pg_stat_statements_reset();
drop table test;
CREATE TABLE test (a int, b int);
CREATE INDEX idx1 on test(a);
CREATE INDEX idx2 on test(b);
INSERT INTO test SELECT i, i FROM GENERATE_SERIES(1,2000000) as i;
DELETE FROM test where a%2=0;
VACUUM (PARALLEL n) test;
select query, total_time, shared_blks_hit, shared_blks_read,
shared_blks_hit + shared_blks_read as total_read_blks,
shared_blks_dirtied, shared_blks_written from pg_stat_statements
where query like 'VACUUM%';

          query           | total_time  | shared_blks_hit | shared_blks_read | total_read_blks | shared_blks_dirtied | shared_blks_written
--------------------------+-------------+-----------------+------------------+-----------------+---------------------+---------------------
 VACUUM (PARALLEL 0) test | 5964.282408 |           92447 |                6 |           92453 |               19789 |                   0
(1 row)

          query           |     total_time     | shared_blks_hit | shared_blks_read | total_read_blks | shared_blks_dirtied | shared_blks_written
--------------------------+--------------------+-----------------+------------------+-----------------+---------------------+---------------------
 VACUUM (PARALLEL 1) test | 3957.7658810000003 |           92447 |                6 |           92453 |               19789 |                   0
(1 row)

So I am getting correct results with the multi-pass vacuum.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com