Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem - Mailing list pgsql-hackers
| From | Claudio Freire |
|---|---|
| Subject | Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem |
| Date | |
| Msg-id | CAGTBQpZT=BaMZiob1K8yhh+qtej5RaS7vXnJDS1zGPVCxnoEWA@mail.gmail.com |
| In response to | Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem (Masahiko Sawada <sawada.mshk@gmail.com>) |
| Responses | Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem; Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem |
| List | pgsql-hackers |
On Mon, Jan 30, 2017 at 5:51 AM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> ----
> * We are willing to use at most maintenance_work_mem (or perhaps
> * autovacuum_work_mem) memory space to keep track of dead tuples. We
> * initially allocate an array of TIDs of that size, with an upper limit that
> * depends on table size (this limit ensures we don't allocate a huge area
> * uselessly for vacuuming small tables). If the array threatens to overflow,
>
> I think we need to update the above comment at the top of the
> vacuumlazy.c file.
Indeed, I missed that one. Fixing.
>
> ----
> +    numtuples = Max(numtuples, MaxHeapTuplesPerPage);
> +    numtuples = Min(numtuples, INT_MAX / 2);
> +    numtuples = Min(numtuples, 2 * pseg->max_dead_tuples);
> +    numtuples = Min(numtuples, MaxAllocSize / sizeof(ItemPointerData));
> +    seg->dt_tids = (ItemPointer) palloc(sizeof(ItemPointerData) * numtuples);
>
> Why is numtuples limited to "INT_MAX / 2" rather than INT_MAX?
I forgot to mention this one in the OP.
Googling around, I found that some implementations of bsearch break with
array sizes beyond INT_MAX/2 [1] (they'd overflow when computing the
midpoint).
Before this patch, this bsearch call had no way of reaching that size.
An initial version of the patch (the one that allocated a single big
array with a huge allocation) could reach that point, though, so I
reduced the limit to play it safe. This latest version is back to the
starting point, since it cannot allocate segments bigger than 1GB, but I
opted to keep playing it safe and leave the reduced limit in place just
in case.
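
To make the failure mode concrete, here's a minimal standalone sketch
(plain C, not PostgreSQL code; the index values are made up) of the
midpoint overflow those broken bsearch implementations hit:

```c
#include <limits.h>
#include <stdio.h>

/*
 * With plain int indexes, computing the midpoint as (lo + hi) / 2
 * overflows once lo + hi exceeds INT_MAX, which becomes possible as
 * soon as the array holds more than INT_MAX / 2 elements.
 */
int
main(void)
{
	int		lo = INT_MAX / 2;		/* plausible once nelems > INT_MAX / 2 */
	int		hi = INT_MAX / 2 + 10;

	/*
	 * Buggy form: (lo + hi) / 2 -- lo + hi exceeds INT_MAX here, so this
	 * is signed overflow (undefined behavior), typically yielding a
	 * negative "midpoint". Left commented out for that reason.
	 */

	/* Safe form used by fixed implementations: */
	int		mid = lo + (hi - lo) / 2;

	printf("safe midpoint: %d\n", mid);
	return 0;
}
```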
> ----
> @@ -1376,35 +1411,43 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
>      pg_rusage_init(&ru0);
>      npages = 0;
>
> -    tupindex = 0;
> -    while (tupindex < vacrelstats->num_dead_tuples)
> +    segindex = 0;
> +    tottuples = 0;
> +    for (segindex = tupindex = 0; segindex <= vacrelstats->dead_tuples.last_seg; tupindex = 0, segindex++)
>      {
> -        BlockNumber tblk;
> -        Buffer buf;
> -        Page page;
> -        Size freespace;
> This is a minor thing, but tupindex can be defined inside the for loop.
Right, changing.
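
For illustration, a hedged sketch of the suggested scoping; segindex,
tupindex and last_seg are borrowed from the quoted diff, everything
else is a simplified stand-in:

```c
#include <stdio.h>

/*
 * tupindex declared inside the per-segment loop body rather than at
 * function scope, so it is reset for every segment by construction.
 */
int
main(void)
{
	int		last_seg = 3;		/* stand-in for vacrelstats->dead_tuples.last_seg */
	int		segindex;

	for (segindex = 0; segindex <= last_seg; segindex++)
	{
		int		tupindex = 0;	/* scoped to, and reset for, each segment */

		printf("segment %d starts at tuple %d\n", segindex, tupindex);
	}
	return 0;
}
```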
>
> ----
> @@ -1129,10 +1159,13 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
>       * instead of doing a second scan.
>       */
>      if (nindexes == 0 &&
> -        vacrelstats->num_dead_tuples > 0)
> +        vacrelstats->dead_tuples.num_entries > 0)
>      {
>          /* Remove tuples from heap */
> -        lazy_vacuum_page(onerel, blkno, buf, 0, vacrelstats, &vmbuffer);
> +        Assert(vacrelstats->dead_tuples.last_seg == 0); /* Should not need more
> +                                                         * than one segment per
> +                                                         * page */
>
> I'm not sure we need to add an Assert() here, but it seems to me that
> the comment and the code don't properly correspond, and the comment
> for the Assert() should be written above the Assert() line.
Well, that assert is the one that found the second bug in
lazy_clear_dead_tuples, so it's clearly not without merit.
I'll rearrange the comment as you suggest, though.
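
That is, the quoted hunk with the comment moved above the Assert(),
roughly:

```c
		/* Should not need more than one segment per page */
		Assert(vacrelstats->dead_tuples.last_seg == 0);
```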
Updated and rebased v7 attached.
[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=776671
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers