RE: Using per-transaction memory contexts for storing decoded tuples - Mailing list pgsql-hackers

From: Hayato Kuroda (Fujitsu)
Subject: RE: Using per-transaction memory contexts for storing decoded tuples
Msg-id: TYAPR01MB5692177C9AA8A7433654009BF5712@TYAPR01MB5692.jpnprd01.prod.outlook.com
In response to: Re: Using per-transaction memory contexts for storing decoded tuples (Masahiko Sawada <sawada.mshk@gmail.com>)
Responses: Re: Using per-transaction memory contexts for storing decoded tuples
List: pgsql-hackers
Dear Sawada-san, Amit,

> > So, decoding a large transaction with many smaller allocations can
> > have ~2.2% overhead with a smaller block size (say 8Kb vs 8MB). In
> > real workloads, we will have fewer such large transactions or a mix of
> > small and large transactions. That will make the overhead much less
> > visible. Does this mean that we should invent some strategy to defrag
> > the memory at some point during decoding or use any other technique? I
> > don't find this overhead above the threshold to invent something
> > fancy. What do others think?
> 
> I agree that the overhead will be much less visible in real workloads.
> +1 to use a smaller block (i.e. 8kB). It's easy to backpatch to old
> branches (if we agree) and to revert the change in case something
> happens.
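
For reference, the two block sizes being compared above are the existing macros in src/include/utils/memutils.h (values quoted from memory, so please double-check against HEAD):

    #define SLAB_DEFAULT_BLOCK_SIZE    (8 * 1024)            /* 8 kB */
    #define SLAB_LARGE_BLOCK_SIZE      (8 * 1024 * 1024)     /* 8 MB */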

That is fine with me as well. Just to confirm: you will not push the rb_mem_block_size patch, and will instead simply replace SLAB_LARGE_BLOCK_SIZE with SLAB_DEFAULT_BLOCK_SIZE, right? It seems that only reorderbuffer.c uses the LARGE macro, so it could then be removed.
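
Something like the following untested sketch is what I have in mind (the GenerationContextCreate() call in ReorderBufferAllocate() is reproduced from memory, so the exact context lines may differ on HEAD):

--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ ReorderBufferAllocate(void)
+	/* Use 8kB blocks for decoded tuples instead of 8MB ones */
 	buffer->tup_context = GenerationContextCreate(new_ctx,
 												  "Tuples",
-												  SLAB_LARGE_BLOCK_SIZE,
-												  SLAB_LARGE_BLOCK_SIZE,
-												  SLAB_LARGE_BLOCK_SIZE);
+												  SLAB_DEFAULT_BLOCK_SIZE,
+												  SLAB_DEFAULT_BLOCK_SIZE,
+												  SLAB_DEFAULT_BLOCK_SIZE);

If that is indeed the only user, the SLAB_LARGE_BLOCK_SIZE definition in memutils.h could be removed in the same patch.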

Best regards,
Hayato Kuroda
FUJITSU LIMITED

