Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize - Mailing list pgsql-hackers

From Jeff Janes
Subject Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize
Date
Msg-id CAMkU=1y8ZBMMapk5i1BgsMHQZsaxDCO=UEKWnu6J=XEjQ-gpAw@mail.gmail.com
In response to Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize  (Stephen Frost <sfrost@snowman.net>)
Responses Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize
List pgsql-hackers
On Sat, Jun 22, 2013 at 12:46 AM, Stephen Frost <sfrost@snowman.net> wrote:
> Noah,
>
> * Noah Misch (noah@leadboat.com) wrote:
> > This patch introduces MemoryContextAllocHuge() and repalloc_huge() that check
> > a higher MaxAllocHugeSize limit of SIZE_MAX/2.
>
> Nice!  I've complained about this limit a few different times and just
> never got around to addressing it.
>
> > This was made easier by tuplesort growth algorithm improvements in commit
> > 8ae35e91807508872cabd3b0e8db35fc78e194ac.  The problem has come up before
> > (TODO item "Allow sorts to use more available memory"), and Tom floated the
> > idea[1] behind the approach I've used.  The next limit faced by sorts is
> > INT_MAX concurrent tuples in memory, which limits helpful work_mem to about
> > 150 GiB when sorting int4.
>
> That's frustratingly small. :(

I've added a ToDo item to remove that limit from sorts as well.

I was going to add another item to make nodeHash.c use the new huge allocator, but after looking at it just now it was not clear to me that it even has such a limitation.  nbatch is limited by MaxAllocSize, but nbuckets doesn't seem to be.

Cheers,

Jeff
