Re: Physical sites handling large data - Mailing list pgsql-general

From: Tom Lane
Subject: Re: Physical sites handling large data
Date:
Msg-id: 29036.1032104867@sss.pgh.pa.us
In response to: Re: Physical sites handling large data (Ericson Smith <eric@did-it.com>)
Responses: Re: Physical sites handling large data
List: pgsql-general

Ericson Smith <eric@did-it.com> writes:
> Using the bigmem kernel and RH7.3, we were able to set Postgresql shared
> memory to 3.2Gigs (out of 6GB Ram). Does this mean that Postgresql will
> only use the first 2Gigs?

I think you are skating on thin ice there --- there must have been some
integer overflows in the shmem size calculations.  It evidently worked
as an unsigned result, but...

IIRC we have an open bug report from someone who tried to set
shared_buffers so large that the shmem size would have been ~5GB;
the overflowed size request was ~1GB and then it promptly dumped
core from trying to access memory beyond that.  We need to put in
some code to detect overflows in those size calculations.

In any case, pushing PG's shared memory to 50% of physical RAM is
completely counterproductive.  See past discussions (mostly on
-hackers and -admin if memory serves) about appropriate sizing of
shared buffers.  There are different schools of thought about this,
but I think everyone agrees that a shared-buffer pool that's roughly
equal to the size of the kernel's disk buffer cache is a waste of
memory.  One should be much bigger than the other.  I personally think
it's appropriate to let the kernel cache do most of the work, and so
I favor a shared_buffers setting of just a few thousand.
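A sketch of what such a setting looks like in postgresql.conf (the specific value is an assumed example of "a few thousand" buffers, not a recommendation from this message):

```
# postgresql.conf -- shared_buffers counts 8kB pages in this era,
# so a few thousand buffers is only a few tens of megabytes:
shared_buffers = 4096        # 4096 * 8kB = 32MB; kernel cache does the rest
```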

            regards, tom lane
