Re: pg_dump out of shared memory - Mailing list pgsql-general

From: tfo@alumni.brown.edu (Thomas F. O'Connell)
Subject: Re: pg_dump out of shared memory
Date:
Msg-id: 80c38bb1.0406210707.50894a15@posting.google.com
In response to: pg_dump out of shared memory (tfo@alumni.brown.edu (Thomas F. O'Connell))
Responses: Re: pg_dump out of shared memory
List: pgsql-general
tfo@alumni.brown.edu (Thomas F. O'Connell) wrote in message news:
> postgresql.conf just has the default of 1000 shared_buffers. The
> database itself has thousands of tables, some of which have rows
> numbering in the millions. Am I correct in thinking that, despite the
> hint, it's more likely that I need to up the shared_buffers?

So the answer here, verified by Tom Lane and by my own remedy to the
problem, is "no". Now I'm curious: why does pg_dump require that
max_connections * max_locks_per_transaction be greater than the number
of objects in the database? Or, if that's not the right assumption
about how pg_dump works, how does pg_dump obtain its locks, and why is
the error it reports "out of shared memory"? Is there a portion of
shared memory set aside for locks? What is the shared lock table?
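
For anyone hitting the same error, here is a minimal sketch of the
sizing involved, under the assumption (consistent with my remedy) that
pg_dump takes an AccessShareLock on each table it dumps and that the
shared lock table has roughly max_connections *
max_locks_per_transaction slots. The values below are illustrative,
not prescriptive:

    -- Rough count of the plain tables pg_dump would need to lock.
    -- (Whether it also locks indexes, sequences, etc. is an
    -- assumption; plain tables are the certain case.)
    SELECT count(*) AS tables_to_lock
    FROM pg_class
    WHERE relkind = 'r';

    # postgresql.conf: make max_connections * max_locks_per_transaction
    # comfortably exceed the count above. max_locks_per_transaction
    # defaults to 64, and changing it requires a server restart.
    max_connections = 100
    max_locks_per_transaction = 256

With thousands of tables, 100 connections * 64 locks = 6400 shared
slots can plausibly run out, which would explain the error coming from
the lock table rather than from shared_buffers.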

-tfo
