Re: pg_restore : out of memory - Mailing list pgsql-performance

From sathiya psql
Subject Re: pg_restore : out of memory
Date
Msg-id f966c2ee0812190000y698ddb74i7d886b492a01ca7@mail.gmail.com
In response to pg_restore : out of memory  (Franck Routier <franck.routier@axege.com>)
List pgsql-performance


On Thu, Dec 4, 2008 at 7:38 PM, Franck Routier <franck.routier@axege.com> wrote:
Hi,

I am trying to restore a table out of a dump, and I get an 'out of
memory' error.

The table I want to restore is 5GB big.

Here is the exact message:

admaxg@goules:/home/backup-sas$ pg_restore -F c -a -d axabas -t cabmnt
axabas.dmp
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 5492; 0 43701 TABLE
DATA cabmnt axabas
pg_restore: [archiver (db)] COPY failed: ERROR:  out of memory
DETAIL:  Failed on request of size 40.
CONTEXT:  COPY cabmnt, line 9038995: "FHSJ    CPTGEN    RE
200806_004    6.842725E7    6.842725E7    \N    7321100    1101
\N
00016    \N    \N    \N    \N    \N    \N    -1278.620..."
WARNING: errors ignored on restore: 1

Looking at the OS level, the process is effectively eating all memory
(incl. swap), that is, around 24 GB...
How are you determining that it eats up all the memory?

Please post those outputs.
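(For reference, one way to capture those numbers is a small shell loop that samples the process's virtual and resident size with `ps`. The `memwatch` helper name and the 5-second interval below are illustrative, not something from this thread.)

```shell
# Sample a process's virtual (VSZ) and resident (RSS) size, in KB,
# every 5 seconds until the process exits.  Usage: memwatch <pid>
memwatch() {
    while kill -0 "$1" 2>/dev/null; do
        ps -o vsz=,rss= -p "$1"
        sleep 5
    done
}

# e.g. watch the running restore (newest pg_restore process):
#   memwatch "$(pgrep -n pg_restore)"
```

Logging VSZ alongside RSS distinguishes "the process has mapped a lot of address space" from "the process is actually holding that much in RAM plus swap".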

So, here is my question: is pg_restore supposed to eat all memory? And
is there something I can do to prevent that?
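[Editor's note: normally COPY streams rows and should not accumulate memory; with a data-only restore into a table that has foreign keys or AFTER triggers, the backend queues a per-row trigger event, which is a common way a 5 GB COPY can exhaust memory. One hedged sketch for investigating is to extract the table's data as a plain COPY script and load it through psql, so the file can be split into smaller batches to see where memory starts climbing. The file name below is illustrative, not from the post.]

```shell
# Illustrative sketch only: cabmnt.sql is an assumed output file name.
# 1. Extract just this table's data from the custom-format archive
#    as a plain SQL (COPY) script instead of restoring it directly.
pg_restore -F c -a -t cabmnt -f cabmnt.sql axabas.dmp

# 2. Load it through psql; if memory still climbs, the plain file can
#    be split into smaller COPY batches to localize the growth.
psql -d axabas -f cabmnt.sql
```

If deferred constraint checks turn out to be the cause, restoring in batches, or dropping and re-adding the foreign keys around the load, keeps the per-row event queue bounded.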

Thanks,

Franck



