Thread: Copy Command out of memory

Copy Command out of memory

From: Kevin Keith <kkeith@borderware.com>
Date: Wed, 14 Dec 2005 18:18:07 -0500
I was trying to run a bulk data load using the COPY command on PGSQL 8.1.0.

After loading about 3,500,000 records it ran out of memory - I am
assuming because it ran out of space to store such a large transaction.
Does the COPY command offer a feature similar to Oracle's SQL*Loader,
where you can specify the number of records to load between commits, or
will I have to break the file I am loading into smaller files? (A
sketch of the file-splitting approach follows this message.)

Or can a transaction be bypassed altogether with the COPY command, since
any failure (the load is going to an empty table) could easily be solved
with a reload of the data anyway?

Thanks,

Kevin
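
A sketch of the file-splitting approach, assuming a tab-separated input
file and psql's \copy; the file, database, and table names (data.tsv,
mydb, bulk_target) are made up for illustration. Each psql -c invocation
is a separate session, so every chunk commits on its own, which roughly
approximates SQL*Loader's rows-per-commit option:

    #!/bin/sh
    # Split the input into 1,000,000-line chunks, then COPY each
    # chunk separately.  Each psql -c runs in its own session and
    # transaction, so each chunk commits (or fails) independently.
    split -l 1000000 data.tsv chunk_
    for f in chunk_*; do
        psql -d mydb -c "\copy bulk_target from '$f'" || exit 1
    done

If a chunk fails, everything before it is already committed; since the
table started out empty, recovery is a truncate and a rerun rather than
one giant rollback.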

Re: Copy Command out of memory

From: Tom Lane
Kevin Keith <kkeith@borderware.com> writes:
> I was trying to run a bulk data load using the COPY command on PGSQL 8.1.0.
> After loading about 3,500,000 records it ran out of memory - I am
> assuming because it ran out of space to store such a large transaction.

COPY per se shouldn't eat memory.

Perhaps you have some AFTER INSERT triggers on this table (note that
foreign key constraints would create such)?  The pending-trigger queue
could have overflowed memory; a quick way to check is sketched below.
Offhand I can't think of another reason.

            regards, tom lane
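
To check for this: \d lists a table's triggers and foreign-key
constraints, and if a foreign key turns out to be the culprit, dropping
it for the duration of the load and re-adding it afterwards avoids
queuing one AFTER INSERT trigger event per row. A sketch with made-up
names (bulk_target, parent, fk_bulk_parent):

    # \d shows the table's triggers and FK constraints.
    psql -d mydb -c '\d bulk_target'

    # Drop the FK, load, then re-add it.  ADD CONSTRAINT validates
    # all rows in one pass instead of queuing a per-row trigger event.
    psql -d mydb -c 'ALTER TABLE bulk_target DROP CONSTRAINT fk_bulk_parent'
    psql -d mydb -c "\copy bulk_target from 'data.tsv'"
    psql -d mydb -c 'ALTER TABLE bulk_target ADD CONSTRAINT fk_bulk_parent FOREIGN KEY (parent_id) REFERENCES parent (id)'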

Re: Copy Command out of memory

From: "Jim C. Nasby" <jnasby@pervasive.com>
On Wed, Dec 14, 2005 at 06:18:07PM -0500, Kevin Keith wrote:
> Or can a transaction be bypassed altogether with the COPY command, since
> any failure (the load is going to an empty table) could easily be solved
> with a reload of the data anyway?

Work along those lines is currently in progress. Search the -hackers
archives if you're curious.
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
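
In the meantime, the observation in the quoted text cuts the other way
too: because the table starts out empty, a load that dies partway (for
example, between chunks of the split-file approach earlier in the
thread) is cheap to recover from. A sketch, reusing the same made-up
names:

    # Wipe whatever was committed before the failure and rerun the
    # whole load; nothing bypasses the transaction machinery here,
    # it just makes redoing the load cheap.
    psql -d mydb -c 'TRUNCATE bulk_target'
    for f in chunk_*; do
        psql -d mydb -c "\copy bulk_target from '$f'" || exit 1
    done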