Re: Big data INSERT optimization - ExclusiveLock on extension of the table - Mailing list pgsql-performance

From pinker
Subject Re: Big data INSERT optimization - ExclusiveLock on extension of the table
Date
Msg-id 1471559195448-5917136.post@n5.nabble.com
Whole thread Raw
In response to Re: Big data INSERT optimization - ExclusiveLock on extension of the table  (Jim Nasby <Jim.Nasby@BlueTreble.com>)
Responses Re: Re: Big data INSERT optimization - ExclusiveLock on extension of the table
List pgsql-performance

> 1. rename table t01 to t02
OK...
> 2. insert into t02 1M rows in chunks for about 100k
Why not just insert into t01??

Because of CPU utilization; the load speeds up when it is divided into chunks.
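A minimal sketch of the chunked load described above (table layout and `generate_series` data source are illustrative, not from the thread):

```sql
-- Illustrative sketch only: load ~1M rows into the staging table in
-- ~100k-row chunks, so the work can be spread across sessions.
-- generate_series() stands in for the real data source; the column
-- list is assumed.
INSERT INTO t02
SELECT g, md5(g::text)
FROM generate_series(1, 100000) AS g;        -- chunk 1: rows 1..100k

INSERT INTO t02
SELECT g, md5(g::text)
FROM generate_series(100001, 200000) AS g;   -- chunk 2: next 100k
-- ... repeated (possibly from parallel sessions) up to ~1M rows
```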

> 3. from t01 (previously loaded table) insert data through stored procedure
But you renamed t01 so it no longer exists???
> to b01 - this happens parallel in over a dozen sessions
b01?

That's another table - a permanent one.

> 4. truncate t01
Huh??

The data were inserted into permanent storage, so the temporary table can be
truncated and reused.

OK, maybe the process is not so important; let's say the table is loaded,
then the data are fetched and reloaded into another table through a stored
procedure (with its own logic), then the table is truncated and the process
starts again. The most important part is that the ExclusiveLocks are held for
~1-5 s.
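The relation-extension lock being discussed is visible directly in `pg_locks`, where it appears with `locktype = 'extend'`. A sketch of a monitoring query for watching how long those locks are held during the load (the join to `pg_stat_activity` is just to show the session's query runtime):

```sql
-- Watch sessions holding or waiting on the relation-extension lock.
-- locktype = 'extend' is the lock taken while new pages are added to
-- the table file; during a heavy parallel load it shows up as the
-- ExclusiveLock on "extension of the table".
SELECT l.pid,
       l.relation::regclass AS table_name,
       l.mode,
       l.granted,
       now() - a.query_start AS running_for
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.locktype = 'extend';
```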





