Thread: Need to calculate total size of a table along with BLOB data associated with it.
From
girish R G peetle
Date:
Hi,
The PostgreSQL system function pg_total_relation_size() doesn't include the size of BLOB data associated with the table.
Restoring a TAR dump fails if the BLOB data associated with a table is more than 8GB (which is a limitation of the TAR format).
I know we can switch to the compressed dump format, but I wanted to know if there is a way to find the true size of a table.
Thanks
Girish
Re: Need to calculate total size of a table along with BLOB data associated with it.
From
Albe Laurenz
Date:
girish R G peetle wrote:
> PostgreSQL system function, pg_total_relation_size() doesn't include BLOB data size that is associated
> with the table.
>
> TAR dump restore fails if the BLOB data size associated with a table is more than 8GB. (Which is
> limitation of TAR format).
>
> I know we can switch to compressed dump format. But wanted to know if there a way to know true size of
> a table.

Not easily, because technically, large objects don't belong to a table.
You'd have to write your own query that scans the table and adds the size of
all referenced large objects.
It can probably be done in one SELECT using lo_lseek.

Yours,
Laurenz Albe
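The suggestion above could be sketched roughly as follows. This is a minimal sketch, not a tested recipe: it assumes a hypothetical table mytable with an oid column blob_oid pointing at the large objects, and it must run inside a transaction, since lo_open() descriptors only live until commit.

```sql
-- Hypothetical schema: mytable(blob_oid oid). 262144 is INV_READ,
-- and whence = 2 means SEEK_END, so lo_lseek64 returns each object's size.
BEGIN;
SELECT pg_total_relation_size('mytable')                     -- heap + indexes + TOAST
     + coalesce(sum(lo_lseek64(lo_open(blob_oid, 262144),
                               0, 2)), 0) AS total_bytes
FROM mytable;
COMMIT;
```

Note that lo_lseek64 requires PostgreSQL 9.3 or later; on older versions lo_lseek is limited to 2GB results.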
Re: Need to calculate total size of a table along with BLOB data associated with it.
From
girish R G peetle
Date:
Got it. Thanks Laurenz.
One thing is a little confusing: if large objects don't belong to a table, then how does the 8GB table-size restriction for the TAR format apply when BLOB data is involved?
Is it (regular table data size) + (BLOB data held by the OIDs stored in the table)?
If just the OID of a large object is copied to a different table, say 'Table2', then for 'Table2' as well should I calculate the total size as (regular table data size) + (BLOB data held by the OIDs stored in the table)?
Thanks
Girish
On Fri, Sep 18, 2015 at 5:23 PM, Albe Laurenz <laurenz.albe@wien.gv.at> wrote:
girish R G peetle wrote:
> PostgreSQL system function, pg_total_relation_size() doesn't include BLOB data size that is associated
> with the table.
>
> TAR dump restore fails if the BLOB data size associated with a table is more than 8GB. (Which is
> limitation of TAR format).
>
> I know we can switch to compressed dump format. But wanted to know if there a way to know true size of
> a table.
Not easily, because technically, large objects don't belong to a table.
You'd have to write your own query that scans the table and adds the size of
all referenced large objects.
It can probably be done in one SELECT using lo_lseek.
Yours,
Laurenz Albe
Re: Need to calculate total size of a table along with BLOB data associated with it.
From
Guillaume Lelarge
Date:
2015-09-18 14:43 GMT+02:00 girish R G peetle <giri.anamika0@gmail.com>:
Got it. Thanks Laurenz.

One thing is little confusing, if large objects don't belong to a table, then how does restriction of 8GB table size for TAR format is applicable if BLOB data is involved.

Is it (Regular Table Data size) + (BLOB data held by OID stored in the table) ?

If just OID of a large object is copied to a different table say 'Table2'. Then for 'Table2' as well should I calculate the total size as (Regular Table Data size) + (BLOB data held by OID stored in the table) ?
There's a good chance your issue is with the pg_largeobject table rather than your user table.
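One way to check this (a sketch, assuming sufficient privileges) is to measure the shared large-object store directly, since all large objects in the database live in that one system table:

```sql
-- This is the size that runs up against the tar format's
-- per-member-file limit when pg_dump writes the blob data.
SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));
```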
Thanks
Girish

On Fri, Sep 18, 2015 at 5:23 PM, Albe Laurenz <laurenz.albe@wien.gv.at> wrote:
girish R G peetle wrote:
> PostgreSQL system function, pg_total_relation_size() doesn't include BLOB data size that is associated
> with the table.
>
> TAR dump restore fails if the BLOB data size associated with a table is more than 8GB. (Which is
> limitation of TAR format).
>
> I know we can switch to compressed dump format. But wanted to know if there a way to know true size of
> a table.
Not easily, because technically, large objects don't belong to a table.
You'd have to write your own query that scans the table and adds the size of
all referenced large objects.
It can probably be done in one SELECT using lo_lseek.
Yours,
Laurenz Albe
--
Re: Need to calculate total size of a table along with BLOB data associated with it.
From
Albe Laurenz
Date:
girish R G peetle wrote:
> Got it. Thanks Laurenz.
> One thing is little confusing, if large objects don't belong to a table, then how does restriction of
> 8GB table size for TAR format is applicable if BLOB data is involved.
> Is it (Regular Table Data size) + (BLOB data held by OID stored in the table) ?

What is the command you use to dump table + large objects?
I guess that the large objects make up more than 8 GB and are dumped as a
single file. 8 GB is the size limit of a single file in a TAR archive.

> If just OID of a large object is copied to a different table say 'Table2'. Then for 'Table2' as well
> should I calculate the total size as (Regular Table Data size) + (BLOB data held by OID stored in the
> table) ?

That's exactly the problem: large objects don't technically belong to the table
which references them. If you reference a large object from more than one
table, there's no good way of defining to which it belongs.
But that's irrelevant to the problem of files in a dump exceeding the limit of 8 GB,
isn't it? If you dump with the --blobs option, all large objects in the whole database
will be dumped, no matter whether they are referenced from a table or not.

Yours,
Laurenz Albe
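For reference, a sketch of the two dump invocations under discussion; mydb is a placeholder database name, not one taken from the thread:

```shell
# Tar format: each member file in the archive (including the combined
# large-object data) is capped at 8 GB by the tar format itself.
pg_dump -Ft --blobs -f mydb.tar mydb

# Custom format: no per-member size limit, and compressed by default,
# so it avoids the failure described above.
pg_dump -Fc --blobs -f mydb.dump mydb
```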
Re: Need to calculate total size of a table along with BLOB data associated with it.
From
girish R G peetle
Date:
Thanks Laurenz and Guillaume, I got clarity on how it works.
I ran a few tests and you are right: in the case of the TAR format, the size restriction applies to pg_catalog.pg_largeobject (as it is the common place where all BLOB data is stored).
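To see which objects contribute most to that size, one possible sketch (reading pg_largeobject requires sufficient privileges) is to add up the page rows per object:

```sql
-- pg_largeobject stores each large object as a series of pages in the
-- "data" column, keyed by loid; summing octet_length gives bytes per object.
SELECT loid, sum(octet_length(data)) AS bytes
FROM pg_largeobject
GROUP BY loid
ORDER BY bytes DESC
LIMIT 10;
```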
On Mon, Sep 21, 2015 at 2:47 PM, Albe Laurenz <laurenz.albe@wien.gv.at> wrote:
girish R G peetle wrote:
> Got it. Thanks Laurenz.
> One thing is little confusing, if large objects don't belong to a table, then how does restriction of
> 8GB table size for TAR format is applicable if BLOB data is involved.
> Is it (Regular Table Data size) + (BLOB data held by OID stored in the table) ?
What is the command you use to dump table + large objects?
I guess that the large objects make up more than 8 GB and are dumped as a
single file. 8 GB is the size limit of a single file in a TAR archive.
> If just OID of a large object is copied to a different table say 'Table2'. Then for 'Table2' as well
> should I calculate the total size as (Regular Table Data size) + (BLOB data held by OID stored in the
> table) ?
That's exactly the problem: large objects don't technically belong to the table
which references them. If you reference a large object from more than one
table, there's no good way of defining to which it belongs.
But that's irrelevant to the problem of files in a dump exceeding the limit of 8 GB,
isn't it? If you dump with the --blobs option, all large objects in the whole database
will be dumped, no matter whether they are referenced from a table or not.
Yours,
Laurenz Albe