Re: more than 2GB data string save - Mailing list pgsql-general

From Pavel Stehule
Subject Re: more than 2GB data string save
Date
Msg-id 162867791002092251x6d5b405ctd5145f08d1d5b377@mail.gmail.com
In response to Re: more than 2GB data string save  (Scott Marlowe <scott.marlowe@gmail.com>)
Responses Re: more than 2GB data string save
List pgsql-general
2010/2/10 Scott Marlowe <scott.marlowe@gmail.com>:
> On Tue, Feb 9, 2010 at 11:26 PM, Steve Atkins <steve@blighty.com> wrote:
>>
>> On Feb 9, 2010, at 9:52 PM, Scott Marlowe wrote:
>>
>>> On Tue, Feb 9, 2010 at 9:38 PM, AI Rumman <rummandba@gmail.com> wrote:
>>>> How to save 2 GB or more text string in Postgresql?
>>>> Which data type should I use?
>>>
>>> If you have to you can use either the lo interface, or you can use
>>> bytea.  Large Object (i.e. lo) allows for access much like fopen /
>>> fseek  etc in C, but the actual data are not stored in a row with
>>> other data, but alone in the lo space.  Bytea is a legit type that you
>>> can have as one of many in a row, but you retrieve the whole thing at
>>> once when you get the row.
>>
>> Bytea definitely won't handle more than 1 GB. I don't think the lo interface
>> will handle more than 2GB.
>
> That really depends on how compressible it is, doesn't it?
>

No. 1 GB is the maximum length of a varlena value, and TOAST (compression and out-of-line storage) is only applied after the value already exists as a varlena, so compressibility doesn't raise the limit.

Regards
Pavel Stehule

p.s.

Processing very large SQL values - bytea or text values longer than tens of
megabytes - is very expensive in terms of memory. When you process a 100 MB
bytea, you need about 300 MB of RAM. Using a bytea over 100 MB is not a
good idea; the LO interface is better and much faster.
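
As a rough sketch (not from the original message; the connection string,
file name, and chunk size below are just placeholders), streaming a file
into a large object with libpq looks roughly like this, so the client never
holds the whole value in memory:

/* minimal sketch: stream a file into a PostgreSQL large object in chunks */
#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>   /* INV_READ / INV_WRITE */

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* placeholder DSN */
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* large object calls must run inside a transaction */
    PQclear(PQexec(conn, "BEGIN"));

    Oid loid = lo_creat(conn, INV_READ | INV_WRITE);
    int fd   = lo_open(conn, loid, INV_WRITE);

    FILE *in = fopen("big_input.dat", "rb");     /* placeholder file */
    if (in == NULL)
        return 1;

    char buf[64 * 1024];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
        lo_write(conn, fd, buf, n);              /* write chunk by chunk */

    fclose(in);
    lo_close(conn, fd);
    PQclear(PQexec(conn, "COMMIT"));

    printf("stored as large object OID %u\n", loid);
    PQfinish(conn);
    return 0;
}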



