Thread: Max Tuple Size
We want to insert text into a table -- large amounts of text. The limit on the version we have at the moment is around 9000 characters, and I'd estimate we'd need to be able to insert around 20,000 to 50,000 characters.

Does version 7.0.3 have a limit, or is 7.1 the only version without one? Are there configuration settings in Postgres to raise the limit, or do I just have to use blobs and lobs? :(
On Fri, 6 Apr 2001, Aarmel wrote:
> We want to insert text into a table -- large amounts of text.
>
> The limit on the version we have at the moment is around 9000 characters,
> and I'd estimate we'd need to be able to insert around 20,000 to 50,000
> characters.
>
> Does version 7.0.3 have a limit, or is 7.1 the only version without one?

7.0.3 has a limit of 8KB by default. Version 7.1 does not have any limit.

> Are there configuration settings in Postgres to raise the limit, or do I
> just have to use blobs and lobs? :(

I wouldn't recommend blobs/lobs. You can recompile 7.0.3 with a block size of 32KB instead of 8KB, or you can use version 7.1RC2, which you can find at ftp://ftp.postgresql.org/pub/dev. The 7.1 Release Candidate 2 is quite stable.

HTH,
Poul L. Christiansen
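As a rough illustration of the limits being discussed (the table and column names below are invented, and the exact per-row overhead varies), an ordinary TEXT column runs into the ~8KB block size on a stock 7.0.3 build, while the same statements work on 7.1:

    -- Hypothetical sketch: "articles" and "body" are made-up names.
    CREATE TABLE articles (id integer, body text);

    -- On a default 7.0.3 build this INSERT fails once the whole row
    -- exceeds roughly 8KB; on 7.1, or on 7.0.3 rebuilt with a 32KB
    -- block size, it goes through.
    INSERT INTO articles VALUES (1, '... roughly 20,000 characters of text ...');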
Aarmel wrote:
> We want to insert text into a table -- large amounts of text.
>
> The limit on the version we have at the moment is around 9000 characters,
> and I'd estimate we'd need to be able to insert around 20,000 to 50,000
> characters.
>
> Does version 7.0.3 have a limit, or is 7.1 the only version without one?
>
> Are there configuration settings in Postgres to raise the limit, or do I
> just have to use blobs and lobs? :(

In 7.0.3 you can use LZTEXT, which gives good compression for most strings; fitting 50k in with that should be no problem if it is English (or other) language text. I've successfully stuffed over 200k into an LZTEXT field when the text is especially compressible.

You can also increase the block size to 32k if you compile from source and re-initialise your database. Combining this with LZTEXT gives a corresponding increase.

In 7.1 the limit is increased through arcane magic to (I think) around 2GB, possibly more, if you can make assumptions like "it won't be indexed". Even in 7.0.3 you can only index fields up to blocksize/3.

Cheers,
					Andrew.
--
_____________________________________________________________________
Andrew McMillan, e-mail: Andrew@catalyst.net.nz
Catalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington
Me: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267
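A minimal sketch of the LZTEXT approach described above (table and column names are invented; LZTEXT exists only in the 7.0 series and was later removed, so this is 7.0.3-specific):

    -- Hypothetical sketch for 7.0.3: LZTEXT stores the value compressed,
    -- so highly compressible text can fit well beyond the nominal
    -- block-size limit.
    CREATE TABLE documents (id integer, body lztext);
    INSERT INTO documents VALUES (1, '... long, compressible English text ...');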
Andrew McMillan <andrew@catalyst.net.nz> writes:
> In 7.0.3 you can use LZTEXT, which gives good compression for most
> strings; fitting 50k in with that should be no problem if it is English
> (or other) language text. I've successfully stuffed over 200k into an
> LZTEXT field when the text is especially compressible.

That seems overly optimistic to me; I'd not expect LZTEXT to give more than about a factor of 2 compression on average.

> In 7.1 the limit is increased through arcane magic to (I think) around
> 2GB, possibly more, if you can make assumptions like "it won't be
> indexed". Even in 7.0.3 you can only index fields up to blocksize/3.

Just for the record, the hard upper limit on field size in 7.1 is 1GB (in practice you probably don't want to go past a few megabytes). As Andrew says, if the data is to be indexed then the limit is lower, since btree still has a one-third-page record-size limit. However, that limit applies after LZ compression, so in practice you could index fields with widths ranging up to perhaps 2/3 of the block size --- say 20K if you set BLCKSZ = 32K.

			regards, tom lane
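A short sketch of the indexing caveat, reusing the hypothetical "articles" table from the earlier example (whether a given value fits under the one-third-page entry limit depends on how well it compresses):

    -- Hypothetical sketch: even in 7.1, btree limits each index entry to
    -- about a third of a page (measured after compression), so indexing
    -- very wide, poorly compressible text values can still fail.
    CREATE INDEX articles_body_idx ON articles (body);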