Thread: Re: [HACKERS] [COMMITTERS] pgsql: Permit dump/reload of not-too-large >1GB tuples
From: Tom Lane
Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
> Permit dump/reload of not-too-large >1GB tuples

I noticed that this commit has created an overflow hazard on 64-bit
hardware: it's not difficult to construct a tuple exceeding 4GB; you
just need a few GB-sized input values. But that's uncool for a couple
of reasons:

* HeapTupleData.t_len can't represent it (it's only uint32).

* heap_form_tuple sets up the tuple to possibly become a composite
  datum, meaning it applies SET_VARSIZE. That's going to silently
  overflow for any size above 1GB.

In short, as this stands it's a security problem.

I'm not sure whether it's appropriate to try to change t_len to type
Size to address the first problem. We could try, but I don't know
what the fallout would be. It might be better to just add a check
disallowing tuple sizes >= 4GB.

As for the second problem, we clearly have no prospect of allowing
composite datums that exceed 1GB, since they have to have varlena
length words. We can allow plain HeapTuples to exceed that, but it'd
be necessary to distinguish whether the value being built is going to
become a composite datum or not. That requires an API break for
heap_form_tuple, or else a separate function to convert a HeapTuple
to a Datum (and apply a suitable length check there). Either way it's
a bit of a mess.

In any case, a great deal more work is needed before this can
possibly be safe. I recommend reverting it pending that.

			regards, tom lane
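
P.S. To make the second hazard concrete: the 4-byte varlena header
stores the length shifted left past two flag bits, so only 30 bits of
it survive. Here's a standalone mock-up (this is not PostgreSQL code;
the MOCK_* macros just imitate the little-endian SET_VARSIZE_4B /
VARSIZE_4B definitions in postgres.h):

    /*
     * Standalone illustration of why a 4-byte varlena header silently
     * loses any length of 1GB or more: the length is stored shifted
     * left by two flag bits, leaving only 30 usable bits.
     */
    #include <stdio.h>
    #include <stdint.h>

    #define MOCK_SET_VARSIZE(hdr, len)  ((hdr) = ((uint32_t) (len)) << 2)
    #define MOCK_VARSIZE(hdr)           ((hdr) >> 2)

    int
    main(void)
    {
        uint32_t    header;
        size_t      len = (size_t) 1 << 30;     /* exactly 1GB */

        MOCK_SET_VARSIZE(header, len);

        /* prints "stored 1073741824, read back 0": the size is gone */
        printf("stored %zu, read back %u\n",
               len, (unsigned) MOCK_VARSIZE(header));
        return 0;
    }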
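
As for the first problem, the check I have in mind is no more than an
untested sketch along these lines, at the point in heap_form_tuple
where len is still a Size and hasn't yet been stuffed into t_len:

    /*
     * Untested sketch: reject tuples whose length can't fit in the
     * uint32 t_len field, i.e. disallow sizes >= 4GB.
     */
    if (len > (Size) PG_UINT32_MAX)
        ereport(ERROR,
                (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                 errmsg("tuple size exceeds the 4GB row limit")));
    tuple->t_len = (uint32) len;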
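
And the separate conversion function might look roughly like this.
The name heap_tuple_to_composite_datum is made up, and setting the
composite type id/typmod is elided, so treat it as a sketch of the
idea rather than a patch:

    #include "postgres.h"
    #include "access/htup_details.h"
    #include "utils/memutils.h"

    /*
     * Sketch only: convert a HeapTuple to a composite Datum, applying
     * the length check that heap_form_tuple could no longer do once
     * plain tuples are allowed to exceed 1GB.
     */
    Datum
    heap_tuple_to_composite_datum(HeapTuple tuple)
    {
        /*
         * A composite datum carries a 4-byte varlena length word,
         * which can describe at most MaxAllocSize (1GB - 1) bytes, so
         * refuse anything larger rather than letting SET_VARSIZE wrap
         * silently.
         */
        if (tuple->t_len > MaxAllocSize)
            ereport(ERROR,
                    (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                     errmsg("row is too large to be converted to a composite datum")));

        /* HeapTupleHeaderSetDatumLength is SET_VARSIZE under the hood */
        HeapTupleHeaderSetDatumLength(tuple->t_data, tuple->t_len);
        return PointerGetDatum(tuple->t_data);
    }

Callers that just want a plain HeapTuple would never come through
here, so only genuinely composite values would pay the 1GB limit.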