Thread: Understanding oid for a record
I ran into a situation with a query that needs to fetch the last
record from a table.
The table schema wasn't really designed to support this (no serial id,
or anything else I could use), so the SQL makes use of max(create_date),
which has a resolution of 1 second and thus allows for duplicate values.
I was thinking about using max(oid), but I don't know if there's any rotation or recycling possible there...
Can I assume that oid for any new record will be larger than oid on any other existing rows?
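In case it helps to see it concretely, the two approaches look roughly
like this ("mytable" is just a placeholder for the real table name):

    -- current approach: create_date only has 1-second resolution,
    -- so several rows can tie for "last"
    SELECT * FROM mytable
     WHERE create_date = (SELECT max(create_date) FROM mytable);

    -- what I was considering: assumes oid always grows
    SELECT * FROM mytable
     WHERE oid = (SELECT max(oid) FROM mytable);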
Thanks,
Michael.
Brusser, Michael wrote:
> I ran into a situation with a query that needs to fetch the last
> record from a table.
> The table schema wasn't really designed to support this (no serial id,
> or anything else I could use), so the SQL makes use of max(create_date),
> which has a resolution of 1 second and thus allows for duplicate values.
>
> I was thinking about using max(oid), but I don't know if there's any
> rotation or recycling possible there...
> Can I assume that oid for any new record will be larger than oid on any
> other existing rows?

You probably don't have an OID if you're using a recent version of PG
(unless you specifically asked for one). If you do, you're not guaranteed
ordering. Or uniqueness IIRC.

You can increase the accuracy of the timestamp - that might help, but you
probably need to think about what you mean by "last".

Oh, and this question isn't really for the hackers list. Perhaps the
general or sql lists instead?

--
Richard Huxton
Archonet Ltd
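To illustrate the higher-precision timestamp Richard suggests - a sketch
only, assuming the 1-second resolution comes from the column having been
declared timestamp(0) (the table definition here is made up):

    -- timestamp without a precision modifier keeps microseconds
    CREATE TABLE mytable (
        payload     text,
        create_date timestamp DEFAULT now()
    );

Note that now() is fixed for the duration of a transaction, so rows
inserted in the same transaction would still share a create_date;
"last" still needs defining in that case.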
"Brusser, Michael" <Michael.Brusser@matrixone.com> writes: > Can I assume that oid for any new record will be larger than oid on any > other existing rows? No; that will fail once the installation has consumed 4G OIDs. The counter just wraps around. However, if you pay attention to the OID value returned in the INSERT command result, you could fetch that and not have to assume anything about max(). You will want to create a unique index on OID if you are using it this way, first for efficiency of the fetch and second to prevent the creation of duplicate entries after wraparound. If you expect more than perhaps 100M rows in the table, you probably need to think of another way, because of the likelihood of OID collisions. regards, tom lane