Re: INSERT...ON DUPLICATE KEY LOCK FOR UPDATE - Mailing list pgsql-hackers
From: Robert Haas
Subject: Re: INSERT...ON DUPLICATE KEY LOCK FOR UPDATE
Msg-id: CA+TgmobHNkPRcAWuh2S7ftJE4DKzzG_yn+e4qu1kuAB04jieqQ@mail.gmail.com
In response to: Re: INSERT...ON DUPLICATE KEY LOCK FOR UPDATE (Peter Geoghegan <pg@heroku.com>)
List: pgsql-hackers
On Fri, Sep 20, 2013 at 8:48 PM, Peter Geoghegan <pg@heroku.com> wrote:
> On Tue, Sep 17, 2013 at 9:29 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Sat, Sep 14, 2013 at 6:27 PM, Peter Geoghegan <pg@heroku.com> wrote:
>>> Note that today there is no guarantee that the original waiter for a
>>> duplicate-inserting xact to complete will be the first one to get a
>>> second chance
>>
>> ProcLockWakeup() only wakes as many waiters from the head of the queue
>> as can all be granted the lock without any conflicts. So I don't
>> think there is a race condition in that path.
>
> Right, but what about XactLockTableWait() itself? It only acquires a
> ShareLock on the xid of the got-there-first inserter that potentially
> hasn't yet committed/aborted.

That's an interesting point. As you pointed out in later emails, that
case is handled for heap tuple locks, but btree uniqueness conflicts
are a different kettle of fish.

> Yeah, you're right. As I mentioned to Andres already, when row locking
> happens and there is this kind of conflict, my approach is to retry
> from scratch (go right back to before value lock acquisition) in the
> sort of scenario that generally necessitates EvalPlanQual() looping,
> or to throw a serialization failure where that's appropriate. After an
> unsuccessful attempt at row locking there could well be an interim
> wait for another xact to finish, before retrying (at read committed
> isolation level). This is why I think that value locking/retrying
> should be cheap, and should avoid bloat if at all possible.
>
> Forgive me if I'm making a leap here, but it seems like what you're
> saying is that the semantics of upsert that one might naturally expect
> are *arguably* fundamentally impossible, because they entail
> potentially locking a row that isn't current to your snapshot,

Precisely.

> and you cannot throw a serialization failure at read committed.

Not sure that's true, but at least it might not be the most desirable
behavior.
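[Editor's note: for readers following the retry-from-scratch discussion above, this is the kind of application-level retry loop that users write today to get upsert-like behavior at READ COMMITTED, and whose race windows the proposed value locking aims to close inside the server. The table `db` and its columns are hypothetical; this sketch is not from the thread itself.]

```sql
-- Hypothetical table for illustration: a is the unique key.
-- CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

CREATE OR REPLACE FUNCTION merge_db(k INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- First try to update an existing row with the key.
        UPDATE db SET b = data WHERE a = k;
        IF found THEN
            RETURN;
        END IF;
        -- No row to update: try to insert. A concurrent backend may
        -- insert the same key first; trap the unique_violation and
        -- loop back to retry the UPDATE.
        BEGIN
            INSERT INTO db (a, b) VALUES (k, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Do nothing; fall through and retry the UPDATE.
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Note that even this loop can spin when the conflicting inserter has not yet committed, which is exactly the waiting/retry behavior being debated above.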
> I respectfully suggest that that exact definition of upsert isn't a
> useful one, because other snapshot isolation/MVCC systems operating
> within the same constraints must have the same issues, and yet they
> manage to implement something that could be called upsert that people
> seem happy with.

Yeah. I wonder how they do that.

> I wouldn't go that far. The number of possible additional primitives
> that are useful isn't that high, unless we decide that LWLocks are
> going to be a fundamentally different thing, which I consider
> unlikely.

I'm not convinced, but we can save that argument for another day.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company