Re: User-facing aspects of serializable transactions - Mailing list pgsql-hackers
From: Kevin Grittner
Subject: Re: User-facing aspects of serializable transactions
Date:
Msg-id: 4A1E5BF0.EE98.0025.1@wicourts.gov
In response to: Re: User-facing aspects of serializable transactions (Heikki Linnakangas <heikki.linnakangas@enterprisedb.com>)
Responses: Re: User-facing aspects of serializable transactions
List: pgsql-hackers
Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> wrote:

> 1. Needs to be fully spec-compliant serializable behavior. No
> anomalities.

That is what the paper describes, and where I want to end up.

> 2. No locking that's not absolutely necessary, regardless of the
> WHERE-clause used. No table locks, no page locks. Block only on
> queries/updates that would truly conflict with concurrent updates

If you do a table scan, how do you not use a table lock?

Also, the proposal is to *not* block in *any* cases beyond where snapshot isolation currently blocks. None. Period. This is the big difference from the traditional techniques used to achieve serializable transactions.

> 3. No "serialization errors" that are not strictly necessary.

That would require either the blocking approach which has traditionally been used, or rigorous graphing of all read-write dependencies (or anti-dependencies, depending on whose terminology you prefer). I expect either approach would perform much worse than the techniques in the paper. Published benchmarks, some confirmed by an ACM Repeatability Committee, have so far validated that intuition.

> 4. Reasonable performance. Performance in single-backend case should
> be indistinguishable from what we have now and what we have with the
> more lenient isolation levels.

This should have no impact on performance for those not choosing serializable transactions. Benchmarks of the proposed technique have so far shown performance ranging from marginally better than snapshot isolation to 15% below it, with traditional serializable techniques benchmarking as much as 70% below snapshot isolation.

> 5. Reasonable scalability. Shouldn't slow down noticeably when
> concurrent updaters are added as long as they don't conflict.

That should be no problem for this technique.

> 6. No tuning knobs. It should just work.

Well, I think some tuning knobs might be useful, but we can certainly offer working defaults.
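The anomaly at stake in point 1 is easiest to see concretely as write skew, the classic anomaly that snapshot isolation permits and true serializability forbids. Below is a minimal Python simulation; the `Database` and `Txn` classes are invented for illustration and are not PostgreSQL internals:

```python
# Illustrative sketch of write skew under snapshot isolation.
# Rule being protected: at least one doctor must remain on call.

class Database:
    def __init__(self, data):
        self.data = dict(data)

    def begin(self):
        return Txn(self)

class Txn:
    def __init__(self, db):
        self.db = db
        self.snapshot = dict(db.data)   # reads see the state at txn start
        self.writes = {}

    def read(self, key):
        return self.writes.get(key, self.snapshot[key])

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        # First-committer-wins would reject overlapping *write* sets,
        # but here the write sets are disjoint, so both commits succeed.
        self.db.data.update(self.writes)

db = Database({"alice_on_call": True, "bob_on_call": True})
t1, t2 = db.begin(), db.begin()

# Each transaction checks the rule against its own snapshot...
if t1.read("alice_on_call") and t1.read("bob_on_call"):
    t1.write("alice_on_call", False)
if t2.read("alice_on_call") and t2.read("bob_on_call"):
    t2.write("bob_on_call", False)

t1.commit()
t2.commit()   # disjoint writes: snapshot isolation lets both commit

print(db.data)  # both doctors end up off call -- the rule is broken,
                # even though each transaction checked it
```

No serial order of these two transactions could produce this final state, which is exactly why snapshot isolation alone is not spec-compliant serializability.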
Whether they should be exposed as knobs to the users or kept away from their control depends, in my view, on how much benefit there is to tweaking them for different environments and how big a foot-gun they represent. "No tuning knobs" seems an odd requirement to put on this one feature versus all other new features.

> Now let's discuss implementation. It may well be that there is no
> solution that totally satisfies all those requirements, so there's
> plenty of room for various tradeoffs to discuss.

Then they seem more like "desirable characteristics" than requirements, but OK.

> I think fully spec-compliant behavior is a hard requirement, or
> we'll find ourselves adding yet another isolation level in the next
> release to achieve it. The others are negotiable.

There's an odd dichotomy in the direction given in this area. On the one hand, I often see the advice to submit small patches which advance toward a goal without breaking anything, but then I see statements like this, which seem at odds with that notion. My personal inclination is to have a GUC (perhaps eliminated after the implementation is complete, performant, and well-tested) to enable the new techniques, initially defaulted to "off". There is a pretty clear path to a mature implementation through a series of iterations. That seems at least an order of magnitude more likely to succeed than trying to come up with a single, final patch.

-Kevin
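A rough sketch of the kind of bookkeeping the paper's technique relies on: each transaction carries an in-conflict and an out-conflict flag for read-write anti-dependencies, and a transaction that sets both flags (the "pivot" of a dangerous structure) is aborted rather than ever blocked. The names below are invented for illustration and are not PostgreSQL internals:

```python
# Hedged sketch of the two-flag conflict tracking described in the
# SSI paper: no full dependency graph, no blocking -- just cheap flags
# and an abort when a "dangerous structure" appears.

class SSITxn:
    def __init__(self, name):
        self.name = name
        self.in_conflict = False    # a concurrent txn has an rw-edge into us
        self.out_conflict = False   # we read data a concurrent txn overwrote

def record_rw_conflict(reader, writer):
    """reader read a version that writer replaced: edge reader -rw-> writer.

    Returns the transaction chosen for abort, or None if no dangerous
    structure has formed yet."""
    reader.out_conflict = True
    writer.in_conflict = True
    for txn in (reader, writer):
        if txn.in_conflict and txn.out_conflict:
            return txn              # pivot: may be part of a cycle, abort it
    return None

t1, t2, t3 = SSITxn("T1"), SSITxn("T2"), SSITxn("T3")

victim = record_rw_conflict(t1, t2)   # single edge: no abort needed
print(victim)                          # None

victim = record_rw_conflict(t2, t3)   # T2 now has both an in- and out-conflict
print(victim.name)                     # T2 -- aborted, never made to wait
```

The flags are a deliberate over-approximation of the full dependency graph: they can abort some transactions that would in fact have serialized cleanly, but they never block, which matches the "no blocking beyond snapshot isolation" property discussed above.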