Re: Serializable Isolation without blocking - Mailing list pgsql-hackers
From: Robert Haas
Subject: Re: Serializable Isolation without blocking
Date:
Msg-id: 603c8f070912311217i165de093r42775fcf2ad38887@mail.gmail.com
In response to: Re: Serializable Isolation without blocking ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
Responses: Re: Serializable Isolation without blocking
List: pgsql-hackers
On Thu, Dec 31, 2009 at 1:43 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> Robert Haas wrote:
>> It seems to me that the hard part of this problem is to describe
>> the general mechanism by which conflicts will be detected, with
>> specific references to the types of data structures that will be
>> used to hold that information.
>
> Well, the general approach to tracking SIREAD locks I had in mind is
> to keep them in the existing lock data structures.  I have in mind to
> use multiple granularities, with automatic escalation to coarser
> granularities at thresholds, to keep RAM usage reasonable.

OK.  I think it will become more clear whether the existing lock data
structures are adequate as you move into detailed design.  It doesn't
seem critical to make a final decision about that right now.  One bad
thing about using the existing lock structures is that they are
entirely in shared memory, which limits how large they can be.  If you
should find out that you're going to need more work space than can be
conveniently accommodated in shared memory, you will have to think
about other options.  But I don't know for sure whether that will be
the case.  The fact that the locks need to be kept around until
transactions other than the owner commit is certainly going to drive
the size up.

> There are
> clearly some tough problems with the pluggable indexes, types,
> operators, and such that will take time to sort out an acceptable
> implementation at any fine-grained level, so my intent is to punt
> those to very coarse granularity in the first pass, with "XXX SIREAD
> optimization opportunity" comments where that's not a production-
> quality solution or it just seems likely that we can do better with
> some work.

It seems to me that if you lock the heap (either individual rows, or
the whole thing) none of that stuff really matters.  It might defeat
future optimizations such as index-only scans in some cases, and it
might create full-table locks in situations where a more intelligent
implementation might use less than a full-table lock, but those may
be (probably are) prices you are willing to pay.

As an overall design comment, I sometimes find that it helps to
create a working implementation of something, even if I know that the
performance will suck or that the result will not be committable for
other reasons.  There is often value to that just in terms of getting
your head around the parts of the code that need to be modified.  I
wonder if you couldn't start with something ridiculously poor, like
maybe an S2PL implementation with only table-level granularity - just
make any operation that reads or writes a table grab an ACCESS
EXCLUSIVE lock until transaction commit.  Convince yourself that it
is CORRECT - forget performance.  Then either change the locks to
SIREAD, or try to weaken the locks to row-level in certain cases.
Then do the other one.  It'll take you a while before you have
something that can seriously be considered for commit, but that's not
the point.  The point is you'll have working code that you can fool
with.  And use git so you can keep merging up to CVS HEAD easily.

> And thanks for the feedback.  :-)

Sure thing.  :-)

...Robert
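To make the bootstrapping idea above concrete, here is a minimal
sketch of what the crude table-level S2PL starting point could look
like inside the server.  This is not from any actual patch: the
function name and the notion of calling it from every heap read/write
path are invented for illustration, although IsolationIsSerializable(),
LockRelationOid(), RelationGetRelid(), and AccessExclusiveLock are
real PostgreSQL interfaces.

    /*
     * Illustrative sketch only (not real patch code): strict two-phase
     * locking at table granularity.  Every read or write of a table
     * takes an ACCESS EXCLUSIVE lock that is held until transaction
     * end - trivially serializable, terrible for concurrency.
     */
    #include "postgres.h"
    #include "access/xact.h"
    #include "storage/lmgr.h"
    #include "utils/rel.h"

    static void
    crude_serializable_lock_relation(Relation rel)
    {
        /* Only SERIALIZABLE transactions pay this cost. */
        if (!IsolationIsSerializable())
            return;

        /*
         * Table-level exclusive lock, released by the normal
         * end-of-transaction lock-release path, i.e. strict 2PL.
         * Later passes would weaken this to SIREAD and/or to
         * row-level granularity, as described above.
         */
        LockRelationOid(RelationGetRelid(rel), AccessExclusiveLock);
    }

Because the lock goes through the regular lock manager, the existing
deadlock detector and end-of-transaction cleanup apply unchanged,
which is part of what makes this a cheap way to get a provably
correct baseline before any performance work.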