Re: Stating the significance of Lehman & Yao in the nbtree README - Mailing list pgsql-hackers

From: Peter Geoghegan
Subject: Re: Stating the significance of Lehman & Yao in the nbtree README
Date:
Msg-id: CAM3SWZSUmb5c-8xGQspwK6MUwidL8c8vmrg-L4n3Z06Oap3Uaw@mail.gmail.com
In response to: Re: Stating the significance of Lehman & Yao in the nbtree README (Kevin Grittner <kgrittn@ymail.com>)
List: pgsql-hackers
On Fri, Sep 12, 2014 at 12:39 PM, Kevin Grittner <kgrittn@ymail.com> wrote:
> It's been a while since I read that paper, but my recollection is
> that they assumed that each process or thread looking at a buffer
> would have its own private copy of that buffer, which it could be
> sure nobody was changing (even if the "master" copy somewhere else
> was changing).  Locking was only needed to prevent conflicting
> writes.  Now, whether it is safe to assume that creating a
> process-local buffer and copying to it is cheaper than getting a
> lock seems dicey, but that seemed to be the implicit assumption.

That is one way to make reads atomic, but I don't recall any explicit
mention of it. In 1981, I think page sizes were about the same as
today's, but 4K was a lot of memory. We could do this with some work;
I believe it has been implemented elsewhere, though. Note that L&Y
have practically nothing to say about deletion - they simply suggest
that it be done offline.

It is really useful that we can recover from concurrent page splits as
and when we encounter them. That's what I'd like to prominently convey -
it is the whole point of L&Y.

-- 
Peter Geoghegan


