From 9926bd4a937ccfeaa19e412064d9c46c39b42792 Mon Sep 17 00:00:00 2001
From: Peter Geoghegan
Date: Mon, 1 Oct 2018 15:51:53 -0700
Subject: [PATCH v13 3/7] Consider secondary factors during nbtree splits.

Teach nbtree to give some consideration to how "distinguishing" candidate
leaf page split points are.  This should not noticeably affect the balance
of free space within each half of the split, while still making suffix
truncation truncate away significantly more attributes on average.

The logic for choosing a leaf split point now uses a fallback mode in the
case where the page is full of duplicates and it isn't possible to find
even a minimally distinguishing split point.  When the page is full of
duplicates, the split should pack the left half very tightly, while
leaving the right half mostly empty.  Our assumption is that logical
duplicates will almost always be inserted in ascending heap TID order
with v4 indexes.  This strategy leaves most of the free space on the half
of the split that will likely be where future logical duplicates of the
same value need to be placed.

The number of cycles added is not very noticeable.  This is important
because deciding on a split point takes place while at least one
exclusive buffer lock is held.  We avoid using authoritative insertion
scankey comparisons to save cycles, unlike suffix truncation proper.  We
use a faster binary comparison instead.

Note that even pre-pg_upgrade'd v3 indexes make use of these
optimizations.  Benchmarking has shown that even v3 indexes benefit,
despite the fact that suffix truncation will only truncate non-key
attributes in INCLUDE indexes.  Grouping relatively similar tuples
together is beneficial in and of itself, since it reduces the number of
leaf pages that must be accessed by subsequent index scans.
--- src/backend/access/nbtree/Makefile | 2 +- src/backend/access/nbtree/README | 47 +- src/backend/access/nbtree/nbtinsert.c | 299 +-------- src/backend/access/nbtree/nbtsplitloc.c | 779 ++++++++++++++++++++++++ src/backend/access/nbtree/nbtutils.c | 49 ++ src/include/access/nbtree.h | 15 +- 6 files changed, 897 insertions(+), 294 deletions(-) create mode 100644 src/backend/access/nbtree/nbtsplitloc.c diff --git a/src/backend/access/nbtree/Makefile b/src/backend/access/nbtree/Makefile index bbb21d235c..9aab9cf64a 100644 --- a/src/backend/access/nbtree/Makefile +++ b/src/backend/access/nbtree/Makefile @@ -13,6 +13,6 @@ top_builddir = ../../../.. include $(top_builddir)/src/Makefile.global OBJS = nbtcompare.o nbtinsert.o nbtpage.o nbtree.o nbtsearch.o \ - nbtutils.o nbtsort.o nbtvalidate.o nbtxlog.o + nbtsplitloc.o nbtutils.o nbtsort.o nbtvalidate.o nbtxlog.o include $(top_srcdir)/src/backend/common.mk diff --git a/src/backend/access/nbtree/README b/src/backend/access/nbtree/README index be9bf61d47..cdd68b6f75 100644 --- a/src/backend/access/nbtree/README +++ b/src/backend/access/nbtree/README @@ -155,9 +155,9 @@ Lehman and Yao assume fixed-size keys, but we must deal with variable-size keys. Therefore there is not a fixed maximum number of keys per page; we just stuff in as many as will fit. When we split a page, we try to equalize the number of bytes, not items, assigned to -each of the resulting pages. Note we must include the incoming item in -this calculation, otherwise it is possible to find that the incoming -item doesn't fit on the split page where it needs to go! +pages (though suffix truncation is also considered). Note we must include +the incoming item in this calculation, otherwise it is possible to find +that the incoming item doesn't fit on the split page where it needs to go! The Deletion Algorithm ---------------------- @@ -657,6 +657,47 @@ variable-length types, such as text. 
An opclass support function could manufacture the shortest possible key value that still correctly separates each half of a leaf page split. +There are sophisticated criteria for choosing a leaf page split point. The +general idea is to make suffix truncation effective without unduly +influencing the balance of space for each half of the page split. The +choice of leaf split point can be thought of as a choice among points +*between* items on the page to be split, at least if you pretend that the +incoming tuple was placed on the page already (you have to pretend because +there won't actually be enough space for it on the page). Choosing the +split point between two index tuples where the first non-equal attribute +appears as early as possible results in truncating away as many suffix +attributes as possible. Evenly balancing space among each half of the +split is usually the first concern, but even small adjustments in the +precise split point can allow truncation to be far more effective. + +Suffix truncation is primarily valuable because it makes pivot tuples +smaller, which delays splits of internal pages, but that isn't the only +reason why it's effective. Even truncation that doesn't make pivot tuples +smaller due to alignment still prevents pivot tuples from being more +restrictive than truly necessary in how they describe which values belong +on which pages. + +While it's not possible to correctly perform suffix truncation during +internal page splits, it's still useful to be discriminating when splitting +an internal page. The split point chosen is the one that implies the smallest downlink to be inserted in +the parent, among those within an acceptable range of +the fillfactor-wise optimal split point. This idea also comes +from the Prefix B-Tree paper. This process has much in common with what +happens at the leaf level to make suffix truncation effective. 
The overall +effect is that suffix truncation tends to produce smaller and less +discriminating pivot tuples, especially early in the lifetime of the index, +while biasing internal page splits makes the earlier, less discriminating +pivot tuples end up in the root page, delaying root page splits. + +Logical duplicates are given special consideration. The logic for +selecting a split point goes to great lengths to avoid having duplicates +span more than one page, and almost always manages to pick a split point +between two user-key-distinct tuples, accepting a completely lopsided split +if it must. When a page that's already full of duplicates must be split, +the fallback strategy assumes that duplicates are mostly inserted in +ascending heap TID order. The page is split in a way that leaves the left +half of the page mostly full, and the right half of the page mostly empty. + Notes About Data Representation ------------------------------- diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c index 6f1d179c67..c666698c1e 100644 --- a/src/backend/access/nbtree/nbtinsert.c +++ b/src/backend/access/nbtree/nbtinsert.c @@ -28,26 +28,6 @@ /* Minimum tree height for application of fastpath optimization */ #define BTREE_FASTPATH_MIN_LEVEL 2 -typedef struct -{ - /* context data for _bt_checksplitloc */ - Size newitemsz; /* size of new item to be inserted */ - int fillfactor; /* needed when splitting rightmost page */ - bool is_leaf; /* T if splitting a leaf page */ - bool is_rightmost; /* T if splitting a rightmost page */ - OffsetNumber newitemoff; /* where the new item is to be inserted */ - int leftspace; /* space available for items on left page */ - int rightspace; /* space available for items on right page */ - int olddataitemstotal; /* space taken by old items */ - - bool have_split; /* found a valid split? 
*/ - - /* these fields valid only if have_split is true */ - bool newitemonleft; /* new item on left or right of best split */ - OffsetNumber firstright; /* best split point */ - int best_delta; /* best size delta so far */ -} FindSplitData; - static Buffer _bt_newroot(Relation rel, Buffer lbuf, Buffer rbuf); @@ -76,13 +56,6 @@ static Buffer _bt_split(Relation rel, BTScanInsert itup_key, Buffer buf, Size newitemsz, IndexTuple newitem, bool newitemonleft); static void _bt_insert_parent(Relation rel, Buffer buf, Buffer rbuf, BTStack stack, bool is_root, bool is_only); -static OffsetNumber _bt_findsplitloc(Relation rel, Page page, - OffsetNumber newitemoff, - Size newitemsz, - bool *newitemonleft); -static void _bt_checksplitloc(FindSplitData *state, - OffsetNumber firstoldonright, bool newitemonleft, - int dataitemstoleft, Size firstoldonrightsz); static bool _bt_pgaddtup(Page page, Size itemsize, IndexTuple itup, OffsetNumber itup_off); static bool _bt_isequal(TupleDesc itupdesc, BTScanInsert itup_key, @@ -324,7 +297,9 @@ top: * Sets state in itup_key sufficient for later _bt_findinsertloc() call to * reuse most of the work of our initial binary search to find conflicting * tuples. This won't be usable if caller's tuple is determined to not belong - * on buf following scantid being filled-in. + * on buf following scantid being filled-in, but that should be very rare in + * practice, since the logic for choosing a leaf split point works hard to + * avoid splitting within a group of duplicates. * * Returns InvalidTransactionId if there is no conflict, else an xact ID * we must wait for to see if it commits a conflicting tuple. If an actual @@ -913,8 +888,7 @@ _bt_useduplicatepage(Relation rel, Relation heapRel, Buffer buf, * * This recursive procedure does the following things: * - * + if necessary, splits the target page (making sure that the - * split is equitable as far as post-insert free space goes). + * + if necessary, splits the target page. 
* + inserts the tuple. * + if the page was split, pops the parent stack, and finds the * right place to insert the new child pointer (by walking @@ -1009,7 +983,7 @@ _bt_insertonpg(Relation rel, /* Choose the split point */ firstright = _bt_findsplitloc(rel, page, - newitemoff, itemsz, + newitemoff, itemsz, itup, &newitemonleft); /* split the buffer into left and right halves */ @@ -1345,6 +1319,11 @@ _bt_split(Relation rel, BTScanInsert itup_key, Buffer buf, Buffer cbuf, * to go into the new right page, or possibly a truncated version if this * is a leaf page split. This might be either the existing data item at * position firstright, or the incoming tuple. + * + * Lehman and Yao use the last left item as the new high key for the left + * page (on leaf level). Despite appearances, the new high key is + * generated in a way that's consistent with their approach. See comments + * above _bt_findsplitloc for an explanation. */ leftoff = P_HIKEY; if (!newitemonleft && newitemoff == firstright) @@ -1684,264 +1663,6 @@ _bt_split(Relation rel, BTScanInsert itup_key, Buffer buf, Buffer cbuf, return rbuf; } -/* - * _bt_findsplitloc() -- find an appropriate place to split a page. - * - * The idea here is to equalize the free space that will be on each split - * page, *after accounting for the inserted tuple*. (If we fail to account - * for it, we might find ourselves with too little room on the page that - * it needs to go into!) - * - * If the page is the rightmost page on its level, we instead try to arrange - * to leave the left split page fillfactor% full. In this way, when we are - * inserting successively increasing keys (consider sequences, timestamps, - * etc) we will end up with a tree whose pages are about fillfactor% full, - * instead of the 50% full result that we'd get without this special case. - * This is the same as nbtsort.c produces for a newly-created tree. Note - * that leaf and nonleaf pages use different fillfactors. 
- * - * We are passed the intended insert position of the new tuple, expressed as - * the offsetnumber of the tuple it must go in front of. (This could be - * maxoff+1 if the tuple is to go at the end.) - * - * We return the index of the first existing tuple that should go on the - * righthand page, plus a boolean indicating whether the new tuple goes on - * the left or right page. The bool is necessary to disambiguate the case - * where firstright == newitemoff. - */ -static OffsetNumber -_bt_findsplitloc(Relation rel, - Page page, - OffsetNumber newitemoff, - Size newitemsz, - bool *newitemonleft) -{ - BTPageOpaque opaque; - OffsetNumber offnum; - OffsetNumber maxoff; - ItemId itemid; - FindSplitData state; - int leftspace, - rightspace, - goodenough, - olddataitemstotal, - olddataitemstoleft; - bool goodenoughfound; - - opaque = (BTPageOpaque) PageGetSpecialPointer(page); - - /* Passed-in newitemsz is MAXALIGNED but does not include line pointer */ - newitemsz += sizeof(ItemIdData); - - /* Total free space available on a btree page, after fixed overhead */ - leftspace = rightspace = - PageGetPageSize(page) - SizeOfPageHeaderData - - MAXALIGN(sizeof(BTPageOpaqueData)); - - /* The right page will have the same high key as the old page */ - if (!P_RIGHTMOST(opaque)) - { - itemid = PageGetItemId(page, P_HIKEY); - rightspace -= (int) (MAXALIGN(ItemIdGetLength(itemid)) + - sizeof(ItemIdData)); - } - - /* Count up total space in data items without actually scanning 'em */ - olddataitemstotal = rightspace - (int) PageGetExactFreeSpace(page); - - state.newitemsz = newitemsz; - state.is_leaf = P_ISLEAF(opaque); - state.is_rightmost = P_RIGHTMOST(opaque); - state.have_split = false; - if (state.is_leaf) - state.fillfactor = RelationGetFillFactor(rel, - BTREE_DEFAULT_FILLFACTOR); - else - state.fillfactor = BTREE_NONLEAF_FILLFACTOR; - state.newitemonleft = false; /* these just to keep compiler quiet */ - state.firstright = 0; - state.best_delta = 0; - state.leftspace = 
leftspace; - state.rightspace = rightspace; - state.olddataitemstotal = olddataitemstotal; - state.newitemoff = newitemoff; - - /* - * Finding the best possible split would require checking all the possible - * split points, because of the high-key and left-key special cases. - * That's probably more work than it's worth; instead, stop as soon as we - * find a "good-enough" split, where good-enough is defined as an - * imbalance in free space of no more than pagesize/16 (arbitrary...) This - * should let us stop near the middle on most pages, instead of plowing to - * the end. - */ - goodenough = leftspace / 16; - - /* - * Scan through the data items and calculate space usage for a split at - * each possible position. - */ - olddataitemstoleft = 0; - goodenoughfound = false; - maxoff = PageGetMaxOffsetNumber(page); - - for (offnum = P_FIRSTDATAKEY(opaque); - offnum <= maxoff; - offnum = OffsetNumberNext(offnum)) - { - Size itemsz; - - itemid = PageGetItemId(page, offnum); - itemsz = MAXALIGN(ItemIdGetLength(itemid)) + sizeof(ItemIdData); - - /* - * Will the new item go to left or right of split? - */ - if (offnum > newitemoff) - _bt_checksplitloc(&state, offnum, true, - olddataitemstoleft, itemsz); - - else if (offnum < newitemoff) - _bt_checksplitloc(&state, offnum, false, - olddataitemstoleft, itemsz); - else - { - /* need to try it both ways! */ - _bt_checksplitloc(&state, offnum, true, - olddataitemstoleft, itemsz); - - _bt_checksplitloc(&state, offnum, false, - olddataitemstoleft, itemsz); - } - - /* Abort scan once we find a good-enough choice */ - if (state.have_split && state.best_delta <= goodenough) - { - goodenoughfound = true; - break; - } - - olddataitemstoleft += itemsz; - } - - /* - * If the new item goes as the last item, check for splitting so that all - * the old items go to the left page and the new item goes to the right - * page. 
- */ - if (newitemoff > maxoff && !goodenoughfound) - _bt_checksplitloc(&state, newitemoff, false, olddataitemstotal, 0); - - /* - * I believe it is not possible to fail to find a feasible split, but just - * in case ... - */ - if (!state.have_split) - elog(ERROR, "could not find a feasible split point for index \"%s\"", - RelationGetRelationName(rel)); - - *newitemonleft = state.newitemonleft; - return state.firstright; -} - -/* - * Subroutine to analyze a particular possible split choice (ie, firstright - * and newitemonleft settings), and record the best split so far in *state. - * - * firstoldonright is the offset of the first item on the original page - * that goes to the right page, and firstoldonrightsz is the size of that - * tuple. firstoldonright can be > max offset, which means that all the old - * items go to the left page and only the new item goes to the right page. - * In that case, firstoldonrightsz is not used. - * - * olddataitemstoleft is the total size of all old items to the left of - * firstoldonright. - */ -static void -_bt_checksplitloc(FindSplitData *state, - OffsetNumber firstoldonright, - bool newitemonleft, - int olddataitemstoleft, - Size firstoldonrightsz) -{ - int leftfree, - rightfree; - Size firstrightitemsz; - bool newitemisfirstonright; - - /* Is the new item going to be the first item on the right page? */ - newitemisfirstonright = (firstoldonright == state->newitemoff - && !newitemonleft); - - if (newitemisfirstonright) - firstrightitemsz = state->newitemsz; - else - firstrightitemsz = firstoldonrightsz; - - /* Account for all the old tuples */ - leftfree = state->leftspace - olddataitemstoleft; - rightfree = state->rightspace - - (state->olddataitemstotal - olddataitemstoleft); - - /* - * The first item on the right page becomes the high key of the left page; - * therefore it counts against left space as well as right space. 
When - * index has included attributes, then those attributes of left page high - * key will be truncated leaving that page with slightly more free space. - * However, that shouldn't affect our ability to find valid split - * location, because anyway split location should exists even without high - * key truncation. - */ - leftfree -= firstrightitemsz; - - /* account for the new item */ - if (newitemonleft) - leftfree -= (int) state->newitemsz; - else - rightfree -= (int) state->newitemsz; - - /* - * If we are not on the leaf level, we will be able to discard the key - * data from the first item that winds up on the right page. - */ - if (!state->is_leaf) - rightfree += (int) firstrightitemsz - - (int) (MAXALIGN(sizeof(IndexTupleData)) + sizeof(ItemIdData)); - - /* - * If feasible split point, remember best delta. - */ - if (leftfree >= 0 && rightfree >= 0) - { - int delta; - - if (state->is_rightmost) - { - /* - * If splitting a rightmost page, try to put (100-fillfactor)% of - * free space on left page. See comments for _bt_findsplitloc. - */ - delta = (state->fillfactor * leftfree) - - ((100 - state->fillfactor) * rightfree); - } - else - { - /* Otherwise, aim for equal free space on both sides */ - delta = leftfree - rightfree; - } - - if (delta < 0) - delta = -delta; - if (!state->have_split || delta < state->best_delta) - { - state->have_split = true; - state->newitemonleft = newitemonleft; - state->firstright = firstoldonright; - state->best_delta = delta; - } - } -} - /* * _bt_insert_parent() -- Insert downlink into parent after a page split. * diff --git a/src/backend/access/nbtree/nbtsplitloc.c b/src/backend/access/nbtree/nbtsplitloc.c new file mode 100644 index 0000000000..6027b1f1a1 --- /dev/null +++ b/src/backend/access/nbtree/nbtsplitloc.c @@ -0,0 +1,779 @@ +/*------------------------------------------------------------------------- + * + * nbtsplitloc.c + * Choose split point code for Postgres btree implementation. 
+ * + * Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * + * IDENTIFICATION + * src/backend/access/nbtree/nbtsplitloc.c + * + *------------------------------------------------------------------------- + */ +#include "postgres.h" + +#include "access/nbtree.h" +#include "storage/lmgr.h" + +/* limits on split interval */ +#define LEAF_SPLIT_INTERVAL 9 +#define INTERNAL_SPLIT_INTERVAL 3 + +typedef enum +{ + /* strategy for searching through materialized list of split points */ + SPLIT_DEFAULT, /* give some weight to truncation */ + SPLIT_MANY_DUPLICATES, /* find minimally distinguishing point */ + SPLIT_SINGLE_VALUE /* leave left page almost full */ +} SplitMode; + +typedef struct +{ + /* details of free space left by split */ + int curdelta; /* current leftfree/rightfree delta */ + int leftfree; /* space left on left page post-split */ + int rightfree; /* space left on right page post-split */ + + /* split point identifying fields (returned by _bt_findsplitloc) */ + OffsetNumber firstoldonright; /* first item on new right page */ + bool newitemonleft; /* new item goes on left, or right? 
*/ + +} SplitPoint; + +typedef struct +{ + /* context data for _bt_recordsplit */ + Size newitemsz; /* size of new item to be inserted */ + bool is_leaf; /* T if splitting a leaf page */ + OffsetNumber newitemoff; /* where the new item is to be inserted */ + int leftspace; /* space available for items on left page */ + int rightspace; /* space available for items on right page */ + int olddataitemstotal; /* space taken by old items */ + + /* candidate split point data */ + int maxsplits; /* maximum number of splits */ + int nsplits; /* current number of splits */ + SplitPoint *splits; /* all candidate split points for page */ + int splitinterval; /* current range of acceptable split points */ +} FindSplitData; + +static void _bt_recordsplit(FindSplitData *state, + OffsetNumber firstoldonright, bool newitemonleft, + int olddataitemstoleft, Size firstoldonrightsz); +static void _bt_deltasortsplits(FindSplitData *state, double fillfactormult, + bool usemult); +static int _bt_splitcmp(const void *arg1, const void *arg2); +static OffsetNumber _bt_bestsplitloc(Relation rel, Page page, + FindSplitData *state, int perfectpenalty, + OffsetNumber newitemoff, IndexTuple newitem, + bool *newitemonleft); +static int _bt_perfect_penalty(Relation rel, Page page, + FindSplitData *state, OffsetNumber newitemoff, + IndexTuple newitem, SplitMode *secondmode); +static int _bt_split_penalty(Relation rel, Page page, OffsetNumber newitemoff, + IndexTuple newitem, SplitPoint *split, bool is_leaf); +static inline IndexTuple _bt_split_lastleft(Page page, SplitPoint *split, + IndexTuple newitem, OffsetNumber newitemoff); +static inline IndexTuple _bt_split_firstright(Page page, SplitPoint *split, + IndexTuple newitem, OffsetNumber newitemoff); + + +/* + * _bt_findsplitloc() -- find an appropriate place to split a page. + * + * The main goal here is to equalize the free space that will be on each + * split page, *after accounting for the inserted tuple*. 
(If we fail to + * account for it, we might find ourselves with too little room on the page + * that it needs to go into!) + * + * If the page is the rightmost page on its level, we instead try to arrange + * to leave the left split page fillfactor% full. In this way, when we are + * inserting successively increasing keys (consider sequences, timestamps, + * etc) we will end up with a tree whose pages are about fillfactor% full, + * instead of the 50% full result that we'd get without this special case. + * This is the same as nbtsort.c produces for a newly-created tree. Note + * that leaf and nonleaf pages use different fillfactors. + * + * We are passed the intended insert position of the new tuple, expressed as + * the offsetnumber of the tuple it must go in front of (this could be + * maxoff+1 if the tuple is to go at the end). The new tuple itself is also + * passed, since it's needed to give some weight to how effective suffix + * truncation will be. The implementation picks the split point that + * maximizes the effectiveness of suffix truncation from a small list of + * alternative candidate split points that leave each side of the split with + * about the same share of free space. Suffix truncation is secondary to + * equalizing free space, except in cases with large numbers of duplicates. + * Note that it is always assumed that caller goes on to perform truncation, + * even with pg_upgrade'd indexes where that isn't actually the case + * (!heapkeyspace indexes). See nbtree/README for more information about + * suffix truncation. + * + * We return the index of the first existing tuple that should go on the + * righthand page, plus a boolean indicating whether the new tuple goes on + * the left or right page. The bool is necessary to disambiguate the case + * where firstright == newitemoff. 
+ * + * The high key for the left page is formed using the first item on the + * right page, which may seem to be contrary to Lehman & Yao's approach of + * using the left page's last item as its new high key on the leaf level. + * It isn't, though: suffix truncation will leave the left page's high key + * fully equal to the last item on the left page when two tuples with equal + * key values (excluding heap TID) enclose the split point. It isn't + * necessary for a new leaf high key to be equal to the last item on the + * left for the L&Y "subtree" invariant to hold. It's sufficient to make + * sure that the new leaf high key is strictly less than the first item on + * the right leaf page, and greater than the last item on the left page. + * When suffix truncation isn't possible, L&Y's exact approach to leaf + * splits is taken (actually, a tuple with all the keys from firstright but + * the heap TID from lastleft is formed, so as to not introduce a special + * case). + * + * Starting with the first right item minimizes the divergence between leaf + * and internal splits when checking if a candidate split point is legal. + * It is also inherently necessary for suffix truncation, since truncation + * is a subtractive process that specifically requires lastleft and + * firstright inputs. 
+ */ +OffsetNumber +_bt_findsplitloc(Relation rel, + Page page, + OffsetNumber newitemoff, + Size newitemsz, + IndexTuple newitem, + bool *newitemonleft) +{ + BTPageOpaque opaque; + int leftspace, + rightspace, + olddataitemstotal, + olddataitemstoleft, + perfectpenalty, + leaffillfactor; + FindSplitData state; + ItemId itemid; + OffsetNumber offnum, + maxoff, + foundfirstright; + SplitMode secondmode; + double fillfactormult; + bool usemult; + + opaque = (BTPageOpaque) PageGetSpecialPointer(page); + maxoff = PageGetMaxOffsetNumber(page); + + /* Total free space available on a btree page, after fixed overhead */ + leftspace = rightspace = + PageGetPageSize(page) - SizeOfPageHeaderData - + MAXALIGN(sizeof(BTPageOpaqueData)); + + /* The right page will have the same high key as the old page */ + if (!P_RIGHTMOST(opaque)) + { + itemid = PageGetItemId(page, P_HIKEY); + rightspace -= (int) (MAXALIGN(ItemIdGetLength(itemid)) + + sizeof(ItemIdData)); + } + + /* Count up total space in data items before actually scanning 'em */ + olddataitemstotal = rightspace - (int) PageGetExactFreeSpace(page); + leaffillfactor = RelationGetFillFactor(rel, BTREE_DEFAULT_FILLFACTOR); + + /* Passed-in newitemsz is MAXALIGNED but does not include line pointer */ + newitemsz += sizeof(ItemIdData); + state.newitemsz = newitemsz; + state.is_leaf = P_ISLEAF(opaque); + state.leftspace = leftspace; + state.rightspace = rightspace; + state.olddataitemstotal = olddataitemstotal; + state.newitemoff = newitemoff; + + /* + * Allocate work space for candidate split points. Round up allocation to + * BLCKSZ, so that palloc() will be able to recycle block later on, when a + * temp buffer is used by _bt_split(). The work space would almost be a + * full BLCKSZ even without this optimization. 
+ * + * maxsplits should never exceed maxoff because there will be at most as + * many candidate split points as there are points _between_ tuples, once + * you imagine that the new item is already on the original page (the + * final number of splits may be slightly lower because not all splits + * will be legal). Even still, add space for an extra two splits out of + * sheer paranoia. + */ + state.maxsplits = maxoff + 2; + state.splits = palloc(Max(BLCKSZ, sizeof(SplitPoint) * state.maxsplits)); + state.nsplits = 0; + + /* + * Scan through the data items and calculate space usage for a split at + * each possible position. We start at the first data offset rather than + * the second data offset to handle the "newitemoff == first data offset" + * case (otherwise, a split whose firstoldonright is the first data offset + * can't be legal, and won't actually end up being recorded by + * _bt_recordsplit). + * + * Still, it's typical for almost all calls to _bt_recordsplit to + * determine that the split is legal, and therefore enter it into the + * candidate split point array for later consideration. + */ + olddataitemstoleft = 0; + + for (offnum = P_FIRSTDATAKEY(opaque); + offnum <= maxoff; + offnum = OffsetNumberNext(offnum)) + { + Size itemsz; + + itemid = PageGetItemId(page, offnum); + itemsz = MAXALIGN(ItemIdGetLength(itemid)) + sizeof(ItemIdData); + + /* + * Will the new item go to left or right of split? 
+ */ + if (offnum > newitemoff) + _bt_recordsplit(&state, offnum, true, olddataitemstoleft, itemsz); + else if (offnum < newitemoff) + _bt_recordsplit(&state, offnum, false, olddataitemstoleft, itemsz); + else + { + /* may need to record a split on one or both sides of new item */ + _bt_recordsplit(&state, offnum, true, olddataitemstoleft, itemsz); + _bt_recordsplit(&state, offnum, false, olddataitemstoleft, itemsz); + } + + olddataitemstoleft += itemsz; + } + + /* + * If the new item goes as the last item, record the split point that + * leaves all the old items on the left page, and the new item on the + * right page. This is required because a split that leaves the new item + * as the firstoldonright won't have been reached within the loop. We + * always record every possible split point. + */ + Assert(olddataitemstoleft == olddataitemstotal); + if (newitemoff > maxoff) + _bt_recordsplit(&state, newitemoff, false, olddataitemstotal, 0); + + /* + * I believe it is not possible to fail to find a feasible split, but just + * in case ... + */ + if (state.nsplits == 0) + elog(ERROR, "could not find a feasible split point for index \"%s\"", + RelationGetRelationName(rel)); + + /* + * Start search for a split point among list of legal split points. Give + * primary consideration to equalizing available free space in each half + * of the split initially (start with default mode), while applying + * rightmost optimization where appropriate. Either of the two other + * fallback modes may be required for cases with a large number of + * duplicates around the original/space-optimal split point. + * + * Default mode gives some weight to suffix truncation in deciding a split + * point on leaf pages. It attempts to select a split point where a + * distinguishing attribute appears earlier in the new high key for the + * left side of the split, in order to maximize the number of trailing + * attributes that can be truncated away. 
Only candidate split points + * that imply an acceptable balance of free space on each side are + * considered. + */ + if (!state.is_leaf) + { + /* fillfactormult only used on rightmost page */ + usemult = P_RIGHTMOST(opaque); + fillfactormult = BTREE_NONLEAF_FILLFACTOR / 100.0; + } + else if (P_RIGHTMOST(opaque)) + { + /* Rightmost leaf page -- fillfactormult always used */ + usemult = true; + fillfactormult = leaffillfactor / 100.0; + } + else + { + /* Other leaf page. 50:50 page split. */ + usemult = false; + fillfactormult = 0.50; + } + + /* Give split points a delta, based on fillfactormult, and sort */ + _bt_deltasortsplits(&state, fillfactormult, usemult); + + /* + * Set an initial limit on the split interval/number of candidate split + * points as appropriate. The "Prefix B-Trees" paper refers to this as + * sigma l for leaf splits and sigma b for internal ("branch") splits. + * It's hard to provide a theoretical justification for the initial size + * of the split interval, though it's clear that a small split interval + * makes suffix truncation much more effective without noticeably + * affecting space utilization over time. + */ + if (!state.is_leaf) + state.splitinterval = INTERNAL_SPLIT_INTERVAL; + else + state.splitinterval = Min(Max(3, maxoff * 0.05), LEAF_SPLIT_INTERVAL); + + /* + * Find lowest possible penalty among split points currently regarded as + * acceptable -- the "perfect" penalty. The perfect penalty often saves + * _bt_bestsplitloc() additional work around calculating penalties. This + * is also a convenient point to determine if default mode worked out, or + * if we should instead reassess which split points should be considered + * acceptable (split interval, and possibly fillfactormult). + */ + perfectpenalty = _bt_perfect_penalty(rel, page, &state, newitemoff, + newitem, &secondmode); + + /* + * Reconsider strategy when call to _bt_perfect_penalty() tells us to, + * using the second mode it indicated. 
We do all we can to avoid having + * to append a heap TID in the new high key when default mode and its + * initial fillfactormult won't be able to avoid that, including enlarging + * split interval to consider all possible split points. + * + * Many duplicates mode may be used when a heap TID would otherwise be + * appended, but the page isn't completely full of logical duplicates + * (there may be as few as two distinct values). The split interval is + * widened to include all possible candidate split points. Many + * duplicates mode has no hard requirements for space utilization, though + * it still keeps the use of space balanced as a non-binding secondary + * goal (perfect penalty is set so that the first/lowest delta split + * point that's minimally distinguishing is used). + * + * Single value mode is used when it is impossible to avoid appending a + * heap TID. It arranges to leave the left page very full. This + * maximizes space utilization in cases where tuples with the same + * attribute values span many pages. Newly inserted duplicates will tend + * to have higher heap TID values, so we'll end up splitting to the right + * consistently. (Single value mode is harmless though not particularly + * useful with !heapkeyspace indexes.) 
+ */ + if (secondmode == SPLIT_MANY_DUPLICATES) + { + Assert(state.is_leaf); + /* No need to resort splits -- no change in fillfactormult/deltas */ + state.splitinterval = state.nsplits; + /* Settle for lowest delta split that avoids appending heap TID */ + perfectpenalty = IndexRelationGetNumberOfKeyAttributes(rel); + } + else if (secondmode == SPLIT_SINGLE_VALUE) + { + Assert(state.is_leaf); + /* Split towards the end of the page */ + usemult = true; + fillfactormult = BTREE_SINGLEVAL_FILLFACTOR / 100.0; + /* Resort split points with new delta */ + _bt_deltasortsplits(&state, fillfactormult, usemult); + state.splitinterval = 1; + /* Accept that appending a heap TID is inevitable */ + perfectpenalty = IndexRelationGetNumberOfKeyAttributes(rel) + 1; + } + else + { + /* Common case: default mode worked out, or internal page */ + Assert(secondmode == SPLIT_DEFAULT); + /* Original perfectpenalty still stands */ + } + + /* + * Search among acceptable split points (the first splitinterval points) + * for the entry that has the lowest penalty, and is therefore expected to + * maximize fan-out. Sets *newitemonleft for us. + */ + foundfirstright = _bt_bestsplitloc(rel, page, &state, perfectpenalty, + newitemoff, newitem, newitemonleft); + pfree(state.splits); + + return foundfirstright; +} + +/* + * Subroutine to record a particular point between two tuples (possibly the + * new item) on page (ie, combination of firstright and newitemonleft + * settings) in *state for later analysis. This is also a convenient point + * to check if the split is legal (if it isn't, it won't be recorded). + * + * firstoldonright is the offset of the first item on the original page that + * goes to the right page, and firstoldonrightsz is the size of that tuple. + * firstoldonright can be > max offset, which means that all the old items go + * to the left page and only the new item goes to the right page. In that + * case, firstoldonrightsz is not used. 
+ * + * olddataitemstoleft is the total size of all old items to the left of the + * split point that is recorded here when legal. Should not include + * newitemsz, since that is handled here. + */ +static void +_bt_recordsplit(FindSplitData *state, + OffsetNumber firstoldonright, + bool newitemonleft, + int olddataitemstoleft, + Size firstoldonrightsz) +{ + int leftfree, + rightfree; + Size firstrightitemsz; + bool newitemisfirstonright; + + /* Is the new item going to be the first item on the right page? */ + newitemisfirstonright = (firstoldonright == state->newitemoff + && !newitemonleft); + + if (newitemisfirstonright) + firstrightitemsz = state->newitemsz; + else + firstrightitemsz = firstoldonrightsz; + + /* Account for all the old tuples */ + leftfree = state->leftspace - olddataitemstoleft; + rightfree = state->rightspace - + (state->olddataitemstotal - olddataitemstoleft); + + /* + * The first item on the right page becomes the high key of the left page; + * therefore it counts against left space as well as right space (we + * cannot assume that suffix truncation will make it any smaller). When + * the index has included attributes, those attributes of the left page's + * high key will be truncated, leaving that page with slightly more free + * space. However, that shouldn't affect our ability to find a valid + * split location, since we err in the direction of being pessimistic + * about free space on the left half. Besides, even when suffix + * truncation of non-TID attributes occurs, the new high key often won't + * even be a single MAXALIGN() quantum smaller than the firstright tuple + * it's based on. + * + * If we are on the leaf level, assume that suffix truncation cannot avoid + * adding a heap TID to the left half's new high key. In practice the new + * high key will often be smaller and will rarely be larger, but + * conservatively assume the worst case. 
+ */ + if (state->is_leaf) + leftfree -= (int) (firstrightitemsz + + MAXALIGN(sizeof(ItemPointerData))); + else + leftfree -= (int) firstrightitemsz; + + /* account for the new item */ + if (newitemonleft) + leftfree -= (int) state->newitemsz; + else + rightfree -= (int) state->newitemsz; + + /* + * If we are not on the leaf level, we will be able to discard the key + * data from the first item that winds up on the right page. + */ + if (!state->is_leaf) + rightfree += (int) firstrightitemsz - + (int) (MAXALIGN(sizeof(IndexTupleData)) + sizeof(ItemIdData)); + + /* Record split if legal */ + if (leftfree >= 0 && rightfree >= 0) + { + /* 2 extra items in maxsplits shouldn't be necessary */ + Assert(state->nsplits < state->maxsplits - 2); + + state->splits[state->nsplits].curdelta = 0; + state->splits[state->nsplits].leftfree = leftfree; + state->splits[state->nsplits].rightfree = rightfree; + state->splits[state->nsplits].firstoldonright = firstoldonright; + state->splits[state->nsplits].newitemonleft = newitemonleft; + state->nsplits++; + } +} + +/* + * Subroutine to assign space deltas to materialized array of candidate split + * points based on current fillfactor, and to sort array using that fillfactor + */ +static void +_bt_deltasortsplits(FindSplitData *state, double fillfactormult, + bool usemult) +{ + for (int i = 0; i < state->nsplits; i++) + { + SplitPoint *split = state->splits + i; + int delta; + + if (usemult) + delta = fillfactormult * split->leftfree - + (1.0 - fillfactormult) * split->rightfree; + else + delta = split->leftfree - split->rightfree; + + if (delta < 0) + delta = -delta; + + /* Save delta */ + split->curdelta = delta; + } + + qsort(state->splits, state->nsplits, sizeof(SplitPoint), _bt_splitcmp); +} + +/* + * qsort-style comparator used by _bt_deltasortsplits() + */ +static int +_bt_splitcmp(const void *arg1, const void *arg2) +{ + SplitPoint *split1 = (SplitPoint *) arg1; + SplitPoint *split2 = (SplitPoint *) arg2; + + if 
(split1->curdelta > split2->curdelta) + return 1; + if (split1->curdelta < split2->curdelta) + return -1; + + return 0; +} + +/* + * Subroutine to find the "best" split point among an array of acceptable + * candidate split points that split the page without an excessively high + * delta between the free space left on the left and right halves. The "best" + * split point is the split point with the lowest penalty, which is an + * abstract idea whose definition varies depending on whether we're splitting + * at the leaf level, or an internal level. See _bt_split_penalty() for the + * definition. + * + * "perfectpenalty" is assumed to be the lowest possible penalty among + * candidate split points. This allows us to return early without wasting + * cycles on calculating the first differing attribute for all candidate + * splits when that clearly cannot improve our choice. This optimization is + * important for several common cases, including insertion into a primary key + * index on an auto-incremented or monotonically increasing integer column. + * + * We return the index of the first existing tuple that should go on the + * righthand page, plus a boolean indicating whether the new item goes on the + * left of the split point. + */ +static OffsetNumber +_bt_bestsplitloc(Relation rel, + Page page, + FindSplitData *state, + int perfectpenalty, + OffsetNumber newitemoff, + IndexTuple newitem, + bool *newitemonleft) +{ + int bestpenalty, + lowsplit; + int highsplit = Min(state->splitinterval, state->nsplits); + + /* + * No point in calculating penalty when there's only one choice. Note + * that single value mode always has one choice. 
+ */ + if (state->nsplits == 1) + { + *newitemonleft = state->splits[0].newitemonleft; + return state->splits[0].firstoldonright; + } + + bestpenalty = INT_MAX; + lowsplit = 0; + for (int i = lowsplit; i < highsplit; i++) + { + int penalty; + + penalty = _bt_split_penalty(rel, page, newitemoff, newitem, + state->splits + i, state->is_leaf); + + if (penalty <= perfectpenalty) + { + bestpenalty = penalty; + lowsplit = i; + break; + } + + if (penalty < bestpenalty) + { + bestpenalty = penalty; + lowsplit = i; + } + } + + *newitemonleft = state->splits[lowsplit].newitemonleft; + return state->splits[lowsplit].firstoldonright; +} + +/* + * Subroutine to find the lowest possible penalty for any acceptable candidate + * split point when still in default mode. This may be lower than any real + * penalty for any of the candidate split points, in which case the + * optimization is ineffective. Split penalties are discrete rather than + * continuous, so an actually-obtainable penalty is common. + * + * This is also a convenient point to decide to either finish splitting the + * page using default mode, or, alternatively, to consider alternative modes. + * (This can only happen with leaf pages.) + */ +static int +_bt_perfect_penalty(Relation rel, Page page, FindSplitData *state, + OffsetNumber newitemoff, IndexTuple newitem, + SplitMode *secondmode) +{ + ItemId itemid; + IndexTuple leftmost, + rightmost; + SplitPoint *low, + *high; + int perfectpenalty; + int indnkeyatts = IndexRelationGetNumberOfKeyAttributes(rel); + + /* Assume that alternative-mode split won't be required for now */ + *secondmode = SPLIT_DEFAULT; + + /* + * There are far fewer candidate split points when splitting an internal + * page, so we can afford to be exhaustive. Only give up when the pivot + * that will be inserted into the parent is as small as possible. 
+ */ + if (!state->is_leaf) + return MAXALIGN(sizeof(IndexTupleData) + 1); + + /* + * Use leftmost and rightmost tuples within current acceptable range of + * split points (using current split interval). + */ + low = state->splits; + high = state->splits + (Min(state->splitinterval, state->nsplits) - 1); + leftmost = _bt_split_lastleft(page, low, newitem, newitemoff); + rightmost = _bt_split_firstright(page, high, newitem, newitemoff); + perfectpenalty = _bt_keep_natts_fast(rel, leftmost, rightmost); + + /* + * Work out which type of second pass caller should perform, if any, when + * even their "perfect" penalty fails to avoid appending a heap TID to new + * pivot tuple + */ + if (perfectpenalty > indnkeyatts) + { + BTPageOpaque opaque; + OffsetNumber maxoff; + int origpagepenalty; + + opaque = (BTPageOpaque) PageGetSpecialPointer(page); + maxoff = PageGetMaxOffsetNumber(page); + + /* + * If page has many duplicates but is not entirely full of duplicates, + * a many duplicates mode split will be performed. If page is + * entirely full of duplicates and it appears that the duplicates have + * been inserted in sequential order (i.e. heap TID order), a single + * value mode split will be performed. + * + * Deliberately ignore new item here, since a split that leaves only + * one item on either page is often deemed unworkable by + * _bt_recordsplit(). 
+ */ + itemid = PageGetItemId(page, P_FIRSTDATAKEY(opaque)); + leftmost = (IndexTuple) PageGetItem(page, itemid); + itemid = PageGetItemId(page, maxoff); + rightmost = (IndexTuple) PageGetItem(page, itemid); + origpagepenalty = _bt_keep_natts_fast(rel, leftmost, rightmost); + + if (origpagepenalty <= indnkeyatts) + *secondmode = SPLIT_MANY_DUPLICATES; + else if (P_RIGHTMOST(opaque)) + *secondmode = SPLIT_SINGLE_VALUE; + else + { + itemid = PageGetItemId(page, P_HIKEY); + if (ItemIdGetLength(itemid) != + IndexTupleSize(newitem) + MAXALIGN(sizeof(ItemPointerData))) + *secondmode = SPLIT_SINGLE_VALUE; + else + { + IndexTuple hikey; + + hikey = (IndexTuple) PageGetItem(page, itemid); + origpagepenalty = _bt_keep_natts_fast(rel, hikey, newitem); + if (origpagepenalty <= indnkeyatts) + *secondmode = SPLIT_SINGLE_VALUE; + } + } + + /* + * Have caller finish split using default mode when new item does not + * appear to be a duplicate that's inserted into the rightmost page + * that duplicates can be found on (found by a scan that omits + * scantid). Evenly sharing space among each half of the split avoids + * pathological performance. + * + * Note that single value mode should generally still be used when + * duplicate insertions have heap TIDs that are slightly out of order. + * That's probably due to concurrency. + */ + } + + return perfectpenalty; +} + +/* + * Subroutine to find penalty for caller's candidate split point. + * + * On leaf pages, penalty is the attribute number that distinguishes each side + * of a split. It's the last attribute that needs to be included in new high + * key for left page. It can be greater than the number of key attributes in + * cases where a heap TID will need to be appended during truncation. + * + * On internal pages, penalty is simply the size of the first item on the + * right half of the split (excluding ItemId overhead) which becomes the new + * high key for the left page. 
+ */ +static int +_bt_split_penalty(Relation rel, Page page, OffsetNumber newitemoff, + IndexTuple newitem, SplitPoint *split, bool is_leaf) +{ + IndexTuple lastleftuple; + IndexTuple firstrighttuple; + + firstrighttuple = _bt_split_firstright(page, split, newitem, newitemoff); + + if (!is_leaf) + return IndexTupleSize(firstrighttuple); + + lastleftuple = _bt_split_lastleft(page, split, newitem, newitemoff); + + Assert(lastleftuple != firstrighttuple); + return _bt_keep_natts_fast(rel, lastleftuple, firstrighttuple); +} + +/* + * Subroutine to get a lastleft IndexTuple for a split point from page + */ +static inline IndexTuple +_bt_split_lastleft(Page page, SplitPoint *split, IndexTuple newitem, + OffsetNumber newitemoff) +{ + ItemId itemid; + + if (split->newitemonleft && split->firstoldonright == newitemoff) + return newitem; + + itemid = PageGetItemId(page, OffsetNumberPrev(split->firstoldonright)); + return (IndexTuple) PageGetItem(page, itemid); +} + +/* + * Subroutine to get a firstright IndexTuple for a split point from page + */ +static inline IndexTuple +_bt_split_firstright(Page page, SplitPoint *split, IndexTuple newitem, + OffsetNumber newitemoff) +{ + ItemId itemid; + + if (!split->newitemonleft && split->firstoldonright == newitemoff) + return newitem; + + itemid = PageGetItemId(page, split->firstoldonright); + return (IndexTuple) PageGetItem(page, itemid); +} diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c index 15090b26d2..146de1b2e4 100644 --- a/src/backend/access/nbtree/nbtutils.c +++ b/src/backend/access/nbtree/nbtutils.c @@ -22,6 +22,7 @@ #include "access/relscan.h" #include "miscadmin.h" #include "utils/array.h" +#include "utils/datum.h" #include "utils/lsyscache.h" #include "utils/memutils.h" #include "utils/rel.h" @@ -2318,6 +2319,54 @@ _bt_keep_natts(Relation rel, IndexTuple lastleft, IndexTuple firstright, return keepnatts; } +/* + * _bt_keep_natts_fast - fast, approximate variant of _bt_keep_natts. 
+ * + * This is exported so that a candidate split point can have its effect on + * suffix truncation inexpensively evaluated ahead of time when finding a + * split location. A naive bitwise approach to datum comparisons is used to + * save cycles. This is inherently approximate, but usually provides the same + * answer as the authoritative approach that _bt_keep_natts takes, since the + * vast majority of types in Postgres cannot be equal according to any + * available opclass unless they're bitwise equal. + * + * This can return a number of attributes that is one greater than the + * number of key attributes for the index relation. This indicates that the + * caller must use a heap TID as a unique-ifier in the new pivot tuple. + */ +int +_bt_keep_natts_fast(Relation rel, IndexTuple lastleft, IndexTuple firstright) +{ + TupleDesc itupdesc = RelationGetDescr(rel); + int keysz = IndexRelationGetNumberOfKeyAttributes(rel); + int keepnatts; + + keepnatts = 1; + for (int attnum = 1; attnum <= keysz; attnum++) + { + Datum datum1, + datum2; + bool isNull1, + isNull2; + Form_pg_attribute att; + + datum1 = index_getattr(lastleft, attnum, itupdesc, &isNull1); + datum2 = index_getattr(firstright, attnum, itupdesc, &isNull2); + att = TupleDescAttr(itupdesc, attnum - 1); + + if (isNull1 != isNull2) + break; + + if (!isNull1 && + !datumIsEqual(datum1, datum2, att->attbyval, att->attlen)) + break; + + keepnatts++; + } + + return keepnatts; +} + /* * _bt_check_natts() -- Verify tuple has expected number of attributes. * diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h index c24b9a7c37..31815c63fe 100644 --- a/src/include/access/nbtree.h +++ b/src/include/access/nbtree.h @@ -168,11 +168,15 @@ typedef struct BTMetaPageData * For pages above the leaf level, we use a fixed 70% fillfactor. * The fillfactor is applied during index build and when splitting * a rightmost page; when splitting non-rightmost pages we try to - divide the data equally. 
+ * divide the data equally. When splitting a page that's entirely + * filled with a single value (duplicates), the effective leaf-page + * fillfactor is 96%, regardless of whether the page is a rightmost + * page. */ #define BTREE_MIN_FILLFACTOR 10 #define BTREE_DEFAULT_FILLFACTOR 90 #define BTREE_NONLEAF_FILLFACTOR 70 +#define BTREE_SINGLEVAL_FILLFACTOR 96 /* * In general, the btree code tries to localize its knowledge about @@ -681,6 +685,13 @@ extern bool _bt_doinsert(Relation rel, IndexTuple itup, extern Buffer _bt_getstackbuf(Relation rel, BTStack stack); extern void _bt_finish_split(Relation rel, Buffer bbuf, BTStack stack); +/* + * prototypes for functions in nbtsplitloc.c + */ +extern OffsetNumber _bt_findsplitloc(Relation rel, Page page, + OffsetNumber newitemoff, Size newitemsz, IndexTuple newitem, + bool *newitemonleft); + /* * prototypes for functions in nbtpage.c */ @@ -747,6 +758,8 @@ extern bool btproperty(Oid index_oid, int attno, bool *res, bool *isnull); extern IndexTuple _bt_truncate(Relation rel, IndexTuple lastleft, IndexTuple firstright, BTScanInsert itup_key); +extern int _bt_keep_natts_fast(Relation rel, IndexTuple lastleft, + IndexTuple firstright); extern bool _bt_check_natts(Relation rel, bool heapkeyspace, Page page, OffsetNumber offnum); extern void _bt_check_third_page(Relation rel, Relation heap, -- 2.17.1