From a2ec7b69d41012f191c374de2c20639c99d8a00f Mon Sep 17 00:00:00 2001 From: Peter Geoghegan Date: Mon, 1 Oct 2018 15:51:53 -0700 Subject: [PATCH v12 3/7] Pick nbtree split points discerningly. Add infrastructure to weigh how effective suffix truncation will be when choosing a split point. This should not noticeably affect the balance of free space within each half of the split, while still making suffix truncation truncate away significantly more attributes on average. The logic for choosing a split point is also taught to care about the case where there are many duplicates, making it hard to find a distinguishing split point. It may even conclude that the page being split is already full of logical duplicates, in which case it packs the left half very tightly, while leaving the right half mostly empty. Our assumption is that logical duplicates will almost always be inserted in ascending heap TID order. This strategy leaves most of the free space on the half of the split that will likely be where future logical duplicates of the same value need to be placed. The number of cycles added is not very noticeable. This is important because deciding on a split point takes place while at least one exclusive buffer lock is held. We avoid using authoritative insertion scankey comparisons to save cycles, unlike suffix truncation proper. We use a faster binary comparison instead. Note that even pre-pg_upgrade'd v3 indexes make use of these optimizations. 
--- src/backend/access/nbtree/Makefile | 2 +- src/backend/access/nbtree/README | 47 +- src/backend/access/nbtree/nbtinsert.c | 299 +-------- src/backend/access/nbtree/nbtsplitloc.c | 855 ++++++++++++++++++++++++ src/backend/access/nbtree/nbtutils.c | 49 ++ src/include/access/nbtree.h | 15 +- 6 files changed, 973 insertions(+), 294 deletions(-) create mode 100644 src/backend/access/nbtree/nbtsplitloc.c diff --git a/src/backend/access/nbtree/Makefile b/src/backend/access/nbtree/Makefile index bbb21d235c..9aab9cf64a 100644 --- a/src/backend/access/nbtree/Makefile +++ b/src/backend/access/nbtree/Makefile @@ -13,6 +13,6 @@ top_builddir = ../../../.. include $(top_builddir)/src/Makefile.global OBJS = nbtcompare.o nbtinsert.o nbtpage.o nbtree.o nbtsearch.o \ - nbtutils.o nbtsort.o nbtvalidate.o nbtxlog.o + nbtsplitloc.o nbtutils.o nbtsort.o nbtvalidate.o nbtxlog.o include $(top_srcdir)/src/backend/common.mk diff --git a/src/backend/access/nbtree/README b/src/backend/access/nbtree/README index be9bf61d47..cdd68b6f75 100644 --- a/src/backend/access/nbtree/README +++ b/src/backend/access/nbtree/README @@ -155,9 +155,9 @@ Lehman and Yao assume fixed-size keys, but we must deal with variable-size keys. Therefore there is not a fixed maximum number of keys per page; we just stuff in as many as will fit. When we split a page, we try to equalize the number of bytes, not items, assigned to -each of the resulting pages. Note we must include the incoming item in -this calculation, otherwise it is possible to find that the incoming -item doesn't fit on the split page where it needs to go! +pages (though suffix truncation is also considered). Note we must include +the incoming item in this calculation, otherwise it is possible to find +that the incoming item doesn't fit on the split page where it needs to go! The Deletion Algorithm ---------------------- @@ -657,6 +657,47 @@ variable-length types, such as text. 
An opclass support function could manufacture the shortest possible key value that still correctly separates each half of a leaf page split. +There are sophisticated criteria for choosing a leaf page split point. The +general idea is to make suffix truncation effective without unduly +influencing the balance of space for each half of the page split. The +choice of leaf split point can be thought of as a choice among points +*between* items on the page to be split, at least if you pretend that the +incoming tuple was placed on the page already (you have to pretend because +there won't actually be enough space for it on the page). Choosing the +split point between two index tuples where the first non-equal attribute +appears as early as possible results in truncating away as many suffix +attributes as possible. Evenly balancing space among each half of the +split is usually the first concern, but even small adjustments in the +precise split point can allow truncation to be far more effective. + +Suffix truncation is primarily valuable because it makes pivot tuples +smaller, which delays splits of internal pages, but that isn't the only +reason why it's effective. Even truncation that doesn't make pivot tuples +smaller due to alignment still prevents pivot tuples from being more +restrictive than truly necessary in how they describe which values belong +on which pages. + +While it's not possible to correctly perform suffix truncation during +internal page splits, it's still useful to be discriminating when splitting +an internal page. The chosen split point is the one that implies the +smallest downlink to be inserted in the parent, among split points within +an acceptable range of the fillfactor-wise optimal split point. This idea +also comes from the Prefix B-Tree paper. This process has much in common +with what +happens at the leaf level to make suffix truncation effective.
The overall +effect is that suffix truncation tends to produce smaller and less +discriminating pivot tuples, especially early in the lifetime of the index, +while biasing internal page splits makes the earlier, less discriminating +pivot tuples end up in the root page, delaying root page splits. + +Logical duplicates are given special consideration. The logic for +selecting a split point goes to great lengths to avoid having duplicates +span more than one page, and almost always manages to pick a split point +between two user-key-distinct tuples, accepting a completely lopsided split +if it must. When a page that's already full of duplicates must be split, +the fallback strategy assumes that duplicates are mostly inserted in +ascending heap TID order. The page is split in a way that leaves the left +half of the page mostly full, and the right half of the page mostly empty. + Notes About Data Representation ------------------------------- diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c index 7d481f0ff2..a444619091 100644 --- a/src/backend/access/nbtree/nbtinsert.c +++ b/src/backend/access/nbtree/nbtinsert.c @@ -28,26 +28,6 @@ /* Minimum tree height for application of fastpath optimization */ #define BTREE_FASTPATH_MIN_LEVEL 2 -typedef struct -{ - /* context data for _bt_checksplitloc */ - Size newitemsz; /* size of new item to be inserted */ - int fillfactor; /* needed when splitting rightmost page */ - bool is_leaf; /* T if splitting a leaf page */ - bool is_rightmost; /* T if splitting a rightmost page */ - OffsetNumber newitemoff; /* where the new item is to be inserted */ - int leftspace; /* space available for items on left page */ - int rightspace; /* space available for items on right page */ - int olddataitemstotal; /* space taken by old items */ - - bool have_split; /* found a valid split? 
*/ - - /* these fields valid only if have_split is true */ - bool newitemonleft; /* new item on left or right of best split */ - OffsetNumber firstright; /* best split point */ - int best_delta; /* best size delta so far */ -} FindSplitData; - static Buffer _bt_newroot(Relation rel, Buffer lbuf, Buffer rbuf); @@ -76,13 +56,6 @@ static Buffer _bt_split(Relation rel, BTScanInsert itup_key, Buffer buf, Size newitemsz, IndexTuple newitem, bool newitemonleft); static void _bt_insert_parent(Relation rel, Buffer buf, Buffer rbuf, BTStack stack, bool is_root, bool is_only); -static OffsetNumber _bt_findsplitloc(Relation rel, Page page, - OffsetNumber newitemoff, - Size newitemsz, - bool *newitemonleft); -static void _bt_checksplitloc(FindSplitData *state, - OffsetNumber firstoldonright, bool newitemonleft, - int dataitemstoleft, Size firstoldonrightsz); static bool _bt_pgaddtup(Page page, Size itemsize, IndexTuple itup, OffsetNumber itup_off); static bool _bt_isequal(TupleDesc itupdesc, BTScanInsert itup_key, @@ -324,7 +297,9 @@ top: * Sets state in itup_key sufficient for later _bt_findinsertloc() call to * reuse most of the work of our initial binary search to find conflicting * tuples. This won't be usable if caller's tuple is determined to not belong - * on buf following scantid being filled-in. + * on buf following scantid being filled-in, but that should be very rare in + * practice, since the logic for choosing a leaf split point works hard to + * avoid splitting within a group of duplicates. * * Returns InvalidTransactionId if there is no conflict, else an xact ID * we must wait for to see if it commits a conflicting tuple. If an actual @@ -913,8 +888,7 @@ _bt_useduplicatepage(Relation rel, Relation heapRel, Buffer buf, * * This recursive procedure does the following things: * - * + if necessary, splits the target page (making sure that the - * split is equitable as far as post-insert free space goes). + * + if necessary, splits the target page. 
* + inserts the tuple. * + if the page was split, pops the parent stack, and finds the * right place to insert the new child pointer (by walking @@ -1009,7 +983,7 @@ _bt_insertonpg(Relation rel, /* Choose the split point */ firstright = _bt_findsplitloc(rel, page, - newitemoff, itemsz, + newitemoff, itemsz, itup, &newitemonleft); /* split the buffer into left and right halves */ @@ -1345,6 +1319,11 @@ _bt_split(Relation rel, BTScanInsert itup_key, Buffer buf, Buffer cbuf, * to go into the new right page, or possibly a truncated version if this * is a leaf page split. This might be either the existing data item at * position firstright, or the incoming tuple. + * + * Lehman and Yao use the last left item as the new high key for the left + * page (on leaf level). Despite appearances, the new high key is + * generated in a way that's consistent with their approach. See comments + * above _bt_findsplitloc for an explanation. */ leftoff = P_HIKEY; if (!newitemonleft && newitemoff == firstright) @@ -1684,264 +1663,6 @@ _bt_split(Relation rel, BTScanInsert itup_key, Buffer buf, Buffer cbuf, return rbuf; } -/* - * _bt_findsplitloc() -- find an appropriate place to split a page. - * - * The idea here is to equalize the free space that will be on each split - * page, *after accounting for the inserted tuple*. (If we fail to account - * for it, we might find ourselves with too little room on the page that - * it needs to go into!) - * - * If the page is the rightmost page on its level, we instead try to arrange - * to leave the left split page fillfactor% full. In this way, when we are - * inserting successively increasing keys (consider sequences, timestamps, - * etc) we will end up with a tree whose pages are about fillfactor% full, - * instead of the 50% full result that we'd get without this special case. - * This is the same as nbtsort.c produces for a newly-created tree. Note - * that leaf and nonleaf pages use different fillfactors. 
- * - * We are passed the intended insert position of the new tuple, expressed as - * the offsetnumber of the tuple it must go in front of. (This could be - * maxoff+1 if the tuple is to go at the end.) - * - * We return the index of the first existing tuple that should go on the - * righthand page, plus a boolean indicating whether the new tuple goes on - * the left or right page. The bool is necessary to disambiguate the case - * where firstright == newitemoff. - */ -static OffsetNumber -_bt_findsplitloc(Relation rel, - Page page, - OffsetNumber newitemoff, - Size newitemsz, - bool *newitemonleft) -{ - BTPageOpaque opaque; - OffsetNumber offnum; - OffsetNumber maxoff; - ItemId itemid; - FindSplitData state; - int leftspace, - rightspace, - goodenough, - olddataitemstotal, - olddataitemstoleft; - bool goodenoughfound; - - opaque = (BTPageOpaque) PageGetSpecialPointer(page); - - /* Passed-in newitemsz is MAXALIGNED but does not include line pointer */ - newitemsz += sizeof(ItemIdData); - - /* Total free space available on a btree page, after fixed overhead */ - leftspace = rightspace = - PageGetPageSize(page) - SizeOfPageHeaderData - - MAXALIGN(sizeof(BTPageOpaqueData)); - - /* The right page will have the same high key as the old page */ - if (!P_RIGHTMOST(opaque)) - { - itemid = PageGetItemId(page, P_HIKEY); - rightspace -= (int) (MAXALIGN(ItemIdGetLength(itemid)) + - sizeof(ItemIdData)); - } - - /* Count up total space in data items without actually scanning 'em */ - olddataitemstotal = rightspace - (int) PageGetExactFreeSpace(page); - - state.newitemsz = newitemsz; - state.is_leaf = P_ISLEAF(opaque); - state.is_rightmost = P_RIGHTMOST(opaque); - state.have_split = false; - if (state.is_leaf) - state.fillfactor = RelationGetFillFactor(rel, - BTREE_DEFAULT_FILLFACTOR); - else - state.fillfactor = BTREE_NONLEAF_FILLFACTOR; - state.newitemonleft = false; /* these just to keep compiler quiet */ - state.firstright = 0; - state.best_delta = 0; - state.leftspace = 
leftspace; - state.rightspace = rightspace; - state.olddataitemstotal = olddataitemstotal; - state.newitemoff = newitemoff; - - /* - * Finding the best possible split would require checking all the possible - * split points, because of the high-key and left-key special cases. - * That's probably more work than it's worth; instead, stop as soon as we - * find a "good-enough" split, where good-enough is defined as an - * imbalance in free space of no more than pagesize/16 (arbitrary...) This - * should let us stop near the middle on most pages, instead of plowing to - * the end. - */ - goodenough = leftspace / 16; - - /* - * Scan through the data items and calculate space usage for a split at - * each possible position. - */ - olddataitemstoleft = 0; - goodenoughfound = false; - maxoff = PageGetMaxOffsetNumber(page); - - for (offnum = P_FIRSTDATAKEY(opaque); - offnum <= maxoff; - offnum = OffsetNumberNext(offnum)) - { - Size itemsz; - - itemid = PageGetItemId(page, offnum); - itemsz = MAXALIGN(ItemIdGetLength(itemid)) + sizeof(ItemIdData); - - /* - * Will the new item go to left or right of split? - */ - if (offnum > newitemoff) - _bt_checksplitloc(&state, offnum, true, - olddataitemstoleft, itemsz); - - else if (offnum < newitemoff) - _bt_checksplitloc(&state, offnum, false, - olddataitemstoleft, itemsz); - else - { - /* need to try it both ways! */ - _bt_checksplitloc(&state, offnum, true, - olddataitemstoleft, itemsz); - - _bt_checksplitloc(&state, offnum, false, - olddataitemstoleft, itemsz); - } - - /* Abort scan once we find a good-enough choice */ - if (state.have_split && state.best_delta <= goodenough) - { - goodenoughfound = true; - break; - } - - olddataitemstoleft += itemsz; - } - - /* - * If the new item goes as the last item, check for splitting so that all - * the old items go to the left page and the new item goes to the right - * page. 
- */ - if (newitemoff > maxoff && !goodenoughfound) - _bt_checksplitloc(&state, newitemoff, false, olddataitemstotal, 0); - - /* - * I believe it is not possible to fail to find a feasible split, but just - * in case ... - */ - if (!state.have_split) - elog(ERROR, "could not find a feasible split point for index \"%s\"", - RelationGetRelationName(rel)); - - *newitemonleft = state.newitemonleft; - return state.firstright; -} - -/* - * Subroutine to analyze a particular possible split choice (ie, firstright - * and newitemonleft settings), and record the best split so far in *state. - * - * firstoldonright is the offset of the first item on the original page - * that goes to the right page, and firstoldonrightsz is the size of that - * tuple. firstoldonright can be > max offset, which means that all the old - * items go to the left page and only the new item goes to the right page. - * In that case, firstoldonrightsz is not used. - * - * olddataitemstoleft is the total size of all old items to the left of - * firstoldonright. - */ -static void -_bt_checksplitloc(FindSplitData *state, - OffsetNumber firstoldonright, - bool newitemonleft, - int olddataitemstoleft, - Size firstoldonrightsz) -{ - int leftfree, - rightfree; - Size firstrightitemsz; - bool newitemisfirstonright; - - /* Is the new item going to be the first item on the right page? */ - newitemisfirstonright = (firstoldonright == state->newitemoff - && !newitemonleft); - - if (newitemisfirstonright) - firstrightitemsz = state->newitemsz; - else - firstrightitemsz = firstoldonrightsz; - - /* Account for all the old tuples */ - leftfree = state->leftspace - olddataitemstoleft; - rightfree = state->rightspace - - (state->olddataitemstotal - olddataitemstoleft); - - /* - * The first item on the right page becomes the high key of the left page; - * therefore it counts against left space as well as right space. 
When - * index has included attributes, then those attributes of left page high - * key will be truncated leaving that page with slightly more free space. - * However, that shouldn't affect our ability to find valid split - * location, because anyway split location should exists even without high - * key truncation. - */ - leftfree -= firstrightitemsz; - - /* account for the new item */ - if (newitemonleft) - leftfree -= (int) state->newitemsz; - else - rightfree -= (int) state->newitemsz; - - /* - * If we are not on the leaf level, we will be able to discard the key - * data from the first item that winds up on the right page. - */ - if (!state->is_leaf) - rightfree += (int) firstrightitemsz - - (int) (MAXALIGN(sizeof(IndexTupleData)) + sizeof(ItemIdData)); - - /* - * If feasible split point, remember best delta. - */ - if (leftfree >= 0 && rightfree >= 0) - { - int delta; - - if (state->is_rightmost) - { - /* - * If splitting a rightmost page, try to put (100-fillfactor)% of - * free space on left page. See comments for _bt_findsplitloc. - */ - delta = (state->fillfactor * leftfree) - - ((100 - state->fillfactor) * rightfree); - } - else - { - /* Otherwise, aim for equal free space on both sides */ - delta = leftfree - rightfree; - } - - if (delta < 0) - delta = -delta; - if (!state->have_split || delta < state->best_delta) - { - state->have_split = true; - state->newitemonleft = newitemonleft; - state->firstright = firstoldonright; - state->best_delta = delta; - } - } -} - /* * _bt_insert_parent() -- Insert downlink into parent after a page split. * diff --git a/src/backend/access/nbtree/nbtsplitloc.c b/src/backend/access/nbtree/nbtsplitloc.c new file mode 100644 index 0000000000..015591eb87 --- /dev/null +++ b/src/backend/access/nbtree/nbtsplitloc.c @@ -0,0 +1,855 @@ +/*------------------------------------------------------------------------- + * + * nbtsplitloc.c + * Choose split point code for Postgres btree implementation. 
+ * + * Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * + * IDENTIFICATION + * src/backend/access/nbtree/nbtsplitloc.c + * + *------------------------------------------------------------------------- + */ +#include "postgres.h" + +#include "access/nbtree.h" +#include "storage/lmgr.h" + +/* _bt_dofindsplitloc limits on suffix truncation split interval */ +#define MAX_LEAF_SPLIT_POINTS 9 +#define MAX_INTERNAL_SPLIT_POINTS 3 + +typedef enum +{ + /* strategy to use for a call to FindSplitData */ + SPLIT_DEFAULT, /* give some weight to truncation */ + SPLIT_MANY_DUPLICATES, /* find minimally distinguishing point */ + SPLIT_SINGLE_VALUE /* leave left page almost full */ +} SplitMode; + +typedef struct +{ + /* FindSplitData candidate split */ + int delta; /* size delta */ + bool newitemonleft; /* new item on left or right of split */ + OffsetNumber firstright; /* split point */ +} SplitPoint; + +typedef struct +{ + /* context data for _bt_checksplitloc */ + Size newitemsz; /* size of new item to be inserted */ + double propfullonleft; /* want propfullonleft * leftfree on left */ + int gooddelta; /* "good" left/right space delta cut-off */ + bool is_leaf; /* T if splitting a leaf page */ + bool is_weighted; /* T if propfullonleft used by split */ + OffsetNumber newitemoff; /* where the new item is to be inserted */ + int leftspace; /* space available for items on left page */ + int rightspace; /* space available for items on right page */ + int olddataitemstotal; /* space taken by old items */ + + int maxsplits; /* Maximum number of splits */ + int nsplits; /* Current number of splits */ + SplitPoint *splits; /* Sorted by delta */ +} FindSplitData; + +static OffsetNumber _bt_dofindsplitloc(Relation rel, Page page, + SplitMode mode, OffsetNumber newitemoff, + Size newitemsz, IndexTuple newitem, bool *newitemonleft); +static int _bt_checksplitloc(FindSplitData *state, + 
OffsetNumber firstoldonright, bool newitemonleft, + int dataitemstoleft, Size firstoldonrightsz); +static OffsetNumber _bt_bestsplitloc(Relation rel, Page page, + FindSplitData *state, + int perfectpenalty, + OffsetNumber newitemoff, + IndexTuple newitem, bool *newitemonleft); +static int _bt_perfect_penalty(Relation rel, Page page, SplitMode mode, + FindSplitData *state, OffsetNumber newitemoff, + IndexTuple newitem, SplitMode *secondmode); +static int _bt_split_penalty(Relation rel, Page page, OffsetNumber newitemoff, + IndexTuple newitem, SplitPoint *split, bool is_leaf); + + +/* + * _bt_findsplitloc() -- find an appropriate place to split a page. + * + * The main goal here is to equalize the free space that will be on each + * split page, *after accounting for the inserted tuple*. (If we fail to + * account for it, we might find ourselves with too little room on the page + * that it needs to go into!) + * + * We are passed the intended insert position of the new tuple, expressed as + * the offsetnumber of the tuple it must go in front of (this could be + * maxoff+1 if the tuple is to go at the end). The new tuple itself is also + * passed, since it's needed to give some weight to how effective suffix + * truncation will be. The implementation picks the split point that + * maximizes the effectiveness of suffix truncation from a small list of + * alternative candidate split points that leave each side of the split with + * about the same share of free space. Suffix truncation is secondary to + * equalizing free space, except in cases with large numbers of duplicates. + * Note that it is always assumed that caller goes on to perform truncation, + * even with pg_upgrade'd indexes where that isn't actually the case + * (!heapkeyspace indexes). + * + * We return the index of the first existing tuple that should go on the + * righthand page, plus a boolean indicating whether the new tuple goes on + * the left or right page. 
The bool is necessary to disambiguate the case + * where firstright == newitemoff. + * + * The high key for the left page is formed using the first item on the + * right page, which may seem to be contrary to Lehman & Yao's approach of + * using the left page's last item as its new high key on the leaf level. + * It isn't, though: suffix truncation will leave the left page's high key + * fully equal to the last item on the left page when two tuples with equal + * key values (excluding heap TID) enclose the split point. It isn't + * necessary for a new leaf high key to be equal to the last item on the + * left for the L&Y "subtree" invariant to hold. It's sufficient to make + * sure that the new leaf high key is strictly less than the first item on + * the right leaf page, and greater than the last item on the left page. + * When suffix truncation isn't possible, L&Y's exact approach to leaf + * splits is taken (actually, a tuple with all the keys from firstright but + * the heap TID from lastleft is formed, so as to not introduce a special + * case). + * + * Starting with the first right item minimizes the divergence between leaf + * and internal splits when checking if a candidate split point is legal. + * It is also inherently necessary for suffix truncation, since truncation + * is a subtractive process that specifically requires lastleft and + * firstright inputs. + */ +OffsetNumber +_bt_findsplitloc(Relation rel, + Page page, + OffsetNumber newitemoff, + Size newitemsz, + IndexTuple newitem, + bool *newitemonleft) +{ + /* Initial call always uses SPLIT_DEFAULT */ + return _bt_dofindsplitloc(rel, page, SPLIT_DEFAULT, newitemoff, newitemsz, + newitem, newitemonleft); +} + +/* + * _bt_dofindsplitloc() -- guts of find split location code. + * + * We're always initially called in default mode, which is primarily + * concerned with equalizing available free space in each half of the split. 
+ * However, a recursive invocation of _bt_dofindsplitloc() will follow in + * cases with a large number of duplicates around the space-optimal split + * point. + * + * We give some weight to suffix truncation in deciding a split point + * on leaf pages. We try to select a point where a distinguishing attribute + * appears earlier in the new high key for the left side of the split, in + * order to maximize the number of trailing attributes that can be truncated + * away. Initially, only candidate split points that imply an acceptable + * balance of free space on each side are considered. This is even useful + * with pages that only have a single (non-TID) attribute, since it's + * helpful to avoid appending an explicit heap TID attribute to the new + * pivot tuple (high key/downlink) when it cannot actually be truncated. + * + * We do all we can to avoid having to append a heap TID in the new high + * key. We may have to call ourselves recursively in many duplicates mode. + * This happens when a heap TID would otherwise be appended, but the page + * isn't completely full of logical duplicates (there may be as few as two + * distinct values). Many duplicates mode has no hard requirements for + * space utilization, though it still keeps the use of space balanced as a + * non-binding secondary goal. This significantly improves fan-out in + * practice, at least with most affected workloads. + * + * If the page is the rightmost page on its level, we instead try to arrange + * to leave the left split page fillfactor% full. In this way, when we are + * inserting successively increasing keys (consider sequences, timestamps, + * etc) we will end up with a tree whose pages are about fillfactor% full, + * instead of the 50% full result that we'd get without this special case. + * This is the same as nbtsort.c produces for a newly-created tree. Note + * that leaf and nonleaf pages use different fillfactors. 
+ * + * When called recursively in single value mode we try to arrange to leave + * the left split page even more full than in the fillfactor% rightmost page + * case. This maximizes space utilization in cases where tuples with the + * same attribute values span across many pages. Newly inserted duplicates + * will tend to have higher heap TID values, so we'll end up splitting to + * the right in the manner of ascending insertions of monotonically + * increasing values. See nbtree/README for more information about suffix + * truncation, and how a split point is chosen. + */ +static OffsetNumber +_bt_dofindsplitloc(Relation rel, + Page page, + SplitMode mode, + OffsetNumber newitemoff, + Size newitemsz, + IndexTuple newitem, + bool *newitemonleft) +{ + BTPageOpaque opaque; + OffsetNumber offnum; + OffsetNumber maxoff; + ItemId itemid; + FindSplitData state; + int leftspace, + rightspace, + olddataitemstotal, + olddataitemstoleft, + perfectpenalty, + leaffillfactor; + bool gooddeltafound; + SplitPoint splits[MAX_LEAF_SPLIT_POINTS]; + SplitMode secondmode; + OffsetNumber finalfirstright; + + opaque = (BTPageOpaque) PageGetSpecialPointer(page); + maxoff = PageGetMaxOffsetNumber(page); + + /* Total free space available on a btree page, after fixed overhead */ + leftspace = rightspace = + PageGetPageSize(page) - SizeOfPageHeaderData - + MAXALIGN(sizeof(BTPageOpaqueData)); + + /* The right page will have the same high key as the old page */ + if (!P_RIGHTMOST(opaque)) + { + itemid = PageGetItemId(page, P_HIKEY); + rightspace -= (int) (MAXALIGN(ItemIdGetLength(itemid)) + + sizeof(ItemIdData)); + } + + /* Count up total space in data items without actually scanning 'em */ + olddataitemstotal = rightspace - (int) PageGetExactFreeSpace(page); + leaffillfactor = RelationGetFillFactor(rel, BTREE_DEFAULT_FILLFACTOR); + + /* Passed-in newitemsz is MAXALIGNED but does not include line pointer */ + state.newitemsz = newitemsz + sizeof(ItemIdData); + state.is_leaf = 
P_ISLEAF(opaque); + state.leftspace = leftspace; + state.rightspace = rightspace; + state.olddataitemstotal = olddataitemstotal; + state.newitemoff = newitemoff; + state.splits = splits; + state.nsplits = 0; + if (!state.is_leaf) + { + Assert(mode == SPLIT_DEFAULT); + + /* propfullonleft only used on rightmost page */ + state.propfullonleft = BTREE_NONLEAF_FILLFACTOR / 100.0; + state.is_weighted = P_RIGHTMOST(opaque); + /* See is_leaf default mode remarks on maxsplits */ + state.maxsplits = MAX_INTERNAL_SPLIT_POINTS; + } + else if (mode == SPLIT_DEFAULT) + { + if (P_RIGHTMOST(opaque)) + { + /* + * Rightmost page splits are always weighted. Extreme contention + * on the rightmost page is relatively common, so we treat it as a + * special case. + */ + state.propfullonleft = leaffillfactor / 100.0; + state.is_weighted = true; + } + else + { + /* propfullonleft won't be used, but be tidy */ + state.propfullonleft = 0.50; + state.is_weighted = false; + } + + /* + * Set an initial limit on the split interval/number of candidate + * split points as appropriate. The "Prefix B-Trees" paper refers to + * this as sigma l for leaf splits and sigma b for internal ("branch") + * splits. It's hard to provide a theoretical justification for the + * size of the split interval, though it's clear that a small split + * interval improves space utilization. + */ + state.maxsplits = Min(Max(3, maxoff * 0.05), MAX_LEAF_SPLIT_POINTS); + } + else if (mode == SPLIT_MANY_DUPLICATES) + { + state.propfullonleft = leaffillfactor / 100.0; + state.is_weighted = P_RIGHTMOST(opaque); + state.maxsplits = maxoff + 2; + state.splits = palloc(sizeof(SplitPoint) * state.maxsplits); + } + else + { + Assert(mode == SPLIT_SINGLE_VALUE); + + state.propfullonleft = BTREE_SINGLEVAL_FILLFACTOR / 100.0; + state.is_weighted = true; + state.maxsplits = 1; + } + + /* + * Finding the best possible split would require checking all the possible + * split points, because of the high-key and left-key special cases. 
+ * That's probably more work than it's worth outside of many duplicates + * mode; instead, stop as soon as we get past the "good" splits, where good + * is defined as an imbalance in free space of no more than pagesize/16 + * (arbitrary...). This should let us stop just past the middle on most + * pages, instead of plowing to the end. Many duplicates mode must + * consider all possible choices, and so does not use this threshold for + * anything (every delta is sufficiently good to be considered by many + * duplicates mode). + * + * Note: Weighted candidate splits have weighted delta values that make + * more splits appear to be "good". A weighted search with a + * propfullonleft of 0.5 is not quite identical to the unweighted case. It + * will have delta values for candidate split points that are half those + * of the corresponding candidate split points for an unweighted search + * of the same page, and so will consider more split points before + * determining that remaining splits are no good, and falling out of the + * loop. It's very likely that a weighted split will need to go to the + * end of the page anyway, though. + */ + if (mode != SPLIT_MANY_DUPLICATES) + state.gooddelta = leftspace / 16; + else + state.gooddelta = INT_MAX; + + /* + * Scan through the data items and calculate space usage for a split at + * each possible position + */ + olddataitemstoleft = 0; + gooddeltafound = false; + + for (offnum = P_FIRSTDATAKEY(opaque); + offnum <= maxoff; + offnum = OffsetNumberNext(offnum)) + { + Size itemsz; + int delta; + + itemid = PageGetItemId(page, offnum); + itemsz = MAXALIGN(ItemIdGetLength(itemid)) + sizeof(ItemIdData); + + /* + * Will the new item go to left or right of split? + */ + if (offnum > newitemoff) + delta = _bt_checksplitloc(&state, offnum, true, + olddataitemstoleft, itemsz); + else if (offnum < newitemoff) + delta = _bt_checksplitloc(&state, offnum, false, + olddataitemstoleft, itemsz); + else + { + /* need to try it both ways! 
*/ + (void) _bt_checksplitloc(&state, offnum, true, + olddataitemstoleft, itemsz); + + delta = _bt_checksplitloc(&state, offnum, false, + olddataitemstoleft, itemsz); + } + + /* Record when good choice found */ + if (state.nsplits > 0 && state.splits[0].delta <= state.gooddelta) + gooddeltafound = true; + + /* + * Abort scan once we've found at least one "good" choice, provided + * we've reached the point where remaining candidates don't look good. + */ + if (gooddeltafound && delta > state.gooddelta) + break; + + olddataitemstoleft += itemsz; + } + + /* + * If the new item goes as the last item, check for splitting so that all + * the old items go to the left page and the new item goes to the right + * page. + */ + if (newitemoff > maxoff && !gooddeltafound) + _bt_checksplitloc(&state, newitemoff, false, olddataitemstotal, 0); + + /* + * I believe it is not possible to fail to find a feasible split, but just + * in case ... + */ + if (state.nsplits == 0) + elog(ERROR, "could not find a feasible split point for index \"%s\"", + RelationGetRelationName(rel)); + + /* + * Search among acceptable split points for the entry with the lowest + * penalty. See _bt_split_penalty() for the definition of penalty. The + * goal here is to choose a split point whose new high key is amenable to + * being made smaller by suffix truncation, or is already small. + * + * First find lowest possible penalty among acceptable split points -- the + * "perfect" penalty. The perfect penalty often saves _bt_bestsplitloc() + * additional work around calculating penalties. This is also a + * convenient point to determine if a second pass over page is required. 
+ */ + perfectpenalty = _bt_perfect_penalty(rel, page, mode, &state, newitemoff, + newitem, &secondmode); + + /* Perform second pass over page when _bt_perfect_penalty() tells us to */ + if (secondmode != SPLIT_DEFAULT) + return _bt_dofindsplitloc(rel, page, secondmode, newitemoff, + newitemsz, newitem, newitemonleft); + + /* + * Search among acceptable split points for the entry that has the lowest + * penalty, and thus maximizes fan-out. Sets *newitemonleft for us. + */ + finalfirstright = _bt_bestsplitloc(rel, page, &state, perfectpenalty, + newitemoff, newitem, newitemonleft); + /* Be tidy */ + if (state.splits != splits) + pfree(state.splits); + + return finalfirstright; +} + +/* + * Subroutine to analyze a particular possible split choice (ie, firstright + * and newitemonleft settings), and record the best split so far in *state. + * + * firstoldonright is the offset of the first item on the original page + * that goes to the right page, and firstoldonrightsz is the size of that + * tuple. firstoldonright can be > max offset, which means that all the old + * items go to the left page and only the new item goes to the right page. + * In that case, firstoldonrightsz is not used. + * + * olddataitemstoleft is the total size of all old items to the left of + * firstoldonright. + * + * Returns delta between space that will be left free on left and right side + * of split. + */ +static int +_bt_checksplitloc(FindSplitData *state, + OffsetNumber firstoldonright, + bool newitemonleft, + int olddataitemstoleft, + Size firstoldonrightsz) +{ + int leftfree, + rightfree; + Size firstrightitemsz; + bool newitemisfirstonright; + + /* Is the new item going to be the first item on the right page? 
*/ + newitemisfirstonright = (firstoldonright == state->newitemoff + && !newitemonleft); + + if (newitemisfirstonright) + firstrightitemsz = state->newitemsz; + else + firstrightitemsz = firstoldonrightsz; + + /* Account for all the old tuples */ + leftfree = state->leftspace - olddataitemstoleft; + rightfree = state->rightspace - + (state->olddataitemstotal - olddataitemstoleft); + + /* + * The first item on the right page becomes the high key of the left page; + * therefore it counts against left space as well as right space (we + * cannot assume that suffix truncation will make it any smaller). When + * the index has included attributes, those attributes of the left page's + * high key will be truncated, leaving that page with slightly more free + * space. However, that shouldn't affect our ability to find a valid + * split location, since we err in the direction of being pessimistic + * about free space on the left half. Besides, even when suffix + * truncation of non-TID attributes occurs, the new high key often won't + * even be a single MAXALIGN() quantum smaller than the firstright tuple + * it's based on. + * + * If we are on the leaf level, assume that suffix truncation cannot avoid + * adding a heap TID to the left half's new high key. In practice the new + * high key will often be smaller and will rarely be larger, but + * conservatively assume the worst case. + */ + if (state->is_leaf) + leftfree -= (int) (firstrightitemsz + + MAXALIGN(sizeof(ItemPointerData))); + else + leftfree -= (int) firstrightitemsz; + + /* account for the new item */ + if (newitemonleft) + leftfree -= (int) state->newitemsz; + else + rightfree -= (int) state->newitemsz; + + /* + * If we are not on the leaf level, we will be able to discard the key + * data from the first item that winds up on the right page.
+ */ + if (!state->is_leaf) + rightfree += (int) firstrightitemsz - + (int) (MAXALIGN(sizeof(IndexTupleData)) + sizeof(ItemIdData)); + + /* + * If this is a feasible split point with a lower delta than that of the + * most marginal split point so far, or we haven't yet run out of space + * for split points, remember it. + */ + if (leftfree >= 0 && rightfree >= 0) + { + int delta; + + if (state->is_weighted) + delta = state->propfullonleft * leftfree - + (1.0 - state->propfullonleft) * rightfree; + else + delta = leftfree - rightfree; + + if (delta < 0) + delta = -delta; + + /* + * Optimization: Don't recognize differences among marginal split + * points that are unlikely to end up being used anyway + */ + if (delta > state->gooddelta) + delta = state->gooddelta + 1; + if (state->nsplits < state->maxsplits || + delta < state->splits[state->nsplits - 1].delta) + { + SplitPoint newsplit; + int j; + + newsplit.delta = delta; + newsplit.newitemonleft = newitemonleft; + newsplit.firstright = firstoldonright; + + /* + * Make space at the end of the state array for a new candidate + * split point if we haven't already reached the maximum number of + * split points. + */ + if (state->nsplits < state->maxsplits) + state->nsplits++; + + /* + * Replace the final item in the nsplits-wise array. The final + * item is either a garbage still-uninitialized entry, or the most + * marginal real entry when we already have as many split points + * as we're willing to consider. + */ + for (j = state->nsplits - 1; + j > 0 && state->splits[j - 1].delta > newsplit.delta; + j--) + { + state->splits[j] = state->splits[j - 1]; + } + state->splits[j] = newsplit; + } + + return delta; + } + + return INT_MAX; +} + +/* + * Subroutine to find the "best" split point among an array of acceptable + * candidate split points that split without there being an excessively high + * delta between the space left free on the left and right halves.
The "best" + * split point is the split point with the lowest penalty, which is an + * abstract idea whose definition varies depending on whether we're splitting + * at the leaf level, or an internal level. See _bt_split_penalty() for the + * definition. + * + * "perfectpenalty" is assumed to be the lowest possible penalty among + * candidate split points. This allows us to return early without wasting + * cycles on calculating the first differing attribute for all candidate + * splits when that clearly cannot improve our choice. This optimization is + * important for several common cases, including insertion into a primary key + * index on an auto-incremented or monotonically increasing integer column. + * + * We return the index of the first existing tuple that should go on the + * righthand page, plus a boolean indicating if new item is on left of split + * point. + */ +static OffsetNumber +_bt_bestsplitloc(Relation rel, + Page page, + FindSplitData *state, + int perfectpenalty, + OffsetNumber newitemoff, + IndexTuple newitem, + bool *newitemonleft) +{ + int bestpenalty, + lowsplit; + + /* + * No point in calculating penalty when there's only one choice. Note + * that single value mode always has one choice. + */ + if (state->nsplits == 1) + { + *newitemonleft = state->splits[0].newitemonleft; + return state->splits[0].firstright; + } + + bestpenalty = INT_MAX; + lowsplit = 0; + for (int i = lowsplit; i < state->nsplits; i++) + { + int penalty; + + penalty = _bt_split_penalty(rel, page, newitemoff, newitem, + state->splits + i, state->is_leaf); + + if (penalty <= perfectpenalty) + { + bestpenalty = penalty; + lowsplit = i; + break; + } + + if (penalty < bestpenalty) + { + bestpenalty = penalty; + lowsplit = i; + } + } + + *newitemonleft = state->splits[lowsplit].newitemonleft; + return state->splits[lowsplit].firstright; +} + +/* + * Subroutine to find the lowest possible penalty for any acceptable candidate + * split point. 
This may be lower than any real penalty for any of the + * candidate split points, in which case the optimization is ineffective. + * Split penalties are discrete rather than continuous, so an + * actually-obtainable penalty is common. + * + * This is also a convenient point to decide to either finish splitting + * the page using the default strategy, or, alternatively, to do a second pass + * over page using a different strategy. (This only happens with leaf pages.) + */ +static int +_bt_perfect_penalty(Relation rel, Page page, SplitMode mode, + FindSplitData *state, OffsetNumber newitemoff, + IndexTuple newitem, SplitMode *secondmode) +{ + ItemId itemid; + OffsetNumber center; + IndexTuple leftmost, + rightmost; + int perfectpenalty; + int indnkeyatts = IndexRelationGetNumberOfKeyAttributes(rel); + + /* Assume that a second pass over page won't be required for now */ + *secondmode = SPLIT_DEFAULT; + + /* + * There are far fewer candidate split points when splitting an internal + * page, so we can afford to be exhaustive. Only give up when the pivot + * that will be inserted into the parent is as small as possible. + */ + if (!state->is_leaf) + return MAXALIGN(sizeof(IndexTupleData) + 1); + + /* + * During a many duplicates pass over page, we settle for a "perfect" + * split point that merely avoids appending a heap TID in new pivot. + * Appending a heap TID is harmful enough to fan-out that it's worth + * avoiding at all costs, but it doesn't make sense to go to those lengths + * to also be able to truncate an extra, earlier attribute. + * + * Single value mode splits only occur when appending a heap TID was + * already deemed necessary. Don't waste any more cycles trying to avoid + * that outcome. + */ + if (mode == SPLIT_MANY_DUPLICATES) + return indnkeyatts; + else if (mode == SPLIT_SINGLE_VALUE) + return indnkeyatts + 1; + + /* + * Complicated though common case -- leaf page default mode split.
+ * + * Iterate from the end of split array to the start, in search of the + * firstright-wise leftmost and rightmost entries among acceptable split + * points. The split point with the lowest delta is at the start of the + * array. It is deemed to be the split point whose firstright offset is + * at the center. Split points with firstright offsets at both the left + * and right extremes among acceptable split points will be found at the + * end of caller's array. + */ + leftmost = NULL; + rightmost = NULL; + center = state->splits[0].firstright; + + /* + * Leaf split points can be thought of as points _between_ tuples on the + * original unsplit page image, at least if you pretend that the incoming + * tuple is already on the page to be split (imagine that the original + * unsplit page actually had enough space to fit the incoming tuple). The + * rightmost tuple is the tuple that is immediately to the right of a + * split point that is itself rightmost. Likewise, the leftmost tuple is + * the tuple to the left of the leftmost split point. + * + * When there are very few candidates, no sensible comparison can be made + * here, resulting in caller selecting lowest delta/the center split point + * by default. Typically, leftmost and rightmost tuples will be located + * almost immediately. 
+ */ + perfectpenalty = indnkeyatts; + for (int j = state->nsplits - 1; j > 1; j--) + { + SplitPoint *split = state->splits + j; + + if (!leftmost && split->firstright <= center) + { + if (split->newitemonleft && newitemoff == split->firstright) + leftmost = newitem; + else + { + itemid = PageGetItemId(page, + OffsetNumberPrev(split->firstright)); + leftmost = (IndexTuple) PageGetItem(page, itemid); + } + } + + if (!rightmost && split->firstright >= center) + { + if (!split->newitemonleft && newitemoff == split->firstright) + rightmost = newitem; + else + { + itemid = PageGetItemId(page, split->firstright); + rightmost = (IndexTuple) PageGetItem(page, itemid); + } + } + + if (leftmost && rightmost) + { + Assert(leftmost != rightmost); + perfectpenalty = _bt_keep_natts_fast(rel, leftmost, rightmost); + break; + } + } + + /* + * Work out which type of second pass caller should perform, if any, when + * even their "perfect" penalty fails to avoid appending a heap TID to new + * pivot tuple + */ + if (perfectpenalty > indnkeyatts) + { + BTPageOpaque opaque; + OffsetNumber maxoff; + int origpagepenalty; + + opaque = (BTPageOpaque) PageGetSpecialPointer(page); + maxoff = PageGetMaxOffsetNumber(page); + + /* + * If page has many duplicates but is not entirely full of duplicates, + * a many duplicates mode pass will be performed. If page is entirely + * full of duplicates and it appears that the duplicates have been + * inserted in sequential order (i.e. heap TID order), a single value + * mode pass will be performed. + * + * Deliberately ignore new item here, since a split that leaves only + * one item on either page is often deemed unworkable by + * _bt_checksplitloc(). 
+ */ + itemid = PageGetItemId(page, P_FIRSTDATAKEY(opaque)); + leftmost = (IndexTuple) PageGetItem(page, itemid); + itemid = PageGetItemId(page, maxoff); + rightmost = (IndexTuple) PageGetItem(page, itemid); + origpagepenalty = _bt_keep_natts_fast(rel, leftmost, rightmost); + + if (origpagepenalty <= indnkeyatts) + *secondmode = SPLIT_MANY_DUPLICATES; + else if (P_RIGHTMOST(opaque)) + *secondmode = SPLIT_SINGLE_VALUE; + else + { + itemid = PageGetItemId(page, P_HIKEY); + if (ItemIdGetLength(itemid) != + IndexTupleSize(newitem) + MAXALIGN(sizeof(ItemPointerData))) + *secondmode = SPLIT_SINGLE_VALUE; + else + { + IndexTuple hikey; + + hikey = (IndexTuple) PageGetItem(page, itemid); + origpagepenalty = _bt_keep_natts_fast(rel, hikey, newitem); + if (origpagepenalty <= indnkeyatts) + *secondmode = SPLIT_SINGLE_VALUE; + } + } + + /* + * Have caller continue with original default mode split when new item + * does not appear to be a duplicate that's inserted into the + * rightmost page that duplicates can be found on (found by a scan + * that omits scantid). Evenly sharing space among each half of the + * split avoids pathological performance. + * + * Note that single value mode should generally still be used when + * duplicate insertions have heap TIDs that are slightly out of order. + * That's probably due to concurrency. + */ + } + + return perfectpenalty; +} + +/* + * Subroutine to find penalty for caller's candidate split point. + * + * On leaf pages, penalty is the attribute number that distinguishes each side + * of a split. It's the last attribute that needs to be included in new high + * key for left page. It can be greater than the number of key attributes in + * cases where a heap TID will need to be appended during truncation. + * + * On internal pages, penalty is simply the size of the first item on the + * right half of the split (excluding ItemId overhead) which becomes the new + * high key for the left page. 
+ */ +static int +_bt_split_penalty(Relation rel, Page page, OffsetNumber newitemoff, + IndexTuple newitem, SplitPoint *split, bool is_leaf) +{ + ItemId itemid; + IndexTuple firstright; + IndexTuple lastleft; + + if (!split->newitemonleft && newitemoff == split->firstright) + firstright = newitem; + else + { + itemid = PageGetItemId(page, split->firstright); + firstright = (IndexTuple) PageGetItem(page, itemid); + } + + if (!is_leaf) + return IndexTupleSize(firstright); + + if (split->newitemonleft && newitemoff == split->firstright) + lastleft = newitem; + else + { + OffsetNumber lastleftoff; + + lastleftoff = OffsetNumberPrev(split->firstright); + itemid = PageGetItemId(page, lastleftoff); + lastleft = (IndexTuple) PageGetItem(page, itemid); + } + + Assert(lastleft != firstright); + return _bt_keep_natts_fast(rel, lastleft, firstright); +} diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c index 15090b26d2..146de1b2e4 100644 --- a/src/backend/access/nbtree/nbtutils.c +++ b/src/backend/access/nbtree/nbtutils.c @@ -22,6 +22,7 @@ #include "access/relscan.h" #include "miscadmin.h" #include "utils/array.h" +#include "utils/datum.h" #include "utils/lsyscache.h" #include "utils/memutils.h" #include "utils/rel.h" @@ -2318,6 +2319,54 @@ _bt_keep_natts(Relation rel, IndexTuple lastleft, IndexTuple firstright, return keepnatts; } +/* + * _bt_keep_natts_fast - fast, approximate variant of _bt_keep_natts. + * + * This is exported so that a candidate split point can have its effect on + * suffix truncation inexpensively evaluated ahead of time when finding a + * split location. A naive bitwise approach to datum comparisons is used to + * save cycles. This is inherently approximate, but usually provides the same + * answer as the authoritative approach that _bt_keep_natts takes, since the + * vast majority of types in Postgres cannot be equal according to any + * available opclass unless they're bitwise equal. 
+ * + * This can return a number of attributes that is one greater than the + * number of key attributes for the index relation. This indicates that the + * caller must use a heap TID as a unique-ifier in new pivot tuple. + */ +int +_bt_keep_natts_fast(Relation rel, IndexTuple lastleft, IndexTuple firstright) +{ + TupleDesc itupdesc = RelationGetDescr(rel); + int keysz = IndexRelationGetNumberOfKeyAttributes(rel); + int keepnatts; + + keepnatts = 1; + for (int attnum = 1; attnum <= keysz; attnum++) + { + Datum datum1, + datum2; + bool isNull1, + isNull2; + Form_pg_attribute att; + + datum1 = index_getattr(lastleft, attnum, itupdesc, &isNull1); + datum2 = index_getattr(firstright, attnum, itupdesc, &isNull2); + att = TupleDescAttr(itupdesc, attnum - 1); + + if (isNull1 != isNull2) + break; + + if (!isNull1 && + !datumIsEqual(datum1, datum2, att->attbyval, att->attlen)) + break; + + keepnatts++; + } + + return keepnatts; +} + /* * _bt_check_natts() -- Verify tuple has expected number of attributes. * diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h index e7293bbaec..83298120b0 100644 --- a/src/include/access/nbtree.h +++ b/src/include/access/nbtree.h @@ -168,11 +168,15 @@ typedef struct BTMetaPageData * For pages above the leaf level, we use a fixed 70% fillfactor. * The fillfactor is applied during index build and when splitting * a rightmost page; when splitting non-rightmost pages we try to - * divide the data equally. + * divide the data equally. When splitting a page that's entirely + * filled with a single value (duplicates), the effective leaf-page + * fillfactor is 96%, regardless of whether the page is a rightmost + * page. 
*/ #define BTREE_MIN_FILLFACTOR 10 #define BTREE_DEFAULT_FILLFACTOR 90 #define BTREE_NONLEAF_FILLFACTOR 70 +#define BTREE_SINGLEVAL_FILLFACTOR 96 /* * In general, the btree code tries to localize its knowledge about @@ -681,6 +685,13 @@ extern bool _bt_doinsert(Relation rel, IndexTuple itup, extern Buffer _bt_getstackbuf(Relation rel, BTStack stack, int access); extern void _bt_finish_split(Relation rel, Buffer bbuf, BTStack stack); +/* + * prototypes for functions in nbtsplitloc.c + */ +extern OffsetNumber _bt_findsplitloc(Relation rel, Page page, + OffsetNumber newitemoff, Size newitemsz, IndexTuple newitem, + bool *newitemonleft); + /* * prototypes for functions in nbtpage.c */ @@ -747,6 +758,8 @@ extern bool btproperty(Oid index_oid, int attno, bool *res, bool *isnull); extern IndexTuple _bt_truncate(Relation rel, IndexTuple lastleft, IndexTuple firstright, BTScanInsert itup_key); +extern int _bt_keep_natts_fast(Relation rel, IndexTuple lastleft, + IndexTuple firstright); extern bool _bt_check_natts(Relation rel, bool heapkeyspace, Page page, OffsetNumber offnum); extern void _bt_check_third_page(Relation rel, Relation heap, -- 2.17.1