On Mon, Aug 11, 2025 at 10:16 AM Tomas Vondra <tomas@vondra.me> wrote:
> Perhaps. For me benchmarks are a way to learn about stuff and better
> understand the pros/cons of approaches. It's possible some of the
> changes will impact the characteristics, but I doubt it can change the
> fundamental differences due to the simple approach being limited to a
> single leaf page, etc.

I think that we're all now agreed that we want to take the complex
patch's approach. ISTM that that development makes comparative
benchmarking much less interesting, at least for the time being. IMV
we should focus on cleaning up the complex patch, and on closing out
at least a few open items.

The main thing that I'm personally interested in right now,
benchmark-wise, is cases where the complex patch doesn't perform as
well as expected when comparing (say) backwards scans against
equivalent forwards scans. In other words, I'm mostly interested in
getting an overall sense of the performance profile of the complex
patch -- which has nothing to do with how it performs against the
master branch. I'd like to find and debug any weird performance
bugs/strange discontinuities in performance. I have a feeling that
there are at least a couple of those lurking in the complex patch
right now. Once we have some confidence that the overall performance
profile of the complex patch "makes sense", we can do more invasive
refactoring (while systematically avoiding new regressions for the
cases that were fixed).
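
To make that kind of comparison concrete, here is a hypothetical query
pair of the sort described above. The table and index are invented for
illustration only (they don't come from the patch or this thread); the
point is just that the two queries return the same rows and differ only
in scan direction:

```sql
-- Hypothetical example: given an index on t (a), both queries qualify
-- the same index and return the same rows. Intuitively they "should"
-- perform about the same; only the scan direction differs.
SELECT * FROM t WHERE a BETWEEN 100 AND 200 ORDER BY a ASC;   -- forwards scan
SELECT * FROM t WHERE a BETWEEN 100 AND 200 ORDER BY a DESC;  -- backwards scan
```

A large, unexplained performance gap between two such queries would be
the kind of strange discontinuity worth debugging.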

In summary, I think that we should focus on fixing smaller open items
for now -- with an emphasis on fixing strange inconsistencies in
performance for distinct-though-similar queries (pairs of queries that
intuitively seem like they should perform very similarly, but somehow
have very different performance). I can't really justify that, but my
gut feeling is that that's the best place to focus our efforts for the
time being.

--
Peter Geoghegan