Thread: Add LSN <-> time conversion functionality

Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
Hi,

Elsewhere [1] I required a way to estimate the time corresponding to a
particular LSN in the past. I devised the attached LSNTimeline, a data
structure mapping LSNs <-> timestamps with decreasing precision for
older time, LSN pairs. This can be used to locate and translate a
particular time to LSN or vice versa using linear interpolation.
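
To make the interpolation concrete, here is a rough sketch of the idea
(simplified, hypothetical names using PostgreSQL's XLogRecPtr/TimestampTz
types -- not the code in the attached patch):

/*
 * Sketch only: estimate the LSN at "target" by assuming WAL was generated
 * at a constant rate between two stored (lsn, time) points bracketing it.
 */
static XLogRecPtr
estimate_lsn_at_time(XLogRecPtr lsn1, TimestampTz t1,
                     XLogRecPtr lsn2, TimestampTz t2,
                     TimestampTz target)
{
    double      fraction;

    if (t2 == t1)
        return lsn2;            /* degenerate interval */

    fraction = (double) (target - t1) / (double) (t2 - t1);
    return lsn1 + (XLogRecPtr) ((double) (lsn2 - lsn1) * fraction);
}

The same arithmetic, with the axes swapped, gives an estimated time for a
given LSN.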

I've added an instance of the LSNTimeline to PgStat_WalStats and insert
new values to it in background writer's main loop. This patch set also
introduces some new pageinspect functions exposing LSN <-> time
translations.

Outside of being useful to users wondering about the last modification
time of a particular block in a relation, the LSNTimeline can be put to
use in other Postgres sub-systems to govern behavior based on resource
consumption -- using the LSN consumption rate as a proxy.

As mentioned in [1], the LSNTimeline is a prerequisite for my
implementation of a new freeze heuristic which seeks to freeze only
pages which will remain unmodified for a certain amount of wall clock
time. But one can imagine other uses for such translation capabilities.

The pageinspect additions need a bit more work. I didn't bump the
pageinspect version (didn't add the new functions to a new pageinspect
version file). I also didn't exercise the new pageinspect functions in a
test. I was unsure how to write a test which would be guaranteed not to
flake. Because the background writer is what updates the timeline, there
is a remote possibility that the time or LSN returned by the functions
would be 0, so I'm not sure even a test that SELECTs time/lsn > 0 would
always pass.

I also noticed the pageinspect functions don't have XML id attributes
for link discoverability. I planned to add that in a separate commit.

- Melanie

[1] https://www.postgresql.org/message-id/CAAKRu_b3tpbdRPUPh1Q5h35gXhY%3DspH2ssNsEsJ9sDfw6%3DPEAg%40mail.gmail.com

Attachment

Re: Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
On Wed, Dec 27, 2023 at 5:16 PM Melanie Plageman
<melanieplageman@gmail.com> wrote:
>
> Elsewhere [1] I required a way to estimate the time corresponding to a
> particular LSN in the past. I devised the attached LSNTimeline, a data
> structure mapping LSNs <-> timestamps with decreasing precision for
> older time, LSN pairs. This can be used to locate and translate a
> particular time to LSN or vice versa using linear interpolation.

Attached is a new version which fixes one overflow danger I noticed in
the original patch set.

I have also been doing some thinking about the LSNTimeline data
structure. Its array elements are combined before all elements have
been used. This sacrifices precision earlier than required. I tried
some alternative structures that would use the whole array. There are
a lot of options, though. Currently each element fits twice as many
members as the preceding element. To use the whole array, we'd have to
change the behavior from filling each element to its max capacity to
something that filled elements only partially. I'm not sure what the
best distribution would be.

> I've added an instance of the LSNTimeline to PgStat_WalStats and insert
> new values to it in background writer's main loop. This patch set also
> introduces some new pageinspect functions exposing LSN <-> time
> translations.

I was thinking that maybe it is silly to have the functions allowing
for translation between LSN and time in the pageinspect extension --
since they are not specifically related to pages (pages are just an
object that has an accessible LSN). I was thinking perhaps we add them
as system information functions. However, the closest related
functions I can think of are those to get the current LSN (like
pg_current_wal_lsn()). And those are listed as system administration
functions under backup control [1]. I don't think the LSN <-> time
functionality fits under backup control.

If I did put them in one of the system information function sections
[2], which one would work best?

- Melanie

[1] https://www.postgresql.org/docs/devel/functions-admin.html#FUNCTIONS-ADMIN-BACKUP
[2] https://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO

Attachment

Re: Add LSN <-> time conversion functionality

From
Tomas Vondra
Date:
Hi,

I took a look at this today, to try to understand the purpose and how it
works. Let me share some initial thoughts and questions I have. Some of
this may be wrong/missing the point, so apologies for that.

The goal seems worthwhile in general - the way I understand it, the
patch aims to provide tracking of WAL "velocity", i.e. how much WAL was
generated over time. Which we now don't have, as we only maintain simple
cumulative stats/counters. And then uses it to estimate timestamp for a
given LSN, and vice versa, because that's what the pruning patch needs.

When I first read this, I immediately started wondering if this might
use the commit timestamp stuff we already have. Because for each commit
we already store the LSN and commit timestamp, right? But I'm not sure
that would be a good match - the commit_ts serves a very special purpose
of mapping XID => (LSN, timestamp), I don't see how to make it work for
(LSN=>timestamp) and (timestamp=>LSN) very easily.


As for the inner workings of the patch, my understanding is this:

- "LSNTimeline" consists of "LSNTime" entries representing (LSN,ts)
points, but those points are really "buckets" that grow larger and
larger for older periods of time.

- The entries are being added from bgwriter, i.e. on each loop we add
the current (LSN, timestamp) into the timeline.

- We then estimate LSN/timestamp using the data stored in LSNTimeline
(either LSN => timestamp, or the opposite direction).


Some comments in arbitrary order:

- AFAIK each entry represents an interval of time, and the next (older)
interval is twice as long, right? So the first interval is 1 second,
then 2 seconds, 4 seconds, 8 seconds, ...

- But I don't understand how the LSNTimeline entries are "aging" and get
less accurate, while the "current" bucket is short. lsntime_insert()
seems to simply move to the next entry, but doesn't that mean we insert
the entries into larger and larger buckets?

- The comments never really spell out what amount of time the entries cover
/ how granular it is. My understanding is it's simply measured in number
of entries added, which is assumed to be constant and driven by
bgwriter_delay, right? Which is 200ms by default. Which seems fine, but
isn't the hibernation (HIBERNATE_FACTOR) going to mess with it?

Is there some case where bgwriter would just loop without sleeping,
filling the timeline much faster? (I can't think of any, but ...)

- The LSNTimeline comment claims an array of size 64 is large enough to
not need to care about filling it, but maybe it should briefly explain
why we can never fill it (I guess 2^64 is just too many).

- I don't quite understand why 0005 adds the functions to pageinspect.
This has nothing to do with pages, right?

- Not sure why we need 0001. Just so that the "estimate" functions in
0002 have a convenient "start" point? Surely we could look at the
current LSNTimeline data and use the oldest value, or (if there's no
data) use the current timestamp/LSN?

- I wonder what happens if we lose the data - we know that if people
reset statistics for whatever reason (or just lose them because of a
crash, or because they're on a replica), bad things happen to
autovacuum. What's the (expected) impact on pruning?

- What about a SRF function that outputs the whole LSNTimeline? Would be
useful for debugging / development, I think. (Just a suggestion).


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
Thanks so much for reviewing!

On Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
>
> When I first read this, I immediately started wondering if this might
> use the commit timestamp stuff we already have. Because for each commit
> we already store the LSN and commit timestamp, right? But I'm not sure
> that would be a good match - the commit_ts serves a very special purpose
> of mapping XID => (LSN, timestamp), I don't see how to make it work for
> (LSN=>timestamp) and (timestamp=>LSN) very easily.

I took a look at the code in commit_ts.c, and I really don't see a way
of reusing any of this commit<->timestamp infrastructure for
timestamp<->LSN mappings.

> As for the inner workings of the patch, my understanding is this:
>
> - "LSNTimeline" consists of "LSNTime" entries representing (LSN,ts)
> points, but those points are really "buckets" that grow larger and
> larger for older periods of time.

Yes, they are buckets in the sense that they represent multiple values
but each contains a single LSNTime value which is the minimum of all
the LSNTimes we "merged" into that single array element. In order to
represent a range of time, you need to use two array elements. The
linear interpolation from time <-> LSN is all done with two elements.

> - AFAIK each entry represents an interval of time, and the next (older)
> interval is twice as long, right? So the first interval is 1 second,
> then 2 seconds, 4 seconds, 8 seconds, ...
>
> - But I don't understand how the LSNTimeline entries are "aging" and get
> less accurate, while the "current" bucket is short. lsntime_insert()
> seems to simply move to the next entry, but doesn't that mean we insert
> the entries into larger and larger buckets?

Because the earlier array elements can represent fewer logical members
than later ones and because elements are merged into the next element
when space runs out, later array elements will contain older data and
more of it, so those "ranges" will be larger. But, after thinking
about it and also reading your feedback, I realized my algorithm was
silly because it starts merging logical members before it has even
used the whole array.

The attached v3 has a new algorithm. Now, LSNTimes are added from the
end of the array forward until all array elements have at least one
logical member (array length == volume). Once array length == volume,
new LSNTimes will result in merging logical members in existing
elements. We want to merge older members because those can be less
precise. So, the number of logical members per array element will
always monotonically increase starting from the beginning of the array
(which contains the most recent data) and going to the end. We want to
use all the available space in the array. That means that each LSNTime
insertion will always result in a single merge. We want the timeline
to be inclusive of the oldest data, so merging means taking the
smaller value of two LSNTime values. I had to pick a rule for choosing
which elements to merge. So, I chose the merge target as the oldest
element whose logical membership is < 2x its predecessor. I merge the
merge target's predecessor into the merge target. Then I move all of
the intervening elements down 1. Then I insert the new LSNTime at
index 0.
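
To illustrate the rule, a sketch with made-up names (the real code in the
attached v3 may differ):

/*
 * data[0] holds the most recent LSNTime, data[volume - 1] the oldest.
 * members[i] counts how many logical LSNTimes were folded into element i.
 * Return the index of the oldest element holding fewer than twice as many
 * members as the element just newer than it.
 */
static int
lsntime_merge_target(const uint64 *members, int volume)
{
    for (int i = volume - 1; i > 0; i--)
    {
        if (members[i] < 2 * members[i - 1])
            return i;
    }
    return -1;                  /* shouldn't happen once the array is full */
}

The caller then merges element target - 1 into element target (keeping the
smaller LSNTime and summing the member counts), shifts elements
0 .. target - 2 toward the old end by one, and puts the new LSNTime at
index 0.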

> - The comments never really spell out what amount of time the entries cover
> / how granular it is. My understanding is it's simply measured in number
> of entries added, which is assumed to be constant and driven by
> bgwriter_delay, right? Which is 200ms by default. Which seems fine, but
> isn't the hibernation (HIBERNATE_FACTOR) going to mess with it?
>
> Is there some case where bgwriter would just loop without sleeping,
> filling the timeline much faster? (I can't think of any, but ...)

bgwriter will wake up when there are buffers to flush, which is likely
correlated with there being new LSNs. So, actually it seems like it
might work well to rely on only filling the timeline when there are
things for bgwriter to do.

> - The LSNTimeline comment claims an array of size 64 is large enough to
> not need to care about filling it, but maybe it should briefly explain
> why we can never fill it (I guess 2^64 is just too many).

The new structure fits a different number of members. I have yet to
calculate that number, but it should be explained in the comments once
I do.

For example, if we made an LSNTimeline with volume 4, once every
element had one LSNTime and we needed to start merging, the following
is how many logical members each element would have, starting with the
full array and then after each of four merges:
1111
1112
1122
1114
1124
So, if we store the number of members as an unsigned 64-bit int and we
have an LSNTimeline with volume 64, what is the maximum number of
members we can store if we hold all of the invariants described in my
algorithm above (we only merge when required, every element holds < 2x
the number of logical members as its predecessor, we do exactly one
merge every insertion [when required], membership must monotonically
increase [choose the oldest element meeting the criteria when deciding
what to merge])?

> - I don't quite understand why 0005 adds the functions to pageinspect.
> This has nothing to do with pages, right?

You're right. I just couldn't think of a good place to put the
functions. In version 3, I just put the SQL functions in pgstat_wal.c
and made them generally available (i.e. not in a contrib module). I
haven't added docs back yet. But perhaps a section near the docs
describing pg_xact_commit_timestamp() [1]? I wasn't sure if I should
put the SQL function source code in pgstatfuncs.c -- I kind of prefer
it in pgstat_wal.c but there are no other SQL functions there.

> - Not sure why we need 0001. Just so that the "estimate" functions in
> 0002 have a convenient "start" point? Surely we could look at the
> current LSNTimeline data and use the oldest value, or (if there's no
> data) use the current timestamp/LSN?

When there are 0 or 1 entries in the timeline you'll get an answer
that could be very off if you just return the current timestamp or
LSN. I guess that is okay?

> - I wonder what happens if we lose the data - we know that if people
> reset statistics for whatever reason (or just lose them because of a
> crash, or because they're on a replica), bad things happen to
> autovacuum. What's the (expected) impact on pruning?

This is an important question. Because stats aren't crashsafe, we
could return very inaccurate numbers for some time/LSN values if we
crash. I don't actually know what we could do about that. When I use
the LSNTimeline for the freeze heuristic it is less of an issue
because the freeze heuristic has a fallback strategy when there aren't
enough stats to do its calculations. But other users of this
LSNTimeline will simply get highly inaccurate results (I think?). Is
there anything we could do about this? It seems bad.

Andres had brought up something at some point about, what if the
database is simply turned off for awhile and then turned back on. Even
if you cleanly shut down, will there be "gaps" in the timeline? I
think that could be okay, but it is something to think about.

> - What about a SRF function that outputs the whole LSNTimeline? Would be
> useful for debugging / development, I think. (Just a suggestion).

Good idea! I've added this. Though, maybe there was a simpler way to
implement than I did.
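
For reference, one way to write such an SRF is the stock materialized-SRF
pattern, roughly like this (field and variable names here are illustrative,
not necessarily what the attached patch does):

/*
 * Stuff every (lsn, time) pair of the timeline into the SRF's tuplestore.
 */
static void
lsntime_stream_to_tuplestore(FunctionCallInfo fcinfo,
                             const LSNTimeline *timeline)
{
    ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;

    InitMaterializedSRF(fcinfo, 0);

    for (int i = 0; i < timeline->length; i++)  /* member names assumed */
    {
        Datum   values[2];
        bool    nulls[2] = {false, false};

        values[0] = LSNGetDatum(timeline->data[i].lsn);
        values[1] = TimestampTzGetDatum(timeline->data[i].time);
        tuplestore_putvalues(rsinfo->setResult, rsinfo->setDesc,
                             values, nulls);
    }
}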

Just a note, all of my comments could use a lot of work, but I want to
get consensus on the algorithm before I make sure and write about it
in a perfect way.

- Melanie

[1] https://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO-COMMIT-TIMESTAMP

Attachment

Re: Add LSN <-> time conversion functionality

From
Daniel Gustafsson
Date:
> On 22 Feb 2024, at 03:45, Melanie Plageman <melanieplageman@gmail.com> wrote:
> On Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra
> <tomas.vondra@enterprisedb.com> wrote:

>> - Not sure why we need 0001. Just so that the "estimate" functions in
>> 0002 have a convenient "start" point? Surely we could look at the
>> current LSNTimeline data and use the oldest value, or (if there's no
>> data) use the current timestamp/LSN?
>
> When there are 0 or 1 entries in the timeline you'll get an answer
> that could be very off if you just return the current timestamp or
> LSN. I guess that is okay?

I don't think that's a huge problem at such a young "lsn-age", but I might be
missing something.

>> - I wonder what happens if we lose the data - we know that if people
>> reset statistics for whatever reason (or just lose them because of a
>> crash, or because they're on a replica), bad things happen to
>> autovacuum. What's the (expected) impact on pruning?
>
> This is an important question. Because stats aren't crashsafe, we
> could return very inaccurate numbers for some time/LSN values if we
> crash. I don't actually know what we could do about that. When I use
> the LSNTimeline for the freeze heuristic it is less of an issue
> because the freeze heuristic has a fallback strategy when there aren't
> enough stats to do its calculations. But other users of this
> LSNTimeline will simply get highly inaccurate results (I think?). Is
> there anything we could do about this? It seems bad.

A complication with this over stats is that we can't recompute this in case of
a crash/corruption issue.  The simple solution is to consider this unlogged
data and start fresh at every unclean shutdown, but I have a feeling that won't
be good enough for basing heuristics on?

> Andres had brought up something at some point about, what if the
> database is simply turned off for awhile and then turned back on. Even
> if you cleanly shut down, will there be "gaps" in the timeline? I
> think that could be okay, but it is something to think about.

The gaps would represent reality, so there is nothing wrong per se with gaps,
but if they inflate the interval of a bucket which in turn impacts the
precision of the results then that can be a problem.

> Just a note, all of my comments could use a lot of work, but I want to
> get consensus on the algorithm before I make sure and write about it
> in a perfect way.

I'm not sure "a lot of work" is accurate, they seem pretty much there to me,
but I think that an illustration of running through the algorithm in an
ascii-art array would be helpful.


Reading through this I think such a function has merits, not only for your
usecase but other heuristic based work and quite possibly systems debugging.

While the bucketing algorithm is a clever algorithm for degrading precision for
older entries without discarding them, I do fear that we'll risk ending up with
answers like "somewhere between in the past and even further in the past".
I've been playing around with various compression algorithms for packing the
buckets such that we can retain precision for longer.  Since you were aiming to
work on other patches leading up to the freeze, let's pick this up again once
things calm down.

When compiling I got this warning for lsntime_merge_target:

pgstat_wal.c:242:1: warning: non-void function does not return a value in all control paths [-Wreturn-type]
}
^
1 warning generated.

The issue seems to be that the can't-really-happen path is protected with an
Assertion, which is a no-op for production builds.  I think we should handle
the error rather than rely on testing catching it (since if it does happen even
though it can't, it's going to be when it's for sure not tested).  Returning an
invalid array subscript like -1 and testing for it in lsntime_insert, with an
elog(WARNING..), seems enough.


-    last_snapshot_lsn <= GetLastImportantRecPtr())
+    last_snapshot_lsn <= current_lsn)
I think we should delay extracting the LSN with GetLastImportantRecPtr until we
know that we need it, to avoid taking locks in this codepath unless needed.

I've attached a diff with the above suggestions which applies on top of your
patchset.

--
Daniel Gustafsson



Attachment

Re: Add LSN <-> time conversion functionality

From
Tomas Vondra
Date:

On 2/22/24 03:45, Melanie Plageman wrote:
> Thanks so much for reviewing!
> 
> On Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra
> <tomas.vondra@enterprisedb.com> wrote:
>>
>> When I first read this, I immediately started wondering if this might
>> use the commit timestamp stuff we already have. Because for each commit
>> we already store the LSN and commit timestamp, right? But I'm not sure
>> that would be a good match - the commit_ts serves a very special purpose
>> of mapping XID => (LSN, timestamp), I don't see how to make it work for
>> (LSN=>timestamp) and (timestamp=>LSN) very easily.
> 
> I took a look at the code in commit_ts.c, and I really don't see a way
> of reusing any of this commit<->timestamp infrastructure for
> timestamp<->LSN mappings.
> 
>> As for the inner workings of the patch, my understanding is this:
>>
>> - "LSNTimeline" consists of "LSNTime" entries representing (LSN,ts)
>> points, but those points are really "buckets" that grow larger and
>> larger for older periods of time.
> 
> Yes, they are buckets in the sense that they represent multiple values
> but each contains a single LSNTime value which is the minimum of all
> the LSNTimes we "merged" into that single array element. In order to
> represent a range of time, you need to use two array elements. The
> linear interpolation from time <-> LSN is all done with two elements.
> 
>> - AFAIK each entry represents an interval of time, and the next (older)
>> interval is twice as long, right? So the first interval is 1 second,
>> then 2 seconds, 4 seconds, 8 seconds, ...
>>
>> - But I don't understand how the LSNTimeline entries are "aging" and get
>> less accurate, while the "current" bucket is short. lsntime_insert()
>> seems to simply move to the next entry, but doesn't that mean we insert
>> the entries into larger and larger buckets?
> 
> Because the earlier array elements can represent fewer logical members
> than later ones and because elements are merged into the next element
> when space runs out, later array elements will contain older data and
> more of it, so those "ranges" will be larger. But, after thinking
> about it and also reading your feedback, I realized my algorithm was
> silly because it starts merging logical members before it has even
> used the whole array.
> 
> The attached v3 has a new algorithm. Now, LSNTimes are added from the
> end of the array forward until all array elements have at least one
> logical member (array length == volume). Once array length == volume,
> new LSNTimes will result in merging logical members in existing
> elements. We want to merge older members because those can be less
> precise. So, the number of logical members per array element will
> always monotonically increase starting from the beginning of the array
> (which contains the most recent data) and going to the end. We want to
> use all the available space in the array. That means that each LSNTime
> insertion will always result in a single merge. We want the timeline
> to be inclusive of the oldest data, so merging means taking the
> smaller value of two LSNTime values. I had to pick a rule for choosing
> which elements to merge. So, I chose the merge target as the oldest
> element whose logical membership is < 2x its predecessor. I merge the
> merge target's predecessor into the merge target. Then I move all of
> the intervening elements down 1. Then I insert the new LSNTime at
> index 0.
> 

I can't help but think about t-digest [1], which also merges data into
variable-sized buckets (called centroids, which is a pair of values,
just like LSNTime). But the merging is driven by something called "scale
function" which I found like a pretty nice approach to this, and it
yields some guarantees regarding accuracy. I wonder if we could do
something similar here ...

The t-digest is a way to approximate quantiles, and the default scale
function is optimized for best accuracy on the extremes (close to 0.0
and 1.0), but it's possible to use scale functions that optimize only
for accuracy close to 1.0.

This may be misguided, but I see similarity between quantiles and what
LSNTimeline does - timestamps are ordered, and quantiles close to 0.0
are "old timestamps" while quantiles close to 1.0 are "now".

And t-digest also defines a pretty efficient algorithm to merge data in
a way that gradually combines older "buckets" into larger ones.
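
(For reference, if I'm reading the paper right, the default scale function
is roughly k(q) = delta / (2 * pi) * asin(2q - 1); its slope blows up near
q = 0 and q = 1, which is what forces small buckets at both tails. A
one-sided variant would keep only the q ~ 1.0 end fine-grained, which seems
closer to what LSNTimeline wants.)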

>> - The comments never really spell out what amount of time the entries cover
>> / how granular it is. My understanding is it's simply measured in number
>> of entries added, which is assumed to be constant and driven by
>> bgwriter_delay, right? Which is 200ms by default. Which seems fine, but
>> isn't the hibernation (HIBERNATE_FACTOR) going to mess with it?
>>
>> Is there some case where bgwriter would just loop without sleeping,
>> filling the timeline much faster? (I can't think of any, but ...)
> 
> bgwriter will wake up when there are buffers to flush, which is likely
> correlated with there being new LSNs. So, actually it seems like it
> might work well to rely on only filling the timeline when there are
> things for bgwriter to do.
> 
>> - The LSNTimeline comment claims an array of size 64 is large enough to
>> not need to care about filling it, but maybe it should briefly explain
>> why we can never fill it (I guess 2^64 is just too many).
> 
> The new structure fits a different number of members. I have yet to
> calculate that number, but it should be explained in the comments once
> I do.
> 
> For example, if we made an LSNTimeline with volume 4, once every
> element had one LSNTime and we needed to start merging, the following
> is how many logical members each element would have, starting with the
> full array and then after each of four merges:
> 1111
> 1112
> 1122
> 1114
> 1124
> So, if we store the number of members as an unsigned 64-bit int and we
> have an LSNTimeline with volume 64, what is the maximum number of
> members we can store if we hold all of the invariants described in my
> algorithm above (we only merge when required, every element holds < 2x
> the number of logical members as its predecessor, we do exactly one
> merge every insertion [when required], membership must monotonically
> increase [choose the oldest element meeting the criteria when deciding
> what to merge])?
> 

I guess that should be enough for (2^64-1) logical members, because it's
a sequence 1, 2, 4, 8, ..., 2^63. Seems enough.
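
(That's the geometric sum 2^0 + 2^1 + ... + 2^63 = 2^64 - 1.)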

But now that I think about it, does it make sense to do the merging
based on the number of logical members? Shouldn't this really be driven
by the "length" of the time interval the member covers?

>> - I don't quite understand why 0005 adds the functions to pageinspect.
>> This has nothing to do with pages, right?
> 
> You're right. I just couldn't think of a good place to put the
> functions. In version 3, I just put the SQL functions in pgstat_wal.c
> and made them generally available (i.e. not in a contrib module). I
> haven't added docs back yet. But perhaps a section near the docs
> describing pg_xact_commit_timestamp() [1]? I wasn't sure if I should
> put the SQL function source code in pgstatfuncs.c -- I kind of prefer
> it in pgstat_wal.c but there are no other SQL functions there.
> 

OK, pgstat_wal seems like a good place.

>> - Not sure why we need 0001. Just so that the "estimate" functions in
>> 0002 have a convenient "start" point? Surely we could look at the
>> current LSNTimeline data and use the oldest value, or (if there's no
>> data) use the current timestamp/LSN?
> 
> When there are 0 or 1 entries in the timeline you'll get an answer
> that could be very off if you just return the current timestamp or
> LSN. I guess that is okay?
> 
>> - I wonder what happens if we lose the data - we know that if people
>> reset statistics for whatever reason (or just lose them because of a
>> crash, or because they're on a replica), bad things happen to
>> autovacuum. What's the (expected) impact on pruning?
> 
> This is an important question. Because stats aren't crashsafe, we
> could return very inaccurate numbers for some time/LSN values if we
> crash. I don't actually know what we could do about that. When I use
> the LSNTimeline for the freeze heuristic it is less of an issue
> because the freeze heuristic has a fallback strategy when there aren't
> enough stats to do its calculations. But other users of this
> LSNTimeline will simply get highly inaccurate results (I think?). Is
> there anything we could do about this? It seems bad.
> 
> Andres had brought up something at some point about, what if the
> database is simply turned off for awhile and then turned back on. Even
> if you cleanly shut down, will there be "gaps" in the timeline? I
> think that could be okay, but it is something to think about.
> 
>> - What about a SRF function that outputs the whole LSNTimeline? Would be
>> useful for debugging / development, I think. (Just a suggestion).
> 
> Good idea! I've added this. Though, maybe there was a simpler way to
> implement than I did.
> 

Thanks. I'll take a look.

> Just a note, all of my comments could use a lot of work, but I want to
> get consensus on the algorithm before I make sure and write about it
> in a perfect way.
> 

Makes sense, as long as the comments are sufficiently clear. It's hard
to reach consensus on something not explained clearly enough.


regards


[1]
https://github.com/tdunning/t-digest/blob/main/docs/t-digest-paper/histo.pdf

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Add LSN <-> time conversion functionality

From
Tomas Vondra
Date:
On 3/18/24 15:02, Daniel Gustafsson wrote:
>> On 22 Feb 2024, at 03:45, Melanie Plageman <melanieplageman@gmail.com> wrote:
>> On Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra
>> <tomas.vondra@enterprisedb.com> wrote:
> 
>>> - Not sure why we need 0001. Just so that the "estimate" functions in
>>> 0002 have a convenient "start" point? Surely we could look at the
>>> current LSNTimeline data and use the oldest value, or (if there's no
>>> data) use the current timestamp/LSN?
>>
>> When there are 0 or 1 entries in the timeline you'll get an answer
>> that could be very off if you just return the current timestamp or
>> LSN. I guess that is okay?
> 
> I don't think that's a huge problem at such a young "lsn-age", but I might be
> missing something.
> 
>>> - I wonder what happens if we lose the data - we know that if people
>>> reset statistics for whatever reason (or just lose them because of a
>>> crash, or because they're on a replica), bad things happen to
>>> autovacuum. What's the (expected) impact on pruning?
>>
>> This is an important question. Because stats aren't crashsafe, we
>> could return very inaccurate numbers for some time/LSN values if we
>> crash. I don't actually know what we could do about that. When I use
>> the LSNTimeline for the freeze heuristic it is less of an issue
>> because the freeze heuristic has a fallback strategy when there aren't
>> enough stats to do its calculations. But other users of this
>> LSNTimeline will simply get highly inaccurate results (I think?). Is
>> there anything we could do about this? It seems bad.
> 

Do we have something to calculate a sufficiently good "average" to use
as a default, if we don't have a better value? For example, we know the
timestamp of the last checkpoint, and we know the LSN, right? Maybe if
we're sufficiently far from the checkpoint, we could use that.

Or maybe checkpoint_timeout / max_wal_size would be enough to calculate
some default value?

I wonder how long it takes until LSNTimeline gives us sufficiently good
data for all LSNs we need. That is, if we lose this, how long does it
take until we get enough data to make good decisions?

Why don't we simply WAL-log this in some trivial way? It's pretty small,
so if we WAL-log this once in a while (after a merge happens), that
should not be a problem.

Or a different idea - if we lost the data, but commit_ts is enabled,
can't we "simply" walk commit_ts and feed LSN/timestamp into the
timeline? I guess we don't want to walk 2B transactions, but even just
sampling some recent transactions might be enough, no?

> A complication with this over stats is that we can't recompute this in case of
> a crash/corruption issue.  The simple solution is to consider this unlogged
> data and start fresh at every unclean shutdown, but I have a feeling that won't
> be good enough for basing heuristics on?
> 
>> Andres had brought up something at some point about, what if the
>> database is simply turned off for awhile and then turned back on. Even
>> if you cleanly shut down, will there be "gaps" in the timeline? I
>> think that could be okay, but it is something to think about.
> 
> The gaps would represent reality, so there is nothing wrong per se with gaps,
> but if they inflate the interval of a bucket which in turn impacts the
> precision of the results then that can be a problem.
> 

Well, I think the gaps are a problem in the sense that they disappear
once we start merging the buckets. But maybe that's fine, if we're only
interested in approximate data.

>> Just a note, all of my comments could use a lot of work, but I want to
>> get consensus on the algorithm before I make sure and write about it
>> in a perfect way.
> 
> I'm not sure "a lot of work" is accurate, they seem pretty much there to me,
> but I think that an illustration of running through the algorithm in an
> ascii-art array would be helpful.
> 

+1

> 
> Reading through this I think such a function has merits, not only for your
> usecase but other heuristic based work and quite possibly systems debugging.
> 
> While the bucketing algorithm is a clever algorithm for degrading precision for
> older entries without discarding them, I do fear that we'll risk ending up with
> answers like "somewhere between in the past and even further in the past".
> I've been playing around with various compression algorithms for packing the
> buckets such that we can retain precision for longer.  Since you were aiming to
> work on other patches leading up to the freeze, let's pick this up again once
> things calm down.
> 

I guess this ambiguity is pretty inherent to a structure that does not
keep all the data, and gradually reduces the resolution for old stuff.
But my understanding was that's sufficient for the freezing patch.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Add LSN <-> time conversion functionality

From
"Andrey M. Borodin"
Date:
Hi everyone!

Me, Bharath, and Ilya are at a patch review session at PGConf.dev :) Maybe we got everything wrong, so please consider
that we are just doing training on reviewing patches.


=== Purpose of the patch ===
Currently, we have checkpoint_timeout and max_wal_size to know when we need a checkpoint. This patch brings a
capability to freeze pages based not only on the internal state of the system, but also on wall clock time.
To do so we need infrastructure which will tell us when a page was modified.

The patch in this thread is doing exactly this: in-memory information to map LSNs to wall clock time. The mapping is
maintained by the background writer.

=== Questions ===
1. The patch does not handle server restart. All pages will need freezing after any crash?
2. Some benchmarks to prove the patch does not have a CPU footprint.

=== Nits ===
"Timeline" term is already taken.
The patch needs rebase due to some header changes.
Tests fail on Windows.
The patch lacks tests.
Some docs would be nice, but the feature is for developers.
The mapping is protected for multithreaded access by the walstats LWLock and might have tuplestore_putvalues() under that
lock. That might be a little dangerous, if the tuplestore ends up on disk for some reason (should not happen).


Overall, the patch is a base for a good feature which would help do freezing right on time. Thanks!


Best regards, Bharath, Andrey, Ilya.


Re: Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
On Mon, Mar 18, 2024 at 1:29 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
>
> On 2/22/24 03:45, Melanie Plageman wrote:
> > The attached v3 has a new algorithm. Now, LSNTimes are added from the
> > end of the array forward until all array elements have at least one
> > logical member (array length == volume). Once array length == volume,
> > new LSNTimes will result in merging logical members in existing
> > elements. We want to merge older members because those can be less
> > precise. So, the number of logical members per array element will
> > always monotonically increase starting from the beginning of the array
> > (which contains the most recent data) and going to the end. We want to
> > use all the available space in the array. That means that each LSNTime
> > insertion will always result in a single merge. We want the timeline
> > to be inclusive of the oldest data, so merging means taking the
> > smaller value of two LSNTime values. I had to pick a rule for choosing
> > which elements to merge. So, I chose the merge target as the oldest
> > element whose logical membership is < 2x its predecessor. I merge the
> > merge target's predecessor into the merge target. Then I move all of
> > the intervening elements down 1. Then I insert the new LSNTime at
> > index 0.
> >
>
> I can't help but think about t-digest [1], which also merges data into
> variable-sized buckets (called centroids, which is a pair of values,
> just like LSNTime). But the merging is driven by something called "scale
> function" which I found like a pretty nice approach to this, and it
> yields some guarantees regarding accuracy. I wonder if we could do
> something similar here ...
>
> The t-digest is a way to approximate quantiles, and the default scale
> function is optimized for best accuracy on the extremes (close to 0.0
> and 1.0), but it's possible to use scale functions that optimize only
> for accuracy close to 1.0.
>
> This may be misguided, but I see similarity between quantiles and what
> LSNTimeline does - timestamps are ordered, and quantiles close to 0.0
> are "old timestamps" while quantiles close to 1.0 are "now".
>
> And t-digest also defines a pretty efficient algorithm to merge data in
> a way that gradually combines older "buckets" into larger ones.

I started taking a look at this paper and think the t-digest could be
applicable as a possible alternative data structure to the one I am
using to approximate page age for the actual opportunistic freeze
heuristic -- especially since the centroids are pairs of a mean and a
count. I couldn't quite understand how the t-digest is combining those
centroids. Since I am not computing quantiles over the LSNTimeStream,
though, I think I can probably do something simpler for this part of
the project.

> >> - The LSNTimeline comment claims an array of size 64 is large enough to
> >> not need to care about filling it, but maybe it should briefly explain
> >> why we can never fill it (I guess 2^64 is just too many).
-- snip --
> I guess that should be enough for (2^64-1) logical members, because it's
> a sequence 1, 2, 4, 8, ..., 2^63. Seems enough.
>
> But now that I think about it, does it make sense to do the merging
> based on the number of logical members? Shouldn't this really be driven
> by the "length" of the time interval the member covers?

After reading this, I decided to experiment with a few different
algorithms in python and plot the unabbreviated LSNTimeStream against
various ways of compressing it. You can see the results if you run the
python code here [1].

What I found is that calculating the error that would be introduced by
dropping each point, and then dropping the point which would introduce
the least additional error, produced more accurate results than
combining the oldest entries based on logical membership to fit some
series.

This is inspired by what you said about using the length of segments
to decide which points to merge. In my case, I want to merge segments
that have a similar slope -- those which have a point that is
essentially redundant. I loop through the LSNTimeStream and look for
the point that we can drop with the lowest impact on future
interpolation accuracy. To do this, I calculate the area of the
triangle made by each point on the stream and its adjacent points. The
idea being that if you drop that point, the triangle is the amount of
error you introduce for points being interpolated between the new pair
(previous adjacencies of the dropped point). This has some issues, but
it seems more logical than just always combining the oldest points.
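
In case it helps, the per-point error measure is roughly this (sketch with
made-up names; what I actually experimented with is the python in the
gist):

/*
 * Area of the triangle formed by a point and its two neighbors on the
 * stream, with time on the x axis and LSN on the y axis (shoelace
 * formula).  Dropping the point with the smallest such area should
 * introduce the least interpolation error.  Doubles sidestep overflowing
 * int64 with LSN * time products.
 */
static double
lsn_ts_triangle_area(TimestampTz t0, XLogRecPtr lsn0,
                     TimestampTz t1, XLogRecPtr lsn1,
                     TimestampTz t2, XLogRecPtr lsn2)
{
    double      area;

    area = (double) (t1 - t0) * ((double) lsn2 - (double) lsn0) -
           (double) (t2 - t0) * ((double) lsn1 - (double) lsn0);

    return fabs(area) / 2.0;
}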

If you run the python simulator code, you'll see that for the
LSNTimeStream I generated, using this method produces more accurate
results than either randomly dropping points or using the "combine
oldest members" method.

It would be nice if we could do something with the accumulated error
-- so we could use it to modify estimates when interpolating. I don't
really know how to keep it though. I thought I would just save the
calculated error in one or the other of the adjacent points after
dropping a point, but then what do we do with the error saved in a
point before it is dropped? Add it to the error value in one of the
adjacent points? If we did, what would that even mean? How would we
use it?

- Melanie

[1] https://gist.github.com/melanieplageman/95126993bcb43d4b4042099e9d0ccc11



Re: Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
Thanks for the review!

Attached v4 implements the new algorithm/compression described in [1].

We had discussed off-list possibly using error in some way. So, I'm
interested to know what you think about this method I suggested which
calculates error. It doesn't save the error so that we could use it
when interpolating for reasons I describe in that mail. If you have
any ideas on how to use the calculated error or just how to combine
error when dropping a point, that would be super helpful.

Note that in this version, I've changed the name from LSNTimeline to
LSNTimeStream to address some feedback from another reviewer about
Timeline being already in use in Postgres as a concept.

On Mon, Mar 18, 2024 at 10:02 AM Daniel Gustafsson <daniel@yesql.se> wrote:
>
> > On 22 Feb 2024, at 03:45, Melanie Plageman <melanieplageman@gmail.com> wrote:
> > On Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra
> > <tomas.vondra@enterprisedb.com> wrote:
> >> - I wonder what happens if we lose the data - we know that if people
> >> reset statistics for whatever reason (or just lose them because of a
> >> crash, or because they're on a replica), bad things happen to
> >> autovacuum. What's the (expected) impact on pruning?
> >
> > This is an important question. Because stats aren't crashsafe, we
> > could return very inaccurate numbers for some time/LSN values if we
> > crash. I don't actually know what we could do about that. When I use
> > the LSNTimeline for the freeze heuristic it is less of an issue
> > because the freeze heuristic has a fallback strategy when there aren't
> > enough stats to do its calculations. But other users of this
> > LSNTimeline will simply get highly inaccurate results (I think?). Is
> > there anything we could do about this? It seems bad.
>
> A complication with this over stats is that we can't recompute this in case of
> a crash/corruption issue.  The simple solution is to consider this unlogged
> data and start fresh at every unclean shutdown, but I have a feeling that won't
> be good enough for basing heuristics on?

Yes, I still haven't dealt with this yet. Tomas had a list of
suggestions in an upthread email, so I will spend some time thinking
about those next.

It seems like we might be able to come up with some way of calculating
a valid "default" value or "best guess" which could be used whenever
there isn't enough data. Though, if we crash and lose some time stream
data, we won't know that that data was lost due to a crash so we
wouldn't know to use our "best guess" to make up for it. So, maybe I
should try and rebuild the stream using some combination of WAL, clog,
and commit timestamps? Or perhaps I should do some basic WAL logging
just for this data structure.

> > Andres had brought up something at some point about, what if the
> > database is simply turned off for awhile and then turned back on. Even
> > if you cleanly shut down, will there be "gaps" in the timeline? I
> > think that could be okay, but it is something to think about.
>
> The gaps would represent reality, so there is nothing wrong per se with gaps,
> but if they inflate the interval of a bucket which in turn impacts the
> precision of the results then that can be a problem.

Yes, actually I added some hacky code to the quick and dirty python
simulator I wrote [2] to test out having a big gap with no updates (if
there is no db activity so nothing for bgwriter to do or the db is off
for a while). And it seemed to basically work fine.

> While the bucketing algorithm is a clever algorithm for degrading precision for
> older entries without discarding them, I do fear that we'll risk ending up with
> answers like "somewhere between in the past and even further in the past".
> I've been playing around with various compression algorithms for packing the
> buckets such that we can retain precision for longer.  Since you were aiming to
> work on other patches leading up to the freeze, let's pick this up again once
> things calm down.

Let me know what you think about the new algorithm. I wonder if for
points older than the second to oldest point, we have the function
return something like "older than a year ago" instead of guessing. The
new method doesn't focus on compressing old data though.

> When compiling I got this warning for lsntime_merge_target:
>
> pgstat_wal.c:242:1: warning: non-void function does not return a value in all control paths [-Wreturn-type]
> }
> ^
> 1 warning generated.
>
> The issue seems to be that the can't-really-happen path is protected with an
> Assertion, which is a no-op for production builds.  I think we should handle
> the error rather than rely on testing catching it (since if it does happen even
> though it can't, it's going to be when it's for sure not tested).  Returning an
> invalid array subscript like -1 and testing for it in lsntime_insert, with an
> elog(WARNING..), seems enough.
>
>
> -    last_snapshot_lsn <= GetLastImportantRecPtr())
> +    last_snapshot_lsn <= current_lsn)
> I think we should delay extracting the LSN with GetLastImportantRecPtr until we
> know that we need it, to avoid taking locks in this codepath unless needed.
>
> I've attached a diff with the above suggestions which applies on top of your
> patchset.

I've implemented these review points in the attached v4.

- Melanie

[1] https://www.postgresql.org/message-id/CAAKRu_YbbZGz-X_pm2zXJA%2B6A22YYpaWhOjmytqFL1yF_FCv6w%40mail.gmail.com
[2] https://gist.github.com/melanieplageman/7400e81bbbd518fe08b4af55a9b632d4

Attachment

Re: Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
Thanks so much Bharath, Andrey, and Ilya for the review!

I've posted a new version here [1] which addresses some of your
concerns. I'll comment on those it does not address inline.

On Thu, May 30, 2024 at 1:26 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:
>
> === Questions ===
> 1. The patch does not handle server restart. All pages will need freezing after any crash?

I haven't fixed this yet. See my email for some thoughts on what I
should do here.

> 2. Some benchmarks to prove the patch does not have a CPU footprint.

This is still a todo. Typically when designing a benchmark like this,
I would want to pick a worst-case workload to see how bad it could be.
I wonder if just a write heavy workload like pgbench builtin tpcb-like
would be sufficient?

> === Nits ===
> "Timeline" term is already taken.

I changed it to LSNTimeStream. What do you think?

> The patch needs rebase due to some header changes.

I did this.

> Tests fail on Windows.

I think this was because of the compiler warnings, but I need to
double-check now.

> The patch lacks tests.

I thought about this a bit. I wonder what kind of tests make sense.

I could
1) Add tests with the existing stats tests
(src/test/regress/sql/stats.sql) and just test that bgwriter is in
fact adding to the time stream.

2) Or should I add some infrastructure to be able to create an
LSNTimeStream and then insert values to it and do some validations of
what is added? I did a version of this but it is just much more
annoying with C & SQL than with python (where I tried out my
algorithms) [2].

> Some docs would be nice, but the feature is for developers.

I added some docs.

> The mapping is protected for multithreaded access by the walstats LWLock and might have tuplestore_putvalues() under that
> lock. That might be a little dangerous, if the tuplestore ends up on disk for some reason (should not happen).

Ah, great point! I forgot about the *fetch_stat*() functions. I used
pgstat_fetch_stat_wal() in the latest version so I have a local copy
that I can stuff into the tuplestore without any locking. It won't be
as up-to-date, but I think that is 100% okay for this function.

- Melanie

[1] https://www.postgresql.org/message-id/CAAKRu_a6WSkWPtJCw%3DW%2BP%2Bg-Fw9kfA_t8sMx99dWpMiGHCqJNA%40mail.gmail.com
[2] https://gist.github.com/melanieplageman/95126993bcb43d4b4042099e9d0ccc11



Re: Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
On Wed, Jun 26, 2024 at 10:04 PM Melanie Plageman
<melanieplageman@gmail.com> wrote:
>
> I've implemented these review points in the attached v4.

I realized the docs had a compilation error. Attached v5 fixes that as
well as three bugs I found while using this patch set more intensely
today.

I see Michael has been working on some crash safety for stats here
[1]. I wonder if that would be sufficient for the LSNTimeStream. I
haven't examined his patch functionality yet, though.

I also had an off-list conversation with Robert where he suggested I
could perhaps change the user-facing functions for estimating an
LSN/time conversion to instead return a floor and a ceiling -- instead
of linearly interpolating a guess. This would be a way to keep users
from misunderstanding the accuracy of the functions to translate LSN
<-> time. I'm interested in what others think of this.

I like this idea a lot because it allows me to worry less about how I
decide to compress the data and whether or not it will be accurate for
use cases different than my own (the opportunistic freezing
heuristic). If I can provide a floor and a ceiling that are definitely
accurate, I don't have to worry about misleading people.
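
As a sketch of what the floor/ceiling version could look like (assuming,
for illustration only, entries ordered newest to oldest; names are made
up):

static void
lsn_bounds_at_time(const LSNTimeStream *stream, TimestampTz target,
                   XLogRecPtr *floor_lsn, XLogRecPtr *ceil_lsn)
{
    *floor_lsn = InvalidXLogRecPtr;     /* no lower bound known yet */
    *ceil_lsn = GetXLogInsertRecPtr();  /* nothing stored is newer than now */

    for (int i = 0; i < stream->length; i++)
    {
        if (stream->data[i].time <= target)
        {
            /* newest entry at or before the target: tightest floor */
            *floor_lsn = stream->data[i].lsn;
            break;
        }
        /* entry is newer than the target, so its LSN is a valid ceiling */
        *ceil_lsn = stream->data[i].lsn;
    }
}

Callers would then see exactly how wide the answer really is instead of a
single interpolated value.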

- Melanie

[1] https://www.postgresql.org/message-id/ZnEiqAITL-VgZDoY%40paquier.xyz

Attachment

Re: Add LSN <-> time conversion functionality

From
Andrey M. Borodin
Date:
Hi!

I’m doing another iteration over the patchset.

PgStartLSN = GetXLogInsertRecPtr();
Should this be some kind of RecoveryEndPtr? How is it expected to behave on a Standby in an HA cluster, which was doing crash
recovery of 1y of WAL in a day, then is in startup for a year as a Hot Standby, and then is promoted?

lsn_ts_calculate_error_area() is prone to overflow. Even int64 does not seem capable of accommodating LSN * time. And the
function may return a negative result, despite claiming an area as the result. It’s intended, but a little misleading.

i-- > 0
Is there a point in counting backward in the loop?
Consider dropping not one by one, but half of the stream; the LSNTimeStream is ~2Kb and it’s loaded as a whole into the
cache.



> On 27 Jun 2024, at 07:18, Melanie Plageman <melanieplageman@gmail.com> wrote:
>
>> 2. Some benchmarks to prove the patch does not have a CPU footprint.
>
> This is still a todo. Typically when designing a benchmark like this,
> I would want to pick a worst-case workload to see how bad it could be.
> I wonder if just a write heavy workload like pgbench builtin tpcb-like
> would be sufficient?

Increasing background writer activity to maximum and not seeing LSNTimeStream function in `perf top` seems enough to
me.

>
>> === Nits ===
>> "Timeline" term is already taken.
>
> I changed it to LSNTimeStream. What do you think?
Sounds good to me.


>
>> Tests fail on Windows.
>
> I think this was because of the compiler warnings, but I need to
> double-check now.
Nope, it really looks more serious.
[12:31:25.701] FAILED: src/backend/postgres_lib.a.p/utils_activity_pgstat_wal.c.obj
[12:31:25.701] "cl" "-Isrc\backend\postgres_lib.a.p" "-Isrc\include" "-I..\src\include" "-Ic:\openssl\1.1\include"
"-I..\src\include\port\win32" "-I..\src\include\port\win32_msvc" "/MDd" "/FIpostgres_pch.h" "/Yupostgres_pch.h"
"/Fpsrc\backend\postgres_lib.a.p\postgres_pch.pch" "/nologo" "/showIncludes" "/utf-8" "/W2" "/Od" "/Zi" "/DWIN32"
"/DWINDOWS" "/D__WINDOWS__" "/D__WIN32__" "/D_CRT_SECURE_NO_DEPRECATE" "/D_CRT_NONSTDC_NO_DEPRECATE" "/wd4018" "/wd4244"
"/wd4273" "/wd4101" "/wd4102" "/wd4090" "/wd4267" "-DBUILDING_DLL" "/FS"
"/FdC:\cirrus\build\src\backend\postgres_lib.pdb" /Fosrc/backend/postgres_lib.a.p/utils_activity_pgstat_wal.c.obj "/c"
../src/backend/utils/activity/pgstat_wal.c
[12:31:25.701] ../src/backend/utils/activity/pgstat_wal.c(433): error C2375: 'pg_estimate_lsn_at_time': redefinition; different linkage
[12:31:25.701] c:\cirrus\build\src\include\utils/fmgrprotos.h(2906): note: see declaration of 'pg_estimate_lsn_at_time'
[12:31:25.701] ../src/backend/utils/activity/pgstat_wal.c(434): error C2375: 'pg_estimate_time_at_lsn': redefinition; different linkage
[12:31:25.701] c:\cirrus\build\src\include\utils/fmgrprotos.h(2905): note: see declaration of 'pg_estimate_time_at_lsn'
[12:31:25.701] ../src/backend/utils/activity/pgstat_wal.c(435): error C2375: 'pg_lsntime_stream': redefinition; different linkage
[12:31:25.858] c:\cirrus\build\src\include\utils/fmgrprotos.h(2904): note: see declaration of 'pg_lsntime_stream'


>
>> The patch lacks tests.
>
> I thought about this a bit. I wonder what kind of tests make sense.
>
> I could
> 1) Add tests with the existing stats tests
> (src/test/regress/sql/stats.sql) and just test that bgwriter is in
> fact adding to the time stream.
>
> 2) Or should I add some infrastructure to be able to create an
> LSNTimeStream and then insert values to it and do some validations of
> what is added? I did a version of this but it is just much more
> annoying with C & SQL than with python (where I tried out my
> algorithms) [2].

I think just a test which calls functions and discards the result would greatly increase coverage.


> On 29 Jun 2024, at 03:09, Melanie Plageman <melanieplageman@gmail.com> wrote:
> change the user-facing functions for estimating an
> LSN/time conversion to instead return a floor and a ceiling -- instead
> of linearly interpolating a guess. This would be a way to keep users
> from misunderstanding the accuracy of the functions to translate LSN
> <-> time.

I think this is a good idea. And it covers the “server restart problem” well. If the API just returns -inf as a boundary,
the caller can correctly interpret this situation.

Thanks! Looking forward to more timely freezing.


Best regards, Andrey Borodin.


Re: Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
Thanks for the review! v6 attached.

On Sat, Jul 6, 2024 at 1:36 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:
>
> PgStartLSN = GetXLogInsertRecPtr();
> Should this be some kind of RecoveryEndPtr? How is it expected to behave on a Standby in an HA cluster, which was doing crash
> recovery of 1y of WAL in a day, then is in startup for a year as a Hot Standby, and then is promoted?

So, I don't think we will allow use of the LSNTimeStream on a standby,
since it is unclear what it would mean on a standby. For example, do
you want to know the time the LSN was generated or the time it was
replayed? Note that bgwriter won't insert values to the time stream on
a standby (it explicitly checks).

But, you bring up an issue that I don't quite know what to do about.
If the standby doesn't have an LSNTimeStream, then when it is
promoted, LSN <-> time conversions of LSNs and times before the
promotion seem impossible. Maybe if the stats file is getting written
out at checkpoints, we could restore from that previous primary's file
after promotion?

This brings up a second issue, which is that, currently, bgwriter
won't insert into the time stream when wal_level is minimal. I'm not
sure exactly how to resolve it because I am relying on the "last
important rec pointer" and the LOG_SNAPSHOT_INTERVAL_MS to throttle
when the bgwriter actually inserts new records into the LSNTimeStream.
I could come up with a different timeout interval for updating the
time stream. Perhaps I should do that?
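
Something like this is what I mean by a separate timeout (the interval
value and the pgstat_* function name are made up):

#define LSNTIMESTREAM_UPDATE_MS 30000   /* made-up value */

static TimestampTz last_stream_update = 0;

static void
maybe_update_lsntimestream(void)
{
    TimestampTz now = GetCurrentTimestamp();

    if (TimestampDifferenceExceeds(last_stream_update, now,
                                   LSNTIMESTREAM_UPDATE_MS))
    {
        /* GetLastImportantRecPtr() or GetXLogInsertRecPtr() */
        pgstat_wal_update_lsntimestream(GetLastImportantRecPtr(), now);
        last_stream_update = now;
    }
}

That would decouple time stream updates from the standby-snapshot logging
logic entirely.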

> lsn_ts_calculate_error_area() is prone to overflow. Even int64 does not seem capable of accommodating LSN * time. And the
> function may return a negative result, despite claiming an area as the result. It’s intended, but a little misleading.

Ah, great point. I've fixed this.

> i-- > 0
> Is there a point in doing a backward count in the loop?
> Consider dropping not one element at a time, but half of the stream; LSNTimeStream is ~2Kb and it’s loaded as a whole into the cache.

Yes, the backwards looping was super confusing. It was a relic of my
old design. Even without your point about cache locality, the code is
much harder to understand with the backwards looping. I've changed the
array to fill forwards and be accessed with forward loops.

> > On 27 Jun 2024, at 07:18, Melanie Plageman <melanieplageman@gmail.com> wrote:
> >
> >> 2. Some benchmarks to proof the patch does not have CPU footprint.
> >
> > This is still a todo. Typically when designing a benchmark like this,
> > I would want to pick a worst-case workload to see how bad it could be.
> > I wonder if just a write heavy workload like pgbench builtin tpcb-like
> > would be sufficient?
>
> Increasing background writer activity to maximum and not seeing LSNTimeStream function in `perf top` seems enough to me.

I've got this on my TODO.

> >> Tests fail on Windows.
> >
> > I think this was because of the compiler warnings, but I need to
> > double-check now.
> Nope, it really looks more serious.
> [12:31:25.701] ../src/backend/utils/activity/pgstat_wal.c(433): error C2375: 'pg_estimate_lsn_at_time': redefinition; different linkage

Ah, yes. I mistakenly added the functions to pg_proc.dat and also
called PG_FUNCTION_INFO_V1 for the functions. I've fixed it.

> >> The patch lacks tests.
> >
> > I thought about this a bit. I wonder what kind of tests make sense.
> >
> > I could
> > 1) Add tests with the existing stats tests
> > (src/test/regress/sql/stats.sql) and just test that bgwriter is in
> > fact adding to the time stream.
> >
> > 2) Or should I add some infrastructure to be able to create an
> > LSNTimeStream and then insert values to it and do some validations of
> > what is added? I did a version of this but it is just much more
> > annoying with C & SQL than with python (where I tried out my
> > algorithms) [2].
>
> I think just a test which calls functions and discards the result would greatly increase coverage.

I've added tests of the two main conversion functions. I didn't add a
test of the function which gets the whole stream (pg_lsntime_stream())
because I don't think I can guarantee it won't be empty -- so I'm not
sure what I could do with it in a test.

> > On 29 Jun 2024, at 03:09, Melanie Plageman <melanieplageman@gmail.com> wrote:
> > change the user-facing functions for estimating an
> > LSN/time conversion to instead return a floor and a ceiling -- instead
> > of linearly interpolating a guess. This would be a way to keep users
> > from misunderstanding the accuracy of the functions to translate LSN
> > <-> time.
>
> I think this is a good idea. And it covers the “server restart problem” well. If the API just returns -inf as a boundary, the caller can correctly interpret this situation.

Providing "ceiling" and "floor" user functions is still a TODO for me,
however, I think that the patch mostly does handle server restarts.

In the event of a restart, the cumulative stats system will have
persisted our time stream, so the LSNTimeStream will just be read back
in with the rest of the stats. I've added logic to ensure that if the
PgStartLSN is newer than our oldest LSNTimeStream entry, we use the
oldest entry instead of PgStartLSN when doing conversions LSN <->
time.

As for a crash, stats do not persist crashes, but I think Michael's
patch will go in to write out the stats file at checkpoints, and then
this should be good enough.

Is there anything else you think that is an issue with restarts?

> Thanks! Looking forward to more timely freezing.

Thanks! I'll be posting a new version of the opportunistic freezing
patch that uses the time stream quite soon, so I hope you'll take a
look at that as well!

- Melanie

Attachment

Re: Add LSN <-> time conversion functionality

From
"Andrey M. Borodin"
Date:
This is a copy of my message for the pgsql-hackers mailing list. Unfortunately the original message was rejected because one of the recipients' addresses is blocked.

> On 1 Aug 2024, at 10:54, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:
>
>
>
>> On 1 Aug 2024, at 05:44, Melanie Plageman <melanieplageman@gmail.com> wrote:
>>
>> Thanks for the review! v6 attached.
>>
>> On Sat, Jul 6, 2024 at 1:36 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:
>>>
>>> PgStartLSN = GetXLogInsertRecPtr();
>>> Should this be kind of RecoveryEndPtr? How is it expected to behave on Standby in HA cluster, which was doing a crash recovery of 1y WALs in a day, then is in startup for a year as a Hot Standby, and then is promoted?
>>
>> So, I don't think we will allow use of the LSNTimeStream on a standby,
>> since it is unclear what it would mean on a standby. For example, do
>> you want to know the time the LSN was generated or the time it was
>> replayed? Note that bgwriter won't insert values to the time stream on
>> a standby (it explicitly checks).
>
> Yes, I mentioned Standby because PgStartLSN is not what it says it is.
>
>>
>> But, you bring up an issue that I don't quite know what to do about.
>> If the standby doesn't have an LSNTimeStream, then when it is
>> promoted, LSN <-> time conversions of LSNs and times before the
>> promotion seem impossible. Maybe if the stats file is getting written
>> out at checkpoints, we could restore from that previous primary's file
>> after promotion?
>
> I’m afraid that clocks of a Primary from previous timeline might be not in sync with ours.
> It’s OK if it causes an error, we just need to be prepared for when they indicate values from the future. Perhaps by shifting their last point to our “PgStartLSN”.
>
>>
>> This brings up a second issue, which is that, currently, bgwriter
>> won't insert into the time stream when wal_level is minimal. I'm not
>> sure exactly how to resolve it because I am relying on the "last
>> important rec pointer" and the LOG_SNAPSHOT_INTERVAL_MS to throttle
>> when the bgwriter actually inserts new records into the LSNTimeStream.
>> I could come up with a different timeout interval for updating the
>> time stream. Perhaps I should do that?
>
> IDK. My knowledge of bgwriter is not enough to give a meaningful advise here.
>
>>
>>> lsn_ts_calculate_error_area() is prone to overflow. Even int64 does not seem capable of accommodating LSN*time. And the function may return a negative result, despite claiming area as a result. It’s intended, but a little misleading.
>>
>> Ah, great point. I've fixed this.
>
> Well, not exactly. The result of lsn_ts_calculate_error_area() is still fabs()’ed upon usage. I’d recommend doing the fabs in the function.
> BTW lsn_ts_calculate_error_area() has no prototype.
>
> Also, I’m not a big fan of using IEEE 754 float in this function. This data type has 24 significand bits.
> Consider that the current timestamp has 50 binary digits. Let’s assume realistic LSNs have the same 50 bits.
> Then our rounding error is 2^76 byte*microseconds.
> Let’s assume we are interested to measure time on a scale of 1GB WAL records.
> This gives us rounding error of 2^46 microseconds = 2^26 seconds = 64 million seconds = 2 years.
> Seems like a gross error.
>
> If we use IEEE 754 doubles we have 53 significand bits. And the rounding error will be on a scale of 128 microseconds per GB, which is acceptable.
>
> So I think double is better than float here.
>
> Nitpicking, but I’d prefer to sum up (triangle2 + triangle3 + rectangle_part) before subtracting. This can save a bit of precision (smaller figures can have a lesser exponent).
>
>
>>>> On 29 Jun 2024, at 03:09, Melanie Plageman <melanieplageman@gmail.com> wrote:
>>>> change the user-facing functions for estimating an
>>>> LSN/time conversion to instead return a floor and a ceiling -- instead
>>>> of linearly interpolating a guess. This would be a way to keep users
>>>> from misunderstanding the accuracy of the functions to translate LSN
>>>> <-> time.
>>>
>>> I think this is a good idea. And it covers the “server restart problem” well. If the API just returns -inf as a boundary, the caller can correctly interpret this situation.
>>
>> Providing "ceiling" and "floor" user functions is still a TODO for me,
>> however, I think that the patch mostly does handle server restarts.
>>
>> In the event of a restart, the cumulative stats system will have
>> persisted our time stream, so the LSNTimeStream will just be read back
>> in with the rest of the stats. I've added logic to ensure that if the
>> PgStartLSN is newer than our oldest LSNTimeStream entry, we use the
>> oldest entry instead of PgStartLSN when doing conversions LSN <->
>> time.
>>
>> As for a crash, stats do not persist crashes, but I think Michael's
>> patch will go in to write out the stats file at checkpoints, and then
>> this should be good enough.
>>
>> Is there anything else you think that is an issue with restarts?
>
> Nope, looks good to me.
>
>>
>>> Thanks! Looking forward to more timely freezing.
>>
>> Thanks! I'll be posting a new version of the opportunistic freezing
>> patch that uses the time stream quite soon, so I hope you'll take a
>> look at that as well!
>
> Great! Thank you!
> Besides your TODOs and my nitpicking this patch series looks RfC to me.
>
> I have to address some review comments on my patches, then I hope I’ll switch to reviewing opportunistic freezing.
>
>
> Best regards, Andrey Borodin.





Re: Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
Attached v7 changes the SQL-callable functions to return ranges of
LSNs and times covering the target time or LSN instead of linearly
interpolating an approximate answer.

I also changed the frequency and conditions under which the background
writer updates the global LSNTimeStream. There is now a dedicated
interval at which the LSNTimeStream is updated (instead of reusing the
log standby snapshot interval).

I also found that it is incorrect to set PgStartLSN to the insert LSN
in PostmasterMain(). The XLog buffer cache is not guaranteed to be
initialized in time. Instead of trying to provide an LSN lower bound
for locating times before those recorded on the global LSNTimeStream,
I simply return a lower bound of InvalidXLogRecPtr. Similarly, I
provide a lower bound of -infinity when locating LSNs before those
recorded on the global LSNTimeStream.
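
To make that concrete, here is a standalone sketch of the kind of bracketing lookup I mean (plain C; LSNTimePoint, lsn_bounds_for_time, and INVALID_LSN are just illustrative names, not what the patch uses):

#include <stdint.h>
#include <stdio.h>

typedef struct
{
    int64_t  time;   /* microseconds since some epoch */
    uint64_t lsn;
} LSNTimePoint;

#define INVALID_LSN ((uint64_t) 0)   /* stand-in for InvalidXLogRecPtr */

/*
 * Return the recorded LSNs bracketing the target time.  Points are stored
 * oldest first.  If the target falls before the oldest recorded point, the
 * lower bound stays InvalidXLogRecPtr; if it falls after the newest point,
 * the upper bound stays InvalidXLogRecPtr and the caller closes the range
 * with the current insert LSN.  The time-for-LSN direction is symmetric,
 * with -infinity as the open lower bound.
 */
static void
lsn_bounds_for_time(const LSNTimePoint *points, int npoints, int64_t target,
                    uint64_t *lower, uint64_t *upper)
{
    *lower = INVALID_LSN;
    *upper = INVALID_LSN;

    for (int i = 0; i < npoints; i++)
    {
        if (points[i].time <= target)
            *lower = points[i].lsn;
        else
        {
            *upper = points[i].lsn;
            break;
        }
    }
}

int
main(void)
{
    LSNTimePoint stream[] = {{100, 1000}, {200, 2000}, {300, 3000}};
    uint64_t lower, upper;

    lsn_bounds_for_time(stream, 3, 250, &lower, &upper);
    printf("lsn range: [%llu, %llu]\n",
           (unsigned long long) lower, (unsigned long long) upper);
    return 0;
}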

On Thu, Aug 1, 2024 at 3:55 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:
>
> > On 1 Aug 2024, at 05:44, Melanie Plageman <melanieplageman@gmail.com> wrote:
> >
> > On Sat, Jul 6, 2024 at 1:36 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:
> >>
> >> PgStartLSN = GetXLogInsertRecPtr();
> >> Should this be kind of RecoveryEndPtr? How is it expected to behave on Standby in HA cluster, which was doing a crash recovery of 1y WALs in a day, then is in startup for a year as a Hot Standby, and then is promoted?
> >
> > So, I don't think we will allow use of the LSNTimeStream on a standby,
> > since it is unclear what it would mean on a standby. For example, do
> > you want to know the time the LSN was generated or the time it was
> > replayed? Note that bgwriter won't insert values to the time stream on
> > a standby (it explicitly checks).
>
> Yes, I mentioned Standby because PgStartLSN is not what it says it is.

Right, I've found another way of dealing with this since PgStartLSN
was incorrect.

> > But, you bring up an issue that I don't quite know what to do about.
> > If the standby doesn't have an LSNTimeStream, then when it is
> > promoted, LSN <-> time conversions of LSNs and times before the
> > promotion seem impossible. Maybe if the stats file is getting written
> > out at checkpoints, we could restore from that previous primary's file
> > after promotion?
>
> I’m afraid that clocks of a Primary from previous timeline might be not in sync with ours.
> It’s OK if it causes an error, we just need to be prepared for when they indicate values from the future. Perhaps by shifting their last point to our “PgStartLSN”.

Regarding a standby being promoted. I plan to make a version of the
LSNTimeStream functions which works on a standby by using
getRecordTimestamp() and inserts an LSN from the last record replayed
and the associated timestamp. That should mean the LSNTimeStream on
the standby is roughly the same as the one on the primary since those
records were inserted on the primary.

As for time going backwards in general, I've now made it so that
values are only inserted if the times are monotonically increasing and
the LSN is the same or increasing. This should handle time going
backwards, either on the primary itself or after a standby is promoted
if the timeline wasn't a perfect match.
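
The guard itself is tiny; roughly this (just a sketch, with illustrative names rather than the patch's):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct
{
    int64_t  time;   /* microseconds */
    uint64_t lsn;
} LSNTimePoint;

/*
 * Only append a new point if its time is strictly greater than the newest
 * recorded time and its LSN is the same or greater.  Anything else (clock
 * went backwards after a promotion, duplicate sample, etc.) is silently
 * skipped rather than corrupting the ordering of the stream.
 */
static bool
lsntime_insert_ok(const LSNTimePoint *newest, int64_t new_time, uint64_t new_lsn)
{
    if (newest == NULL)
        return true;    /* empty stream */
    return new_time > newest->time && new_lsn >= newest->lsn;
}

int
main(void)
{
    LSNTimePoint newest = {1000, 5000};

    printf("%d\n", lsntime_insert_ok(&newest, 1500, 5000));  /* 1: time advanced */
    printf("%d\n", lsntime_insert_ok(&newest, 900, 6000));   /* 0: clock went backwards */
    return 0;
}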

> > This brings up a second issue, which is that, currently, bgwriter
> > won't insert into the time stream when wal_level is minimal. I'm not
> > sure exactly how to resolve it because I am relying on the "last
> > important rec pointer" and the LOG_SNAPSHOT_INTERVAL_MS to throttle
> > when the bgwriter actually inserts new records into the LSNTimeStream.
> > I could come up with a different timeout interval for updating the
> > time stream. Perhaps I should do that?
>
> IDK. My knowledge of bgwriter is not enough to give a meaningful advise here.

See my note at top of the email.

> >> lsn_ts_calculate_error_area() is prone to overflow. Even int64 does not seem capable of accommodating LSN*time. And the function may return a negative result, despite claiming area as a result. It’s intended, but a little misleading.
> >
> > Ah, great point. I've fixed this.
>
> Well, not exactly. The result of lsn_ts_calculate_error_area() is still fabs()’ed upon usage. I’d recommend doing the fabs in the function.
> BTW lsn_ts_calculate_error_area() has no prototype.
>
> Also, I’m not a big fan of using IEEE 754 float in this function. This data type has 24 significand bits.
> Consider that the current timestamp has 50 binary digits. Let’s assume realistic LSNs have the same 50 bits.
> Then our rounding error is 2^76 byte*microseconds.
> Let’s assume we are interested to measure time on a scale of 1GB WAL records.
> This gives us rounding error of 2^46 microseconds = 2^26 seconds = 64 million seconds = 2 years.
> Seems like a gross error.
>
> If we use IEEE 754 doubles we have 53 significand bits. And the rounding error will be on a scale of 128 microseconds per GB, which is acceptable.
>
> So I think double is better than float here.
>
> Nitpicking, but I’d prefer to sum up (triangle2 + triangle3 + rectangle_part) before subtracting. This can save a bit of precision (smaller figures can have a lesser exponent).

Okay, thanks for the detail. See what you think about v7.

Some perf testing of bgwriter updates are still a todo. I was thinking
that it might be bad to take an exclusive lock on the WAL stats data
structure for the entire time I am inserting a new value to the
LSNTimeStream. I was thinking maybe I should take a share lock and
calculate which element to drop first and then take the exclusive
lock? Or maybe I should make a separate lock for just the stream
member of PgStat_WalStats. Maybe it isn't worth it? I'm not sure.

- Melanie

Attachment

Re: Add LSN <-> time conversion functionality

From
Robert Haas
Date:
Melanie,

As I mentioned to you off-list, I feel like this needs some sort of
recency bias. Certainly vacuum, and really almost any conceivable user
of this facility, is going to care more about accurate answers for new
data than for old data. If there's no recency bias, then I think that
eventually answers for more recent LSNs will start to become less
accurate, since they've got to share the data structure with more and
more time from long ago. I don't think you've done anything about this
in this version of the patch, but I might be wrong.

One way to make the standby more accurately mimic the primary would be
to base entries on the timestamp-LSN data that is already present in
the WAL, i.e. {COMMIT|ABORT} [PREPARED] records. If you only added or
updated entries on the primary when logging those records, the standby
could redo exactly what the primary did. A disadvantage of this
approach is that if there are no commits for a while then your mapping
gets out of date, but that might be something we could just tolerate.
Another possible solution is to log the changes you make on the
primary and have the standby replay those changes. Perhaps I'm wrong
to advocate for such solutions, but it feels error-prone to have one
algorithm for the primary and a different algorithm for the standby.
You now basically have two things that can break and you have to debug
what went wrong instead of just one.

In terms of testing this, I advocate not so much performance testing
as accuracy testing. So for example if you intentionally change the
LSN consumption rate during your test, e.g. high LSN consumption rate
for a while, then low for while, then high again for a while, and then
graph the contents of the final data structure, how well does the data
structure model what actually happened? Honestly, my whole concern
here is really around the lack of recency bias. If you simply took a
sample every N seconds until the buffer was full and then repeatedly
thinned the data by throwing away every other sample from the older
half of the buffer, then it would be self-evident that accuracy for
the older data was going to degrade over time, but also that accuracy
for new data wasn't going to degrade no matter how long you ran the
algorithm, simply because the newest half of the data never gets
thinned. But because you've chosen to throw away the point that leads
to the least additional error (on an imaginary request distribution
that is just as likely to care about very old things as it is to care
about new ones), there's nothing to keep the algorithm from getting
into a state where it systematically throws away new data points and
keeps old ones.

To be clear, I'm not saying the algorithm I just mooted is the right
one or that it has no weaknesses; for example, it needlessly throws
away precision that it doesn't have to lose when the rate of LSN
consumption is constant for a long time. I don't think that
necessarily matters because the algorithm doesn't need to be as
accurate as possible; it just needs to be accurate enough to get the
job done.
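
Just to make the mooted scheme concrete, a toy version might look like this (all names invented for the sketch; a real version would of course use XLogRecPtr/TimestampTz and live wherever the sampling happens):

#include <stdio.h>

#define NSAMPLES 16                    /* deliberately tiny for the example */

typedef struct
{
    long          time;                /* seconds, say */
    unsigned long lsn;
} Sample;

static Sample buf[NSAMPLES];
static int    nused = 0;

/*
 * When the buffer fills, drop every other sample from the older half and
 * slide the newer half down.  The newer half is never thinned, so accuracy
 * for recent LSNs cannot degrade no matter how long this runs; only the
 * old half gets progressively sparser.
 */
static void
thin_older_half(void)
{
    int half = NSAMPLES / 2;
    int keep = 0;

    for (int i = 0; i < half; i += 2)      /* keep samples 0, 2, 4, ... */
        buf[keep++] = buf[i];
    for (int i = half; i < NSAMPLES; i++)  /* newer half moves down intact */
        buf[keep++] = buf[i];
    nused = keep;
}

static void
add_sample(long time, unsigned long lsn)
{
    if (nused == NSAMPLES)
        thin_older_half();
    buf[nused].time = time;
    buf[nused].lsn = lsn;
    nused++;
}

int
main(void)
{
    for (long t = 0; t < 100; t++)
        add_sample(t, (unsigned long) t * 8192);
    for (int i = 0; i < nused; i++)
        printf("t=%ld lsn=%lu\n", buf[i].time, buf[i].lsn);
    return 0;
}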

-- 
Robert Haas
EDB: http://www.enterprisedb.com



Re: Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
On Wed, Aug 7, 2024 at 1:06 PM Robert Haas <robertmhaas@gmail.com> wrote:
>
> As I mentioned to you off-list, I feel like this needs some sort of
> recency bias. Certainly vacuum, and really almost any conceivable user
> of this facility, is going to care more about accurate answers for new
> data than for old data. If there's no recency bias, then I think that
> eventually answers for more recent LSNs will start to become less
> accurate, since they've got to share the data structure with more and
> more time from long ago. I don't think you've done anything about this
> in this version of the patch, but I might be wrong.

That makes sense. This version of the patch set doesn't have a recency
bias implementation. I plan to work on it but will need to do the
testing like you mentioned.

> One way to make the standby more accurately mimic the primary would be
> to base entries on the timestamp-LSN data that is already present in
> the WAL, i.e. {COMMIT|ABORT} [PREPARED] records. If you only added or
> updated entries on the primary when logging those records, the standby
> could redo exactly what the primary did. A disadvantage of this
> approach is that if there are no commits for a while then your mapping
> gets out of date, but that might be something we could just tolerate.
> Another possible solution is to log the changes you make on the
> primary and have the standby replay those changes. Perhaps I'm wrong
> to advocate for such solutions, but it feels error-prone to have one
> algorithm for the primary and a different algorithm for the standby.
> You now basically have two things that can break and you have to debug
> what went wrong instead of just one.

Your point about maintaining two different systems for creating the
time stream being error prone makes sense. Honestly logging the
contents of the LSNTimeStream seems like it will be the simplest to
maintain and understand. I was a bit apprehensive to WAL log one part
of a single stats structure (since the other stats aren't logged), but
I think explaining why that's done is easier than explaining separate
LSNTimeStream creation code for replicas.

- Melanie



Re: Add LSN <-> time conversion functionality

From
Tomas Vondra
Date:
On 8/7/24 21:39, Melanie Plageman wrote:
> On Wed, Aug 7, 2024 at 1:06 PM Robert Haas <robertmhaas@gmail.com> wrote:
>>
>> As I mentioned to you off-list, I feel like this needs some sort of
>> recency bias. Certainly vacuum, and really almost any conceivable user
>> of this facility, is going to care more about accurate answers for new
>> data than for old data. If there's no recency bias, then I think that
>> eventually answers for more recent LSNs will start to become less
>> accurate, since they've got to share the data structure with more and
>> more time from long ago. I don't think you've done anything about this
>> in this version of the patch, but I might be wrong.
> 
> That makes sense. This version of the patch set doesn't have a recency
> bias implementation. I plan to work on it but will need to do the
> testing like you mentioned.
> 

I agree that it's likely we probably want more accurate results for
recent data, so some recency bias makes sense - for example for the
eager vacuuming that's definitely true.

But this was initially presented as a somewhat universal LSN/timestamp
mapping, and in that case it might make sense to minimize the average
error - which I think is what lsntime_to_drop() currently does, by
calculating the "area" etc.

Maybe it'd be good to approach this from the opposite direction, say
what "accuracy guarantees" we want to provide, and then design the
structure / algorithm to ensure that. Otherwise we may end up with an
infinite discussion about algorithms with unclear idea which one is the
best choice.

And I'm sure "users" of the LSN/Timestamp mapping may get confused about
what to expect, without reasonably clear guarantees.

For example, it seems to me a "good" accuracy guarantee would be:

   Given a LSN, the age of the returned timestamp is less than 10% off
   the actual timestamp. The timestamp precision is in seconds.

This means that if LSN was written 100 seconds ago, it would be OK to
get an answer in the 90-110 seconds range. For LSN from 1h ago, the
acceptable range would be 3600s +/- 360s. And so on. The 10% is just
arbitrary, maybe it should be lower - doesn't matter much.

How could we do this? We have 1s precision, so we start with buckets for
each seconds. And we want to allow merging stuff nicely. The smallest
merges we could do is 1s -> 2s -> 4s -> 8s -> ... but let's say we do
1s -> 10s -> 100s -> 1000s instead.

So we start with 100x one-second buckets

[A_0, A_1, ..., A_99]  -> 100 x 1s buckets
[B_0, B_1, ..., B_99]  -> 100 x 10s buckets
[C_0, C_1, ..., C_99]  -> 100 x 100s buckets
[D_0, D_1, ..., D_99]  -> 100 x 1000s buckets

We start by adding data into A_k buckets. After filling all 100 of them,
we grab the oldest 10 buckets, and combine/move them into B_k. And so
on, until B gets full too. Then we grab the 10 oldest B_k entries,
and move them into C. and so on. For D the oldest entries would get
discarded, or we could add another layer with each bucket representing
10k seconds.
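
A minimal sketch of that cascade, just to illustrate the mechanics (plain C, not meant to match the patch's structures; each coarser bucket simply keeps the oldest point of the ten it absorbs):

#include <stdint.h>
#include <string.h>

#define NLEVELS    4      /* 1s, 10s, 100s, 1000s buckets */
#define PER_LEVEL  100
#define MERGE_STEP 10     /* oldest buckets absorbed per promotion */

typedef struct
{
    int64_t  time;        /* start of the bucket's range */
    uint64_t lsn;
} Bucket;

typedef struct
{
    Bucket buckets[NLEVELS][PER_LEVEL];   /* oldest first within a level */
    int    used[NLEVELS];
} LSNTimeBuckets;

/*
 * Insert into a level; when it is full, promote its oldest bucket to the
 * next (coarser) level as the representative of the MERGE_STEP oldest
 * buckets, drop those, and shift the rest down.  The coarsest level just
 * discards its oldest buckets.
 */
static void
level_insert(LSNTimeBuckets *s, int level, Bucket b)
{
    if (s->used[level] == PER_LEVEL)
    {
        if (level + 1 < NLEVELS)
            level_insert(s, level + 1, s->buckets[level][0]);

        memmove(&s->buckets[level][0], &s->buckets[level][MERGE_STEP],
                sizeof(Bucket) * (PER_LEVEL - MERGE_STEP));
        s->used[level] -= MERGE_STEP;
    }
    s->buckets[level][s->used[level]++] = b;
}

/* New observations always enter the finest level. */
static void
stream_insert(LSNTimeBuckets *s, int64_t time, uint64_t lsn)
{
    Bucket b = {time, lsn};

    level_insert(s, 0, b);
}

int
main(void)
{
    static LSNTimeBuckets s;

    for (int64_t t = 0; t < 100000; t++)
        stream_insert(&s, t * 1000000, (uint64_t) t * 4096);
    return 0;
}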

A-D is already enough to cover 30h, with A-E it'd be ~300h. Do we need
(or want) to keep a longer history?

These arrays are larger than what the current patch does, ofc. That has
64 x 16B entries, so 1kB. These arrays have ~6kB - but I'm pretty sure
it could be made more compact, by growing the buckets slower. With 10x
it's just simpler to think about, and also - 6kB seems pretty decent.

Note: I just realized the patch does LOG_STREAM_INTERVAL_MS = 30s, so
the 1s accuracy seems like overkill, and it could be much smaller.


>> One way to make the standby more accurately mimic the primary would be
>> to base entries on the timestamp-LSN data that is already present in
>> the WAL, i.e. {COMMIT|ABORT} [PREPARED] records. If you only added or
>> updated entries on the primary when logging those records, the standby
>> could redo exactly what the primary did. A disadvantage of this
>> approach is that if there are no commits for a while then your mapping
>> gets out of date, but that might be something we could just tolerate.
>> Another possible solution is to log the changes you make on the
>> primary and have the standby replay those changes. Perhaps I'm wrong
>> to advocate for such solutions, but it feels error-prone to have one
>> algorithm for the primary and a different algorithm for the standby.
>> You now basically have two things that can break and you have to debug
>> what went wrong instead of just one.
> 
> Your point about maintaining two different systems for creating the
> time stream being error prone makes sense. Honestly logging the
> contents of the LSNTimeStream seems like it will be the simplest to
> maintain and understand. I was a bit apprehensive to WAL log one part
> of a single stats structure (since the other stats aren't logged), but
> I think explaining why that's done is easier than explaining separate
> LSNTimeStream creation code for replicas.
> 

Isn't this a sign this does not quite fit into pgstats? Even if this
happens to deal with unsafe restarts, replica promotions and so on, what
if the user just does pg_stat_reset? That already causes trouble because
we simply forget deleted/updated/inserted tuples. If we also forget data
used for freezing heuristics, that does not seem great ...

Wouldn't it be better to write this into WAL as part of a checkpoint (or
something like that?), and make bgwriter to not only add LSN/timestamp
into the stream, but also write it into WAL. It's already waking up, on
idle systems ~32B written to WAL does not matter, and on busy system
it's just noise.


regards

-- 
Tomas Vondra



Re: Add LSN <-> time conversion functionality

From
Robert Haas
Date:
On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <tomas@vondra.me> wrote:
> How could we do this? We have 1s precision, so we start with buckets for
> each seconds. And we want to allow merging stuff nicely. The smallest
> merges we could do is 1s -> 2s -> 4s -> 8s -> ... but let's say we do
> 1s -> 10s -> 100s -> 1000s instead.
>
> So we start with 100x one-second buckets
>
> [A_0, A_1, ..., A_99]  -> 100 x 1s buckets
> [B_0, B_1, ..., B_99]  -> 100 x 10s buckets
> [C_0, C_1, ..., C_99]  -> 100 x 100s buckets
> [D_0, D_1, ..., D_99]  -> 100 x 1000s buckets
>
> We start by adding data into A_k buckets. After filling all 100 of them,
> we grab the oldest 10 buckets, and combine/move them into B_k. And so
> on, until B gets full too. Then we grab the 10 oldest B_k entries,
> and move them into C. and so on. For D the oldest entries would get
> discarded, or we could add another layer with each bucket representing
> 10k seconds.

Yeah, this kind of algorithm makes sense to me, although as you say
later, I don't think we need this amount of precision. I also think
you're right to point out that this provides certain guaranteed
behavior.

> A-D is already enough to cover 30h, with A-E it'd be ~300h. Do we need
> (or want) to keep a longer history?

I think there is a difference of opinion about this between Melanie
and me. I feel like we should be designing something that does the
exact job we need done for the freezing stuff, and if anyone else can
use it, that's a bonus. For that, I feel that 300h is more than
plenty. The goal of the freezing stuff, roughly speaking, is to answer
the question "will this be unfrozen real soon?". "Real soon" could
arguably mean a minute or an hour, but I don't think it makes sense
for it to be a week. If we're freezing data now that has a good chance
of being unfrozen again within 7 days, we should just freeze it
anyway. The cost of freezing isn't really all that high. If we keep
freezing pages that are going to be unfrozen again within seconds or
minutes, we pay those freezing costs enough times that they become
material, but I have difficulty imagining that it ever matters if we
re-freeze the same page every week. It's OK to be wrong as long as we
aren't wrong too often, and I think that being wrong once per page per
week isn't too often.

But I think Melanie was hoping to create something more general, which
on one level is understandable, but on the other hand it's unclear
what the goals are exactly. If we limit our scope to specifically
VACUUM, we can make reasonable guesses about how much precision we
need and for how long. But a hypothetical other client of this
infrastructure could need anything at all, which makes it very unclear
what the best design is, IMHO.

> Isn't this a sign this does not quite fit into pgstats? Even if this
> happens to deal with unsafe restarts, replica promotions and so on, what
> if the user just does pg_stat_reset? That already causes trouble because
> we simply forget deleted/updated/inserted tuples. If we also forget data
> used for freezing heuristics, that does not seem great ...

+1.

> Wouldn't it be better to write this into WAL as part of a checkpoint (or
> something like that?), and make bgwriter to not only add LSN/timestamp
> into the stream, but also write it into WAL. It's already waking up, on
> idle systems ~32B written to WAL does not matter, and on busy system
> it's just noise.

I am not really sure of the best place to put this data. I agree that
pgstat doesn't feel like quite the right place. But I'm not quite sure
that putting it into every checkpoint is the right idea either.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: Add LSN <-> time conversion functionality

From
Tomas Vondra
Date:
On 8/8/24 20:59, Robert Haas wrote:
> On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <tomas@vondra.me> wrote:
>> How could we do this? We have 1s precision, so we start with buckets for
>> each seconds. And we want to allow merging stuff nicely. The smallest
>> merges we could do is 1s -> 2s -> 4s -> 8s -> ... but let's say we do
>> 1s -> 10s -> 100s -> 1000s instead.
>>
>> So we start with 100x one-second buckets
>>
>> [A_0, A_1, ..., A_99]  -> 100 x 1s buckets
>> [B_0, B_1, ..., B_99]  -> 100 x 10s buckets
>> [C_0, C_1, ..., C_99]  -> 100 x 100s buckets
>> [D_0, D_1, ..., D_99]  -> 100 x 1000s buckets
>>
>> We start by adding data into A_k buckets. After filling all 100 of them,
>> we grab the oldest 10 buckets, and combine/move them into B_k. And so
>> on, until B gets full too. Then we grab the 10 oldest B_k entries,
>> and move them into C. and so on. For D the oldest entries would get
>> discarded, or we could add another layer with each bucket representing
>> 10k seconds.
> 
> Yeah, this kind of algorithm makes sense to me, although as you say
> later, I don't think we need this amount of precision. I also think
> you're right to point out that this provides certain guaranteed
> behavior.
> 
>> A-D is already enough to cover 30h, with A-E it'd be ~300h. Do we need
>> (or want) to keep a longer history?
> 
> I think there is a difference of opinion about this between Melanie
> and I. I feel like we should be designing something that does the
> exact job we need done for the freezing stuff, and if anyone else can
> use it, that's a bonus. For that, I feel that 300h is more than
> plenty. The goal of the freezing stuff, roughly speaking, is to answer
> the question "will this be unfrozen real soon?". "Real soon" could
> arguably mean a minute or an hour, but I don't think it makes sense
> for it to be a week. If we're freezing data now that has a good chance
> of being unfrozen again within 7 days, we should just freeze it
> anyway. The cost of freezing isn't really all that high. If we keep
> freezing pages that are going to be unfrozen again within seconds or
> minutes, we pay those freezing costs enough times that they become
> material, but I have difficulty imagining that it ever matters if we
> re-freeze the same page every week. It's OK to be wrong as long as we
> aren't wrong too often, and I think that being wrong once per page per
> week isn't too often.
> 
> But I think Melanie was hoping to create something more general, which
> on one level is understandable, but on the other hand it's unclear
> what the goals are exactly. If we limit our scope to specifically
> VACUUM, we can make reasonable guesses about how much precision we
> need and for how long. But a hypothetical other client of this
> infrastructure could need anything at all, which makes it very unclear
> what the best design is, IMHO.
> 

I don't have a strong opinion on this. I agree with you it's better to
have a good solution for the problem at hand than a poor solution for
hypothetical use cases. I don't have a clear idea what the other use
cases would be, which makes it hard to say what precision/history would
be necessary. But I also understand the wish to make it useful for a
wider set of use cases, when possible. I'd try to do the same thing.

But I think a clear description of the precision guarantees helps to
achieve that (even if the algorithm could be different).

If the only argument ends up being about how precise it needs to be and
how much history we need to cover, I think that's fine because that's
just a matter of setting a couple config parameters.

>> Isn't this a sign this does not quite fit into pgstats? Even if this
>> happens to deal with unsafe restarts, replica promotions and so on, what
>> if the user just does pg_stat_reset? That already causes trouble because
>> we simply forget deleted/updated/inserted tuples. If we also forget data
>> used for freezing heuristics, that does not seem great ...
> 
> +1.
> 
>> Wouldn't it be better to write this into WAL as part of a checkpoint (or
>> something like that?), and make bgwriter to not only add LSN/timestamp
>> into the stream, but also write it into WAL. It's already waking up, on
>> idle systems ~32B written to WAL does not matter, and on busy system
>> it's just noise.
> 
> I am not really sure of the best place to put this data. I agree that
> pgstat doesn't feel like quite the right place. But I'm not quite sure
> that putting it into every checkpoint is the right idea either.
> 

Is there a reason not to make this just another SLRU, just like we do
for commit_ts? I'm not saying it's perfect, but it's an approach we
already use to solve these issues.


regards

-- 
Tomas Vondra



Re: Add LSN <-> time conversion functionality

From
Robert Haas
Date:
On Thu, Aug 8, 2024 at 8:39 PM Tomas Vondra <tomas@vondra.me> wrote:
> Is there a reason not to make this just another SLRU, just like we do
> for commit_ts? I'm not saying it's perfect, but it's an approach we
> already use to solve these issues.

An SLRU is essentially an infinitely large array that grows at one end
and shrinks at the other -- but this is a fixed-size data structure.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <tomas@vondra.me> wrote:
>
> On 8/7/24 21:39, Melanie Plageman wrote:
> > On Wed, Aug 7, 2024 at 1:06 PM Robert Haas <robertmhaas@gmail.com> wrote:
> >>
> >> As I mentioned to you off-list, I feel like this needs some sort of
> >> recency bias. Certainly vacuum, and really almost any conceivable user
> >> of this facility, is going to care more about accurate answers for new
> >> data than for old data. If there's no recency bias, then I think that
> >> eventually answers for more recent LSNs will start to become less
> >> accurate, since they've got to share the data structure with more and
> >> more time from long ago. I don't think you've done anything about this
> >> in this version of the patch, but I might be wrong.
> >
> > That makes sense. This version of the patch set doesn't have a recency
> > bias implementation. I plan to work on it but will need to do the
> > testing like you mentioned.
> >
>
> I agree that it's likely we probably want more accurate results for
> recent data, so some recency bias makes sense - for example for the
> eager vacuuming that's definitely true.
>
> But this was initially presented as a somewhat universal LSN/timestamp
> mapping, and in that case it might make sense to minimize the average
> error - which I think is what lsntime_to_drop() currently does, by
> calculating the "area" etc.
>
> Maybe it'd be good to approach this from the opposite direction, say
> what "accuracy guarantees" we want to provide, and then design the
> structure / algorithm to ensure that. Otherwise we may end up with an
> infinite discussion about algorithms with unclear idea which one is the
> best choice.
>
> And I'm sure "users" of the LSN/Timestamp mapping may get confused about
> what to expect, without reasonably clear guarantees.
>
> For example, it seems to me a "good" accuracy guarantee would be:
>
>    Given a LSN, the age of the returned timestamp is less than 10% off
>    the actual timestamp. The timestamp precision is in seconds.
>
> This means that if LSN was written 100 seconds ago, it would be OK to
> get an answer in the 90-110 seconds range. For LSN from 1h ago, the
> acceptable range would be 3600s +/- 360s. And so on. The 10% is just
> arbitrary, maybe it should be lower - doesn't matter much.

I changed this patch a bit to only provide ranges with an upper and
lower bound from the SQL callable functions. While the size of the
range provided could be part of our "accuracy guarantee", I'm not sure
if we have to provide that.

> How could we do this? We have 1s precision, so we start with buckets for
> each seconds. And we want to allow merging stuff nicely. The smallest
> merges we could do is 1s -> 2s -> 4s -> 8s -> ... but let's say we do
> 1s -> 10s -> 100s -> 1000s instead.
>
> So we start with 100x one-second buckets
>
> [A_0, A_1, ..., A_99]  -> 100 x 1s buckets
> [B_0, B_1, ..., B_99]  -> 100 x 10s buckets
> [C_0, C_1, ..., C_99]  -> 100 x 100s buckets
> [D_0, D_1, ..., D_99]  -> 100 x 1000s buckets
>
> We start by adding data into A_k buckets. After filling all 100 of them,
> we grab the oldest 10 buckets, and combine/move them into B_k. And so
> on, until B gets full too. Then we grab the 10 oldest B_k entries,
> and move them into C. and so on. For D the oldest entries would get
> discarded, or we could add another layer with each bucket representing
> 10k seconds.

I originally had an algorithm that stored old values somewhat like
this (each element stored 2x logical members of the preceding
element). When I was testing algorithms, I abandoned this method
because it was less accurate than the method which calculates the
interpolation error "area". But, this would be expected -- it would be
less accurate for older values.

I'm currently considering an algorithm that uses a combination of the
interpolation error and the age of the point. I'm thinking of adding
to or dividing the error of each point by "now - that point's time (or
lsn)". This would lead me to be more likely to drop points that are
older.

This is a bit different than "combining" buckets, but it seems like it
might allow us to drop unneeded recent points when they are very
regular.
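
Roughly, the victim selection I'm imagining looks like this (a sketch only; drop_error[] stands for the interpolation-error "area" already computed for each interior point, and the names are made up):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Pick the point whose removal is "cheapest" once its interpolation error
 * is divided by its age.  Old points then look cheap to drop even when
 * removing them adds a fair amount of error, while recent points survive
 * unless they are nearly redundant.  Endpoints are never dropped since
 * they bound the interpolation range.
 */
static int
choose_drop_victim(const double *drop_error, const int64_t *point_time,
                   int npoints, int64_t now)
{
    int    victim = -1;
    double best = INFINITY;

    for (int i = 1; i < npoints - 1; i++)
    {
        double age = (double) (now - point_time[i]) + 1.0;   /* avoid /0 */
        double weighted = drop_error[i] / age;

        if (weighted < best)
        {
            best = weighted;
            victim = i;
        }
    }
    return victim;
}

int
main(void)
{
    double  err[] = {0.0, 5.0, 5.0, 0.0};
    int64_t t[]   = {0, 100, 900, 1000};

    /* equal raw error, but the older point (index 1) is chosen */
    printf("victim = %d\n", choose_drop_victim(err, t, 4, 1000));
    return 0;
}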

> Isn't this a sign this does not quite fit into pgstats? Even if this
> happens to deal with unsafe restarts, replica promotions and so on, what
> if the user just does pg_stat_reset? That already causes trouble because
> we simply forget deleted/updated/inserted tuples. If we also forget data
> used for freezing heuristics, that does not seem great ...
>
> Wouldn't it be better to write this into WAL as part of a checkpoint (or
> something like that?), and make bgwriter to not only add LSN/timestamp
> into the stream, but also write it into WAL. It's already waking up, on
> idle systems ~32B written to WAL does not matter, and on busy system
> it's just noise.

I was imagining adding a new type of WAL record that contains just the
LSN and time and writing it out in bgwriter. Is that not what you are
thinking?

- Melanie



Re: Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
On Thu, Aug 8, 2024 at 3:00 PM Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <tomas@vondra.me> wrote:
> > A-D is already enough to cover 30h, with A-E it'd be ~300h. Do we need
> > (or want) to keep a longer history?
>
> I think there is a difference of opinion about this between Melanie
> and me. I feel like we should be designing something that does the
> exact job we need done for the freezing stuff, and if anyone else can
> use it, that's a bonus. For that, I feel that 300h is more than
> plenty. The goal of the freezing stuff, roughly speaking, is to answer
> the question "will this be unfrozen real soon?". "Real soon" could
> arguably mean a minute or an hour, but I don't think it makes sense
> for it to be a week. If we're freezing data now that has a good chance
> of being unfrozen again within 7 days, we should just freeze it
> anyway. The cost of freezing isn't really all that high. If we keep
> freezing pages that are going to be unfrozen again within seconds or
> minutes, we pay those freezing costs enough times that they become
> material, but I have difficulty imagining that it ever matters if we
> re-freeze the same page every week. It's OK to be wrong as long as we
> aren't wrong too often, and I think that being wrong once per page per
> week isn't too often.
>
> But I think Melanie was hoping to create something more general, which
> on one level is understandable, but on the other hand it's unclear
> what the goals are exactly. If we limit our scope to specifically
> VACUUM, we can make reasonable guesses about how much precision we
> need and for how long. But a hypothetical other client of this
> infrastructure could need anything at all, which makes it very unclear
> what the best design is, IMHO.

I'm fine with creating something that is optimized for use with
freezing. I proposed this LSNTimeStream patch as a separate project
because 1) Andres suggested it would be useful for other things 2) it
would make the adaptive freezing project smaller if this goes in
first. The adaptive freezing has two different fuzzy bits (this
LSNTimeStream and then the accumulator which is used to determine if a
page is older than most  pages which were unfrozen too soon). I was
hoping to find an independent use for one of the fuzzy bits to move it
forward.

But, I do think we should optimize the data thinning strategy for
vacuum's adaptive freezing.

- Melanie



Re: Add LSN <-> time conversion functionality

From
Tomas Vondra
Date:
On 8/9/24 03:29, Melanie Plageman wrote:
> On Thu, Aug 8, 2024 at 3:00 PM Robert Haas <robertmhaas@gmail.com> wrote:
>>
>> On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <tomas@vondra.me> wrote:
>>> A-D is already enough to cover 30h, with A-E it'd be ~300h. Do we need
>>> (or want) to keep a longer history?
>>
>> I think there is a difference of opinion about this between Melanie
>> and me. I feel like we should be designing something that does the
>> exact job we need done for the freezing stuff, and if anyone else can
>> use it, that's a bonus. For that, I feel that 300h is more than
>> plenty. The goal of the freezing stuff, roughly speaking, is to answer
>> the question "will this be unfrozen real soon?". "Real soon" could
>> arguably mean a minute or an hour, but I don't think it makes sense
>> for it to be a week. If we're freezing data now that has a good chance
>> of being unfrozen again within 7 days, we should just freeze it
>> anyway. The cost of freezing isn't really all that high. If we keep
>> freezing pages that are going to be unfrozen again within seconds or
>> minutes, we pay those freezing costs enough times that they become
>> material, but I have difficulty imagining that it ever matters if we
>> re-freeze the same page every week. It's OK to be wrong as long as we
>> aren't wrong too often, and I think that being wrong once per page per
>> week isn't too often.
>>
>> But I think Melanie was hoping to create something more general, which
>> on one level is understandable, but on the other hand it's unclear
>> what the goals are exactly. If we limit our scope to specifically
>> VACUUM, we can make reasonable guesses about how much precision we
>> need and for how long. But a hypothetical other client of this
>> infrastructure could need anything at all, which makes it very unclear
>> what the best design is, IMHO.
> 
> I'm fine with creating something that is optimized for use with
> freezing. I proposed this LSNTimeStream patch as a separate project
> because 1) Andres suggested it would be useful for other things 2) it
> would make the adaptive freezing project smaller if this goes in
> first. The adaptive freezing has two different fuzzy bits (this
> LSNTimeStream and then the accumulator which is used to determine if a
> page is older than most  pages which were unfrozen too soon). I was
> hoping to find an independent use for one of the fuzzy bits to move it
> forward.
> 
> But, I do think we should optimize the data thinning strategy for
> vacuum's adaptive freezing.
> 

+1 to this

IMHO if Andres thinks this would be useful for something else, it'd be
nice if he could explain what the other use cases are. Otherwise it's
not clear how to make it work for them.

The one other use case I can think of is monitoring - being able to look
at WAL throughput over time. That seems OK, but it can also accept very
low resolution in the distant past.

FWIW it still makes sense to do this as a separate patch, before the
main "freezing" one.


regards

-- 
Tomas Vondra



Re: Add LSN <-> time conversion functionality

From
Tomas Vondra
Date:

On 8/9/24 03:02, Melanie Plageman wrote:
> On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <tomas@vondra.me> wrote:
>>
>> On 8/7/24 21:39, Melanie Plageman wrote:
>>> On Wed, Aug 7, 2024 at 1:06 PM Robert Haas <robertmhaas@gmail.com> wrote:
>>>>
>>>> As I mentioned to you off-list, I feel like this needs some sort of
>>>> recency bias. Certainly vacuum, and really almost any conceivable user
>>>> of this facility, is going to care more about accurate answers for new
>>>> data than for old data. If there's no recency bias, then I think that
>>>> eventually answers for more recent LSNs will start to become less
>>>> accurate, since they've got to share the data structure with more and
>>>> more time from long ago. I don't think you've done anything about this
>>>> in this version of the patch, but I might be wrong.
>>>
>>> That makes sense. This version of the patch set doesn't have a recency
>>> bias implementation. I plan to work on it but will need to do the
>>> testing like you mentioned.
>>>
>>
>> I agree that it's likely we probably want more accurate results for
>> recent data, so some recency bias makes sense - for example for the
>> eager vacuuming that's definitely true.
>>
>> But this was initially presented as a somewhat universal LSN/timestamp
>> mapping, and in that case it might make sense to minimize the average
>> error - which I think is what lsntime_to_drop() currently does, by
>> calculating the "area" etc.
>>
>> Maybe it'd be good to approach this from the opposite direction, say
>> what "accuracy guarantees" we want to provide, and then design the
>> structure / algorithm to ensure that. Otherwise we may end up with an
>> infinite discussion about algorithms with unclear idea which one is the
>> best choice.
>>
>> And I'm sure "users" of the LSN/Timestamp mapping may get confused about
>> what to expect, without reasonably clear guarantees.
>>
>> For example, it seems to me a "good" accuracy guarantee would be:
>>
>>    Given a LSN, the age of the returned timestamp is less than 10% off
>>    the actual timestamp. The timestamp precision is in seconds.
>>
>> This means that if LSN was written 100 seconds ago, it would be OK to
>> get an answer in the 90-110 seconds range. For LSN from 1h ago, the
>> acceptable range would be 3600s +/- 360s. And so on. The 10% is just
>> arbitrary, maybe it should be lower - doesn't matter much.
> 
> I changed this patch a bit to only provide ranges with an upper and
> lower bound from the SQL callable functions. While the size of the
> range provided could be part of our "accuracy guarantee", I'm not sure
> if we have to provide that.
> 

I wouldn't object to providing the timestamp range, along with the
estimate. That seems potentially quite useful for other use cases - it
provides a very clear guarantee.

The thing that concerns me a bit is that maybe it's an implementation
detail. I mean, we might choose to rework the structure in a way that
does not track the ranges like this ... Doesn't seem likely, though.

>> How could we do this? We have 1s precision, so we start with buckets for
>> each seconds. And we want to allow merging stuff nicely. The smallest
>> merges we could do is 1s -> 2s -> 4s -> 8s -> ... but let's say we do
>> 1s -> 10s -> 100s -> 1000s instead.
>>
>> So we start with 100x one-second buckets
>>
>> [A_0, A_1, ..., A_99]  -> 100 x 1s buckets
>> [B_0, B_1, ..., B_99]  -> 100 x 10s buckets
>> [C_0, C_1, ..., C_99]  -> 100 x 100s buckets
>> [D_0, D_1, ..., D_99]  -> 100 x 1000s buckets
>>
>> We start by adding data into A_k buckets. After filling all 100 of them,
>> we grab the oldest 10 buckets, and combine/move them into B_k. And so
>> on, until B gets full too. Then we grab the 10 oldest B_k entries,
>> and move them into C. and so on. For D the oldest entries would get
>> discarded, or we could add another layer with each bucket representing
>> 10k seconds.
> 
> I originally had an algorithm that stored old values somewhat like
> this (each element stored 2x logical members of the preceding
> element). When I was testing algorithms, I abandoned this method
> because it was less accurate than the method which calculates the
> interpolation error "area". But, this would be expected -- it would be
> less accurate for older values.
> 
> I'm currently considering an algorithm that uses a combination of the
> interpolation error and the age of the point. I'm thinking of adding
> to or dividing the error of each point by "now - that point's time (or
> lsn)". This would lead me to be more likely to drop points that are
> older.
> 
> This is a bit different than "combining" buckets, but it seems like it
> might allow us to drop unneeded recent points when they are very
> regular.
> 

TBH I'm a bit lost in how the various patch versions merge the data.
Maybe there is a perfect algorithm, keeping a perfectly accurate
approximation in the smallest space, but does that really matter? If we
needed to keep many instances / very long history, maybe it would matter.

But we need one instance, and we seem to agree it's enough to have a
couple days of history at most. And even the super wasteful struct I
described above would only need ~8kB for that.

I suggest we do the simplest and most obvious algorithm possible, at
least for now. Focusing on this part seems like a distraction from the
freezing thing you actually want to do.

>> Isn't this a sign this does not quite fit into pgstats? Even if this
>> happens to deal with unsafe restarts, replica promotions and so on, what
>> if the user just does pg_stat_reset? That already causes trouble because
>> we simply forget deleted/updated/inserted tuples. If we also forget data
>> used for freezing heuristics, that does not seem great ...
>>
>> Wouldn't it be better to write this into WAL as part of a checkpoint (or
>> something like that?), and make bgwriter to not only add LSN/timestamp
>> into the stream, but also write it into WAL. It's already waking up, on
>> idle systems ~32B written to WAL does not matter, and on busy system
>> it's just noise.
> 
> I was imagining adding a new type of WAL record that contains just the
> LSN and time and writing it out in bgwriter. Is that not what you are
> thinking?
> 

Now sure, I was thinking we would do two things:

1) bgwriter writes the (LSN,timestamp) into WAL, and also updates the
   in-memory struct

2) during checkpoint we flush the in-memory struct to disk, so that we
   have it after restart / crash

I haven't thought about this very much, but I think this would address
both the crash/recovery/restart on the primary, and on replicas.


regards

-- 
Tomas Vondra



Re: Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
On Thu, Aug 8, 2024 at 9:02 PM Melanie Plageman
<melanieplageman@gmail.com> wrote:
>
> On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <tomas@vondra.me> wrote:
> >
> > Maybe it'd be good to approach this from the opposite direction, say
> > what "accuracy guarantees" we want to provide, and then design the
> > structure / algorithm to ensure that. Otherwise we may end up with an
> > infinite discussion about algorithms with unclear idea which one is the
> > best choice.
> >
> > And I'm sure "users" of the LSN/Timestamp mapping may get confused about
> > what to expect, without reasonably clear guarantees.
> >
> > For example, it seems to me a "good" accuracy guarantee would be:
> >
> >    Given a LSN, the age of the returned timestamp is less than 10% off
> >    the actual timestamp. The timestamp precision is in seconds.
> >
> > This means that if LSN was written 100 seconds ago, it would be OK to
> > get an answer in the 90-110 seconds range. For LSN from 1h ago, the
> > acceptable range would be 3600s +/- 360s. And so on. The 10% is just
> > arbitrary, maybe it should be lower - doesn't matter much.
>
> I changed this patch a bit to only provide ranges with an upper and
> lower bound from the SQL callable functions. While the size of the
> range provided could be part of our "accuracy guarantee", I'm not sure
> if we have to provide that.

Okay, so as I think about evaluating a few new algorithms, I realize
that we do need some criteria. I started listing out what I
feel is "reasonable" accuracy and plotting it to see if the
relationship is linear/exponential/etc. I think it would help to get
input on what would be "reasonable" accuracy.

I thought that the following might be acceptable:
The first column is how old the value I am looking for actually is;
the second column is how far off (+/-) I am willing to have the
algorithm tell me it is:

1 second, 1 minute
1 minute, 10 minute
1 hour, 1 hour
1 day, 6 hours
1 week, 12 hours
1 month, 1 day
6 months, 1 week

Column 1 over column 2 produces a line like in the attached pic. I'd
be interested in others' opinions of error tolerance.

- Melanie

Attachment

Re: Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
On Fri, Aug 9, 2024 at 9:09 AM Tomas Vondra <tomas@vondra.me> wrote:
>
>
>
> On 8/9/24 03:02, Melanie Plageman wrote:
> > On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <tomas@vondra.me> wrote:
> >> each seconds. And we want to allow merging stuff nicely. The smallest
> >> merges we could do is 1s -> 2s -> 4s -> 8s -> ... but let's say we do
> >> 1s -> 10s -> 100s -> 1000s instead.
> >>
> >> So we start with 100x one-second buckets
> >>
> >> [A_0, A_1, ..., A_99]  -> 100 x 1s buckets
> >> [B_0, B_1, ..., B_99]  -> 100 x 10s buckets
> >> [C_0, C_1, ..., C_99]  -> 100 x 100s buckets
> >> [D_0, D_1, ..., D_99]  -> 100 x 1000s buckets
> >>
> >> We start by adding data into A_k buckets. After filling all 100 of them,
> >> we grab the oldest 10 buckets, and combine/move them into B_k. And so
> >> on, until B gets full too. Then we grab the 10 oldest B_k entries,
> >> and move them into C. and so on. For D the oldest entries would get
> >> discarded, or we could add another layer with each bucket representing
> >> 10k seconds.
> >
> > I originally had an algorithm that stored old values somewhat like
> > this (each element stored 2x logical members of the preceding
> > element). When I was testing algorithms, I abandoned this method
> > because it was less accurate than the method which calculates the
> > interpolation error "area". But, this would be expected -- it would be
> > less accurate for older values.
> >
> > I'm currently considering an algorithm that uses a combination of the
> > interpolation error and the age of the point. I'm thinking of adding
> > to or dividing the error of each point by "now - that point's time (or
> > lsn)". This would lead me to be more likely to drop points that are
> > older.
> >
> > This is a bit different than "combining" buckets, but it seems like it
> > might allow us to drop unneeded recent points when they are very
> > regular.
> >
>
> TBH I'm a bit lost in how the various patch versions merge the data.
> Maybe there is a perfect algorithm, keeping a perfectly accurate
> approximation in the smallest space, but does that really matter? If we
> needed to keep many instances / very long history, maybe it'd matter.
>
> But we need one instance, and we seem to agree it's enough to have a
> couple days of history at most. And even the super wasteful struct I
> described above would only need ~8kB for that.
>
> I suggest we do the simplest and most obvious algorithm possible, at
> least for now. Focusing on this part seems like a distraction from the
> freezing thing you actually want to do.

The simplest thing to do would be to pick an arbitrary point in the
past (say one week) and then throw out all the points (except the very
oldest to avoid extrapolation) from before that cliff. I would like to
spend time on getting a new version of the freezing patch on the list,
but I think Robert had strong feelings about having a complete design
first. I'll switch focus to that for a bit so that perhaps you all can
see how I am using the time -> LSN conversion and that could inform
the design of the data structure.
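
As a standalone sketch (invented names, not from any patch version),
the pruning I described above could be as simple as:

/*
 * Drop all stored points from before a cutoff ("cliff"), but keep the
 * very oldest point so that lookups before the cutoff interpolate
 * rather than extrapolate.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct
{
    int64_t     time;
    uint64_t    lsn;
} LSNTimePoint;

/* points[] is sorted oldest first; returns the new number of points */
static int
prune_before_cliff(LSNTimePoint *points, int npoints, int64_t cliff_time)
{
    int         nkept = 0;

    for (int i = 0; i < npoints; i++)
    {
        /* keep the very oldest point plus anything at or after the cliff */
        if (i == 0 || points[i].time >= cliff_time)
            points[nkept++] = points[i];
    }
    return nkept;
}

int
main(void)
{
    LSNTimePoint points[] = {
        {100, 10}, {200, 20}, {300, 30}, {400, 40}, {500, 50}
    };
    int         n = prune_before_cliff(points, 5, 350);

    /* keeps {100,10}, {400,40}, {500,50} */
    for (int i = 0; i < n; i++)
        printf("time=%lld lsn=%llu\n",
               (long long) points[i].time,
               (unsigned long long) points[i].lsn);
    return 0;
}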

- Melanie



Re: Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
On Fri, Aug 9, 2024 at 9:15 AM Melanie Plageman
<melanieplageman@gmail.com> wrote:
>
> On Fri, Aug 9, 2024 at 9:09 AM Tomas Vondra <tomas@vondra.me> wrote:
> >
> > I suggest we do the simplest and most obvious algorithm possible, at
> > least for now. Focusing on this part seems like a distraction from the
> > freezing thing you actually want to do.
>
> The simplest thing to do would be to pick an arbitrary point in the
> past (say one week) and then throw out all the points (except the very
> oldest to avoid extrapolation) from before that cliff. I would like to
> spend time on getting a new version of the freezing patch on the list,
> but I think Robert had strong feelings about having a complete design
> first. I'll switch focus to that for a bit so that perhaps you all can
> see how I am using the time -> LSN conversion and that could inform
> the design of the data structure.

I realize this thought didn't make much sense since it is a fixed size
data structure. We would have to use some other algorithm to get rid
of data if there are still too many points from within the last week.

In the adaptive freezing code, I use the time stream to answer a yes
or no question. I translate a time in the past (now -
target_freeze_duration) to an LSN so that I can determine if a page
that is being modified for the first time after having been frozen has
been modified sooner than target_freeze_duration (a GUC value). If it
is, that page was unfrozen too soon. So, my use case is to produce a
yes or no answer. It doesn't matter very much how accurate I am if I
am wrong. I count the page as having been unfrozen too soon or I
don't. So, it seems I care about the accuracy of data from now until
now  - target_freeze_duration + margin of error a lot and data before
that not at all. While it is true that if I'm wrong about a page that
was older but near the cutoff, that might be better than being wrong
about a very recent page, it is still wrong.
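
To make that concrete, here is a standalone sketch of the kind of check
I mean. The names (LSNTimePoint, estimate_lsn_at_time, frozen_page_lsn,
and so on) are invented for illustration and are not the patch's actual
API; it just translates (now - target_freeze_duration) to a cutoff LSN
by linear interpolation over stored (time, LSN) points and compares the
LSN the page had when it was frozen against that cutoff:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct
{
    int64_t     time;           /* seconds since some epoch */
    uint64_t    lsn;
} LSNTimePoint;

/*
 * Estimate the LSN corresponding to target_time by linear interpolation
 * between the two stored points that bracket it.  points[] is sorted by
 * time, oldest first.
 */
static uint64_t
estimate_lsn_at_time(const LSNTimePoint *points, int npoints,
                     int64_t target_time)
{
    if (target_time <= points[0].time)
        return points[0].lsn;
    if (target_time >= points[npoints - 1].time)
        return points[npoints - 1].lsn;

    for (int i = 1; i < npoints; i++)
    {
        if (target_time <= points[i].time)
        {
            const LSNTimePoint *lo = &points[i - 1];
            const LSNTimePoint *hi = &points[i];
            double      frac = (double) (target_time - lo->time) /
                               (double) (hi->time - lo->time);

            return lo->lsn + (uint64_t) (frac * (double) (hi->lsn - lo->lsn));
        }
    }
    return points[npoints - 1].lsn;     /* not reached */
}

int
main(void)
{
    /* a toy stream of (time, lsn) samples, oldest first */
    LSNTimePoint stream[] = {
        {1000, 10000}, {2000, 25000}, {3000, 60000}, {4000, 90000}
    };
    int64_t     now = 4000;
    int64_t     target_freeze_duration = 1500;  /* pretend GUC, seconds */
    uint64_t    frozen_page_lsn = 70000;    /* page LSN when it was frozen */

    /* translate (now - target_freeze_duration) into a cutoff LSN */
    uint64_t    cutoff_lsn =
        estimate_lsn_at_time(stream, 4, now - target_freeze_duration);

    /* frozen more recently than the cutoff => it was unfrozen too soon */
    bool        too_soon = frozen_page_lsn > cutoff_lsn;

    printf("cutoff lsn = %llu; page was %sunfrozen too soon\n",
           (unsigned long long) cutoff_lsn, too_soon ? "" : "not ");
    return 0;
}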

- Melanie



Re: Add LSN <-> time conversion functionality

From
Tomas Vondra
Date:
On 8/9/24 17:48, Melanie Plageman wrote:
> On Fri, Aug 9, 2024 at 9:15 AM Melanie Plageman
> <melanieplageman@gmail.com> wrote:
>>
>> On Fri, Aug 9, 2024 at 9:09 AM Tomas Vondra <tomas@vondra.me> wrote:
>>>
>>> I suggest we do the simplest and most obvious algorithm possible, at
>>> least for now. Focusing on this part seems like a distraction from the
>>> freezing thing you actually want to do.
>>
>> The simplest thing to do would be to pick an arbitrary point in the
>> past (say one week) and then throw out all the points (except the very
>> oldest to avoid extrapolation) from before that cliff. I would like to
>> spend time on getting a new version of the freezing patch on the list,
>> but I think Robert had strong feelings about having a complete design
>> first. I'll switch focus to that for a bit so that perhaps you all can
>> see how I am using the time -> LSN conversion and that could inform
>> the design of the data structure.
> 
> I realize this thought didn't make much sense since it is a fixed size
> data structure. We would have to use some other algorithm to get rid
> of data if there are still too many points from within the last week.
> 

Not sure I understand. Why would the fixed size of the struct mean we
can't discard too old data?

I'd imagine we simply reclaim some of the slots and mark them as unused,
"move" the data to make space for recent data, or something like that.
Or just use something like a cyclic buffer, that wraps around and
overwrites oldest data.
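
Something like this trivial standalone sketch is what I have in mind by
a cyclic buffer (made-up names, certainly not meant as the actual
implementation):

/*
 * Fixed-size cyclic buffer of (time, lsn) samples that wraps around and
 * overwrites the oldest entry.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RING_SIZE 8             /* tiny, just for the example */

typedef struct
{
    int64_t     time[RING_SIZE];
    uint64_t    lsn[RING_SIZE];
    int         next;           /* slot to overwrite next */
    int         count;          /* number of valid entries */
} LSNTimeRing;

static void
ring_insert(LSNTimeRing *r, int64_t time, uint64_t lsn)
{
    r->time[r->next] = time;
    r->lsn[r->next] = lsn;
    r->next = (r->next + 1) % RING_SIZE;
    if (r->count < RING_SIZE)
        r->count++;
}

int
main(void)
{
    LSNTimeRing r;

    memset(&r, 0, sizeof(r));

    /* insert more samples than the ring can hold */
    for (int64_t t = 0; t < 20; t++)
        ring_insert(&r, t, (uint64_t) t * 100);

    /* print the surviving entries, oldest first */
    int         start = (r.count == RING_SIZE) ? r.next : 0;

    for (int i = 0; i < r.count; i++)
    {
        int         idx = (start + i) % RING_SIZE;

        printf("time=%lld lsn=%llu\n",
               (long long) r.time[idx], (unsigned long long) r.lsn[idx]);
    }
    return 0;
}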

> In the adaptive freezing code, I use the time stream to answer a yes
> or no question. I translate a time in the past (now -
> target_freeze_duration) to an LSN so that I can determine if a page
> that is being modified for the first time after having been frozen has
> been modified sooner than target_freeze_duration (a GUC value). If it
> is, that page was unfrozen too soon. So, my use case is to produce a
> yes or no answer. It doesn't matter very much how accurate I am if I
> am wrong. I count the page as having been unfrozen too soon or I
> don't. So, it seems I care about the accuracy of data from now until
> now  - target_freeze_duration + margin of error a lot and data before
> that not at all. While it is true that if I'm wrong about a page that
> was older but near the cutoff, that might be better than being wrong
> about a very recent page, it is still wrong.
> 

Yeah. But isn't that a bit backwards? The decision can be wrong because
the estimate was too off, or maybe it was spot on and we still made a
wrong decision. That's what happens with heuristics.

I think a natural expectation is that the quality of the answers
correlates with the accuracy of the data / estimates. With accurate
results (say we keep a perfect history, with no loss of precision for
older data) we should be doing the right decision most of the time. If
not, it's a lost cause, IMHO. And with lower accuracy it'd get worse,
otherwise why would we need the detailed data.

But now that I think about it, I'm not entirely sure I understand what
point you are making :-(


regards

-- 
Tomas Vondra



Re: Add LSN <-> time conversion functionality

From
Tomas Vondra
Date:
On 8/9/24 15:09, Melanie Plageman wrote:
>
> ...
> 
> Okay, so as I think about evaluating a few new algorithms, I realize
> that we do need some sort of criteria. I started listing out what I
> feel is "reasonable" accuracy and plotting it to see if the
> relationship is linear/exponential/etc. I think it would help to get
> input on what would be "reasonable" accuracy.
> 
> I thought that the following might be acceptable:
> The first column is how old the value I am looking for actually is;
> the second column is how far off (+/-) I am willing to have the
> algorithm tell me it is:
> 
> 1 second, 1 minute
> 1 minute, 10 minutes
> 1 hour, 1 hour
> 1 day, 6 hours
> 1 week, 12 hours
> 1 month, 1 day
> 6 months, 1 week
> 

I think the question is whether we want to make this useful for other
places and/or people, or if it's fine to tailor this specifically for
the freezing patch.

If the latter (specific to the freezing patch), I don't see why it would
matter what we think - either it works for the patch, or not.

But if we want to make it more widely useful, I find it a bit strange
the relative accuracy *increases* for older data. I mean, we start with
relative error 6000% (60s/1s) and then we get to relative error ~4%
(1w/24w). Isn't that a bit against the earlier discussion on needing
better accuracy for recent data? Sure, the absolute accuracy is still
better (1m <<< 1w). And if this is good enough for the freezing ...


> Column 1 over column 2 produces a line like in the attached pic. I'd
> be interested in others' opinions of error tolerance.
> 
> - Melanie

I don't understand what the axes on the chart are :-( Does "A over B"
mean A is x-axis or y-axis?

-- 
Tomas Vondra



Re: Add LSN <-> time conversion functionality

From
Melanie Plageman
Date:
On Fri, Aug 9, 2024 at 1:03 PM Tomas Vondra <tomas@vondra.me> wrote:
>
> On 8/9/24 17:48, Melanie Plageman wrote:
> > On Fri, Aug 9, 2024 at 9:15 AM Melanie Plageman
> > <melanieplageman@gmail.com> wrote:
> >>
> >> On Fri, Aug 9, 2024 at 9:09 AM Tomas Vondra <tomas@vondra.me> wrote:
> >>>
> >>> I suggest we do the simplest and most obvious algorithm possible, at
> >>> least for now. Focusing on this part seems like a distraction from the
> >>> freezing thing you actually want to do.
> >>
> >> The simplest thing to do would be to pick an arbitrary point in the
> >> past (say one week) and then throw out all the points (except the very
> >> oldest to avoid extrapolation) from before that cliff. I would like to
> >> spend time on getting a new version of the freezing patch on the list,
> >> but I think Robert had strong feelings about having a complete design
> >> first. I'll switch focus to that for a bit so that perhaps you all can
> >> see how I am using the time -> LSN conversion and that could inform
> >> the design of the data structure.
> >
> > I realize this thought didn't make much sense since it is a fixed size
> > data structure. We would have to use some other algorithm to get rid
> > of data if there are still too many points from within the last week.
> >
>
> Not sure I understand. Why would the fixed size of the struct mean we
> can't discard too old data?

Oh, we can discard old data. I was just saying that all of the data
might be newer than the cutoff, in which case we can't only discard
old data if we want to make room for new data.

> > In the adaptive freezing code, I use the time stream to answer a yes
> > or no question. I translate a time in the past (now -
> > target_freeze_duration) to an LSN so that I can determine if a page
> > that is being modified for the first time after having been frozen has
> > been modified sooner than target_freeze_duration (a GUC value). If it
> > is, that page was unfrozen too soon. So, my use case is to produce a
> > yes or no answer. It doesn't matter very much how accurate I am if I
> > am wrong. I count the page as having been unfrozen too soon or I
> > don't. So, it seems I care about the accuracy of data from now until
> > now  - target_freeze_duration + margin of error a lot and data before
> > that not at all. While it is true that if I'm wrong about a page that
> > was older but near the cutoff, that might be better than being wrong
> > about a very recent page, it is still wrong.
> >
>
> Yeah. But isn't that a bit backwards? The decision can be wrong because
> the estimate was too off, or maybe it was spot on and we still made a
> wrong decision. That's what happens with heuristics.
>
> I think a natural expectation is that the quality of the answers
> correlates with the accuracy of the data / estimates. With accurate
> results (say we keep a perfect history, with no loss of precision for
> older data) we should be doing the right decision most of the time. If
> not, it's a lost cause, IMHO. And with lower accuracy it'd get worse,
> otherwise why would we need the detailed data.
>
> But now that I think about it, I'm not entirely sure I understand what
> point you are making :-(

My only point was that we really don't need to produce *any* estimate
for a value from before the cutoff. We just need to estimate if it is
before or after. So, while we need to keep enough data to get that
answer right, we don't need very old data at all. Which is different
from how I was thinking about the LSNTimeStream feature before.

On Fri, Aug 9, 2024 at 1:24 PM Tomas Vondra <tomas@vondra.me> wrote:
>
> On 8/9/24 15:09, Melanie Plageman wrote:
> >
> > Okay, so as I think about evaluating a few new algorithms, I realize
> > that we do need some sort of criteria. I started listing out what I
> > feel is "reasonable" accuracy and plotting it to see if the
> > relationship is linear/exponential/etc. I think it would help to get
> > input on what would be "reasonable" accuracy.
> >
> > I thought that the following might be acceptable:
> > The first column is how old the value I am looking for actually is;
> > the second column is how far off (+/-) I am willing to have the
> > algorithm tell me it is:
> >
> > 1 second, 1 minute
> > 1 minute, 10 minutes
> > 1 hour, 1 hour
> > 1 day, 6 hours
> > 1 week, 12 hours
> > 1 month, 1 day
> > 6 months, 1 week
> >
>
> I think the question is whether we want to make this useful for other
> places and/or people, or if it's fine to tailor this specifically for
> the freezing patch.
>
> If the latter (specific to the freezing patch), I don't see why it would
> matter what we think - either it works for the patch, or not.

I think the best way forward is to make it useful for the freezing
patch and then, if it seems like exposing it makes sense, we can do
that and properly document what to expect.

> But if we want to make it more widely useful, I find it a bit strange
> the relative accuracy *increases* for older data. I mean, we start with
> relative error 6000% (60s/1s) and then we get to relative error ~4%
> (1w/24w). Isn't that a bit against the earlier discussion on needing
> better accuracy for recent data? Sure, the absolute accuracy is still
> better (1m <<< 1w). And if this is good enough for the freezing ...

I was just writing out what seemed intuitively like something I would
be willing to tolerate as a user. But you are right that it doesn't
make sense for the accuracy to go up for older data. I just think being
months off for any estimate seems bad no matter how old the data is --
which is probably why 1 week of accuracy for data 6 months old seemed
like a reasonable tolerance to me. But perhaps that isn't useful.

Also, I realized that my "calculate the error area" method strongly
favors keeping older data. Once you drop a point, the area computed for
the two remaining points on either side of it gets larger, because the
distance between them is now greater than it was with the dropped point
in between. So there is a strong bias toward dropping newer data. That
seems bad.
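
For reference, by "error area" I mean something like the area of the
triangle formed by a point and its two neighbors -- the area between
the two original segments and the single segment that would replace
them if the point were dropped. Here is a standalone sketch of that
idea (illustrative only, not the patch's exact calculation):

#include <math.h>
#include <stdio.h>

typedef struct
{
    double      time;           /* x axis */
    double      lsn;            /* y axis */
} LSNTimePoint;

static double
error_area(LSNTimePoint prev, LSNTimePoint cur, LSNTimePoint next)
{
    /* standard triangle-area formula from the three corner points */
    return fabs((prev.time * (cur.lsn - next.lsn) +
                 cur.time * (next.lsn - prev.lsn) +
                 next.time * (prev.lsn - cur.lsn)) / 2.0);
}

int
main(void)
{
    LSNTimePoint points[] = {
        {0, 0}, {10, 100}, {20, 400}, {30, 450}, {40, 1000}
    };
    int         npoints = sizeof(points) / sizeof(points[0]);

    /* the interior point with the smallest area is the cheapest to drop */
    for (int i = 1; i < npoints - 1; i++)
        printf("dropping point %d would introduce error area %.1f\n",
               i, error_area(points[i - 1], points[i], points[i + 1]));
    return 0;
}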

> > Column 1 over column 2 produces a line like in the attached pic. I'd
> > be interested in others' opinions of error tolerance.
>
> I don't understand what the axes on the chart are :-( Does "A over B"
> mean A is x-axis or y-axis?

Yeah, that was confusing. A over B is on the y axis, so you can see the
ratios, and x is just the position in the array (so it is meaningless).
Attached is a version with A on the x axis and B on the y axis.

- Melanie

Attachment

Re: Add LSN <-> time conversion functionality

From
Robert Haas
Date:
On Fri, Aug 9, 2024 at 11:48 AM Melanie Plageman
<melanieplageman@gmail.com> wrote:
> In the adaptive freezing code, I use the time stream to answer a yes
> or no question. I translate a time in the past (now -
> target_freeze_duration) to an LSN so that I can determine if a page
> that is being modified for the first time after having been frozen has
> been modified sooner than target_freeze_duration (a GUC value). If it
> is, that page was unfrozen too soon. So, my use case is to produce a
> yes or no answer. It doesn't matter very much how accurate I am if I
> am wrong. I count the page as having been unfrozen too soon or I
> don't. So, it seems I care about the accuracy of data from now until
> now  - target_freeze_duration + margin of error a lot and data before
> that not at all. While it is true that if I'm wrong about a page that
> was older but near the cutoff, that might be better than being wrong
> about a very recent page, it is still wrong.

I don't really think this is the right way to think about it.

First, you'd really like target_freeze_duration to be something that
can be changed at runtime, but the data structure that you use for the
LSN-time mapping has to be sized at startup time and can't change
thereafter. So I think you should try to design the LSN-time mapping
structure so that it is fixed size -- i.e. independent of the value of
target_freeze_duration -- but capable of producing sufficiently
correct answers for all reasonable values of target_freeze_duration.
Then the user can change the value to whatever they like without a
restart, and still get reasonable behavior. Meanwhile, you don't have
to deal with a variable-size data structure. Woohoo!

Second, I guess I'm a bit confused about the statement that "It
doesn't matter very much how accurate I am if I am wrong." What does
that really mean? We're going to look at the LSN of a page that we're
thinking about freezing and use that to estimate the time since the
page was last modified and use that to guess whether the page is
likely to be modified again soon and then use that to decide whether
to freeze. Even if we always estimated the time since last
modification perfectly, we could still be wrong about what that means
for the future. And we won't estimate the last modification time
perfectly in all cases, because even if we make perfect decisions
about which data points to throw away, we're still going to use linear
interpolation in between those points, and that can be wrong. And I
think it's pretty much impossible to make flawless decisions about
which points to throw away, too.

But the point is that we just need to be close enough. If
target_freeze_duration=10m and our page age estimates are off by an
average of 10s, we will still make the correct decision about whether
to freeze most of the time, but if they are off by an average of 1m,
we'll be wrong more often, and if they're off by an average of 10m,
we'll be wrong way more often. When target_freeze_duration=2h, it's
not nearly so bad to be off by 10m. The probability that a page will
be modified again soon when it hasn't been modified in the last 1h54m
is probably not that different from the probability when it hasn't
been modified in 2h4m, but the probability of a page being modified
again soon when it hasn't been modified in the last 4m could well be
quite different from when it hasn't been modified in the last 14m. So
it's completely reasonable, IMHO, to set things up so that you have
higher accuracy for newer LSNs.

I feel like you're making this a lot harder than it needs to be.
Actually, I think this is a hard problem in terms of where to actually
store the data -- as Tomas said, pgstat doesn't seem quite right, and
it's not clear to me what is actually right. But in terms of actually
what to do with the data structure, some kind of exponential thinning
of the data seems like the obvious thing to do. Tomas suggested a
version of that and I suggested a version of that and you could pick
either one or do something of your own, but I don't really feel like
we need or want an original algorithm here. It seems better to just do
stuff we know works, and whose characteristics we can easily predict.
The only area where I feel like we might want some algorithmic
innovation is in terms of eliding redundant measurements when things
aren't really changing.

But even that seems pretty optional. If you don't do that, and the
system sits there idle for a long time, you will have a needlessly
inaccurate idea of how old the pages are compared to what you could
have had. But also, they will all still look old so you'll still
freeze them so you win. The end.
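
Just to be concrete about the kind of exponential thinning I mentioned
above, here is a standalone sketch (made-up names, not anyone's actual
patch): when the fixed-size array fills up, drop every other point from
the older half, so old data ends up spaced exponentially further apart
while recent data stays dense.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NPOINTS 16              /* tiny, just for the example */

typedef struct
{
    int64_t     time[NPOINTS];  /* oldest first */
    uint64_t    lsn[NPOINTS];
    int         count;
} LSNTimeArray;

static void
array_insert(LSNTimeArray *a, int64_t time, uint64_t lsn)
{
    if (a->count == NPOINTS)
    {
        int         keep = 0;

        /* thin the older half: keep every other entry */
        for (int i = 0; i < NPOINTS / 2; i += 2)
        {
            a->time[keep] = a->time[i];
            a->lsn[keep] = a->lsn[i];
            keep++;
        }
        /* keep the newer half as-is */
        for (int i = NPOINTS / 2; i < NPOINTS; i++)
        {
            a->time[keep] = a->time[i];
            a->lsn[keep] = a->lsn[i];
            keep++;
        }
        a->count = keep;
    }
    a->time[a->count] = time;
    a->lsn[a->count] = lsn;
    a->count++;
}

int
main(void)
{
    LSNTimeArray a;

    memset(&a, 0, sizeof(a));

    /* one (time, lsn) sample per "second" */
    for (int64_t t = 0; t < 200; t++)
        array_insert(&a, t, (uint64_t) t * 100);

    /* old entries come out sparse, recent ones dense */
    for (int i = 0; i < a.count; i++)
        printf("time=%lld lsn=%llu\n",
               (long long) a.time[i], (unsigned long long) a.lsn[i]);
    return 0;
}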

--
Robert Haas
EDB: http://www.enterprisedb.com