Thread: Fix premature xmin advancement during fast forward decoding

Fix premature xmin advancement during fast forward decoding

From
"Zhijie Hou (Fujitsu)"
Date:
Hi,

When analyzing some issues in another thread[1], I found a bug that fast forward
decoding could lead to premature advancement of catalog_xmin, resulting in
required catalog data being removed by vacuum.

The issue arises because we do not build a base snapshot when decoding changes
during fast forward mode, preventing reference to the minimum transaction ID
that remains visible in the snapshot when determining the candidate for
catalog_xmin. As a result, catalog_xmin was directly advanced to the oldest
running transaction ID found in the latest running_xacts record.

At the code level, SnapBuildProcessChange -> ReorderBufferSetBaseSnapshot cannot
be reached during fast forward decoding, resulting in
rb->txns_by_base_snapshot_lsn being NULL. When advancing catalog_xmin, the
system attempts to refer to rb->txns_by_base_snapshot_lsn via
ReorderBufferGetOldestXmin(). However, since rb->txns_by_base_snapshot_lsn is
NULL, it defaults to directly using running->oldestRunningXid as the candidate
for catalog_xmin. For reference, see the details in
SnapBuildProcessRunningXacts.
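
To make this concrete, the relevant candidate-xmin logic in
SnapBuildProcessRunningXacts() looks roughly like the sketch below
(paraphrased, not an exact copy of the source):

/*
 * Paraphrased sketch of the candidate-xmin selection in
 * SnapBuildProcessRunningXacts(), showing where fast forward decoding goes
 * wrong before the fix.
 */
static void
SnapBuildProcessRunningXacts_sketch(SnapBuild *builder, XLogRecPtr lsn,
                                    xl_running_xacts *running)
{
    TransactionId xmin;

    /*
     * ReorderBufferGetOldestXmin() consults rb->txns_by_base_snapshot_lsn.
     * In fast forward mode SnapBuildProcessChange() is never reached, no
     * base snapshot is registered, the list stays empty, and this returns
     * InvalidTransactionId.
     */
    xmin = ReorderBufferGetOldestXmin(builder->reorder);

    /*
     * So we always fall back to the oldest running XID from the latest
     * running_xacts record, which can be newer than XIDs whose catalog
     * tuples are still required, advancing catalog_xmin prematurely.
     */
    if (xmin == InvalidTransactionId)
        xmin = running->oldestRunningXid;

    LogicalIncreaseXminForSlot(lsn, xmin);
}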

See the attached test (0002), which demonstrates that catalog data that is
still required would be removed after premature catalog_xmin advancement during
fast forward decoding.

To fix this, I think we can allow the base snapshot to be built during fast
forward decoding, as implemented in the patch 0001 (We already built base
snapshot in fast-forward mode for logical message in logicalmsg_decode()).
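
Roughly speaking, the idea is to stop bailing out of the data-change decode
routines (heap_decode() and friends) before SnapBuildProcessChange() when
ctx->fast_forward is set, and to skip only the processing of the change
itself. A simplified sketch of the shape of the change (not the exact 0001
hunk) is:

/*
 * Simplified sketch only: build the base snapshot even when fast-forwarding
 * and skip just the decoding of the data change itself.
 */
static void
heap_decode_sketch(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
{
    SnapBuild  *builder = ctx->snapshot_builder;
    TransactionId xid = XLogRecGetXid(buf->record);

    ReorderBufferProcessXid(ctx->reorder, xid, buf->origptr);

    /* Nothing to do before we have at least a full snapshot. */
    if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT)
        return;

    /*
     * Do not return early on ctx->fast_forward here; let
     * SnapBuildProcessChange() register the base snapshot so that
     * rb->txns_by_base_snapshot_lsn reflects this transaction.
     */
    if (!SnapBuildProcessChange(builder, xid, buf->origptr))
        return;

    /* In fast forward mode we do not decode or queue the change itself. */
    if (ctx->fast_forward)
        return;

    /* ... dispatch to DecodeInsert()/DecodeUpdate()/DecodeDelete() ... */
}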

I also explored the possibility of further optimizing the fix to reduce the
overhead associated with building a snapshot in fast-forward mode. E.g., to
maintain a single base_snapshot_xmin rather than generating a full snapshot for
each transaction, and use this base_snapshot_xmin when determining the
candidate catalog_xmin. However, I am not feeling confident about maintaining
some fast-forward dedicated fields in common structures and
perhaps employing different handling for catalog_xmin.

Moreover, I conducted a basic test[2] to test the patch's impact, noting that
advancing the slot incurs roughly a 4% increase in processing time after
applying the patch, which appears to be acceptable. Additionally, the cost
associated with building the snapshot via SnapBuildBuildSnapshot() did not show
up in the profile. Therefore, I think it's reasonable to refrain from
further optimization at this stage.

[1]
https://www.postgresql.org/message-id/OS3PR01MB5718ED3F0123737C7380BBAD94BB2%40OS3PR01MB5718.jpnprd01.prod.outlook.com

[2]
Create a table test(a int);
Create a slot A.
pgbench postgres -T 60 -j 100 -c 100 -f simple-insert

    simple-insert:
    BEGIN;
    INSERT INTO test VALUES(1);
    COMMIT;

Use pg_replication_slot_advance to advance slot A to the latest position
and measure the time taken.

HEAD: Time: 4189.257 ms
PATCH: Time: 4371.358 ms

Best Regards,
Hou zj


Attachment

Re: Fix premature xmin advancement during fast forward decoding

From
shveta malik
Date:
On Tue, Apr 22, 2025 at 12:36 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:
>
> Hi,
>
> To fix this, I think we can allow the base snapshot to be built during fast
> forward decoding, as implemented in the patch 0001 (We already built base
> snapshot in fast-forward mode for logical message in logicalmsg_decode()).

The idea and code look okay to me, and the performance impact is also
not that huge.

IIUC, fast_forward decoding mode is used only in two cases: 1)
pg_replication_slot_advance and 2) the upgrade flow, to check whether there
are any pending WAL changes that are not yet replicated. See
'binary_upgrade_logical_slot_has_caught_up'-->'LogicalReplicationSlotHasPendingWal'.
It seems like this change will not have any negative impact in the
upgrade flow as well (in terms of performance and anything else).
Thoughts?

>
> Moreover, I conducted a basic test[2] to test the patch's impact, noting that
> advancing the slot incurs roughly a 4% increase in processing time after
> applying the patch, which appears to be acceptable. Additionally, the cost
> associated with building the snapshot via SnapBuildBuildSnapshot() did not show
> up in the profile. Therefore, I think it's reasonable to refrain from
> further optimization at this stage.

I agree on this.

thanks
Shveta



RE: Fix premature xmin advancement during fast forward decoding

From
"Zhijie Hou (Fujitsu)"
Date:
On Wed, Apr 23, 2025 at 2:31 PM shveta malik wrote:
> 
> On Tue, Apr 22, 2025 at 12:36 PM Zhijie Hou (Fujitsu)
> <houzj.fnst@fujitsu.com> wrote:
> >
> > Hi,
> >
> > To fix this, I think we can allow the base snapshot to be built during fast
> > forward decoding, as implemented in the patch 0001 (We already built base
> > snapshot in fast-forward mode for logical message in logicalmsg_decode()).
> 
> The idea and code looks okay to me and the performance impact is also
> not that huge.

Thanks for reviewing!

> 
> IIUC, fast_forward decoding mode is used only in two cases. 1)
> pg_replication_slot_advance and 2) in upgrade flow to check if there
> are any pending WAL changes which are not yet replicated. See
> 'binary_upgrade_logical_slot_has_caught_up'-->'LogicalReplicationSlotHasPendingWal'.
> It seems like this change will not have any negative impact in the
> upgrade flow as well (in terms of performance and anything else).
> Thoughts?

I also think so. The upgrade uses fast forward decoding to check if there are
any pending WALs not yet sent and reports an ERROR if so. This indicates that
we do not expect to decode real changes for the upgrade slots, so the
performance impact would be minimal in normal cases.

Best Regards,
Hou zj

Re: Fix premature xmin advancement during fast forward decoding

From
Amit Kapila
Date:
On Tue, Apr 22, 2025 at 12:36 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:
>
> When analyzing some issues in another thread[1], I found a bug that fast forward
> decoding could lead to premature advancement of catalog_xmin, resulting in
> required catalog data being removed by vacuum.
>
> The issue arises because we do not build a base snapshot when decoding changes
> during fast forward mode, preventing reference to the minimum transaction ID
> that remains visible in the snapshot when determining the candidate for
> catalog_xmin. As a result, catalog_xmin was directly advanced to the oldest
> running transaction ID found in the latest running_xacts record.
>
> In code-level, SnapBuildProcessChange -> ReorderBufferSetBaseSnapshot could not
> be reached during fast forward decoding, resulting in
> rb->txns_by_base_snapshot_lsn being NULL. When advancing catalog_xmin, the
> system attempts to refer to rb->txns_by_base_snapshot_lsn via
> ReorderBufferGetOldestXmin(). However, since rb->txns_by_base_snapshot_lsn is
> NULL, it defaults to directly using running->oldestRunningXid as the candidate
> for catalog_xmin. For reference, see the details in
> SnapBuildProcessRunningXacts.
>
> See the attachment for a test(0002) to prove that the catalog data that are
> still required would be removed after premature catalog_xmin advancement during
> fast forward decoding.
>
> To fix this, I think we can allow the base snapshot to be built during fast
> forward decoding, as implemented in the patch 0001 (We already built base
> snapshot in fast-forward mode for logical message in logicalmsg_decode()).
>

The same bug was fixed for non-fast_forward mode in commit f49a80c4.
See the following code in that commit:

-       LogicalIncreaseXminForSlot(lsn, running->oldestRunningXid);
+       xmin = ReorderBufferGetOldestXmin(builder->reorder);
+       if (xmin == InvalidTransactionId)
+               xmin = running->oldestRunningXid;
....
+       LogicalIncreaseXminForSlot(lsn, xmin);

Is my understanding correct? If so, then I think it is an oversight in
commit f49a80c4 not to consider fast_forward mode. Please refer to that
commit in the commit message.

The code changes look correct to me. I have some suggestions related
to comments in the code. See attached. Please prepare patches for back
branches as well.

--
With Regards,
Amit Kapila.

Attachment

Re: Fix premature xmin advancement during fast forward decoding

From
Masahiko Sawada
Date:
Hi,

On Tue, Apr 22, 2025 at 12:06 AM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:
>
> Hi,
>
> When analyzing some issues in another thread[1], I found a bug that fast forward
> decoding could lead to premature advancement of catalog_xmin, resulting in
> required catalog data being removed by vacuum.
>
> The issue arises because we do not build a base snapshot when decoding changes
> during fast forward mode, preventing reference to the minimum transaction ID
> that remains visible in the snapshot when determining the candidate for
> catalog_xmin. As a result, catalog_xmin was directly advanced to the oldest
> running transaction ID found in the latest running_xacts record.
>
> In code-level, SnapBuildProcessChange -> ReorderBufferSetBaseSnapshot could not
> be reached during fast forward decoding, resulting in
> rb->txns_by_base_snapshot_lsn being NULL. When advancing catalog_xmin, the
> system attempts to refer to rb->txns_by_base_snapshot_lsn via
> ReorderBufferGetOldestXmin(). However, since rb->txns_by_base_snapshot_lsn is
> NULL, it defaults to directly using running->oldestRunningXid as the candidate
> for catalog_xmin. For reference, see the details in
> SnapBuildProcessRunningXacts.

I agree with your analysis.

> To fix this, I think we can allow the base snapshot to be built during fast
> forward decoding, as implemented in the patch 0001 (We already built base
> snapshot in fast-forward mode for logical message in logicalmsg_decode()).
>
> I also explored the possibility of further optimizing the fix to reduce the
> overhead associated with building a snapshot in fast-forward mode. E.g., to
> maintain a single base_snapshot_xmin rather than generating a full snapshot for
> each transaction, and use this base_snapshot_xmin when determining the
> candidate catalog_xmin. However, I am not feeling confident about maintaining
> some fast-forward dedicated fields in common structures and
> perhaps employing different handling for catalog_xmin.
>
> Moreover, I conducted a basic test[2] to test the patch's impact, noting that
> advancing the slot incurs roughly a 4% increase in processing time after
> applying the patch, which appears to be acceptable.  Additionally, the cost
> associated with building the snapshot via SnapBuildBuildSnapshot() did not show
> up in the profile.

Where did the regression come from? I think that even with your patch
SnapBuildBuildSnapshot() would be called only a few times in the
workload. With the patch, I think that we need to create
ReorderBufferTXN entries for each transaction even during fast_forward
mode, which could lead to overheads. I think that 4% degradation is
something that we want to avoid.

Regards

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com



RE: Fix premature xmin advancement during fast forward decoding

From
"Zhijie Hou (Fujitsu)"
Date:
On Fri, Apr 25, 2025 at 5:44 AM Masahiko Sawada wrote:

(Resending this email after compressing the attachment)

> On Tue, Apr 22, 2025 at 12:06 AM Zhijie Hou (Fujitsu)
> <houzj.fnst@fujitsu.com> wrote:
> >
> > Hi,
> >
> > When analyzing some issues in another thread[1], I found a bug that
> > fast forward decoding could lead to premature advancement of
> > catalog_xmin, resulting in required catalog data being removed by vacuum.
> >
> > The issue arises because we do not build a base snapshot when decoding
> > changes during fast forward mode, preventing reference to the minimum
> > transaction ID that remains visible in the snapshot when determining
> > the candidate for catalog_xmin. As a result, catalog_xmin was directly
> > advanced to the oldest running transaction ID found in the latest
> running_xacts record.
> >
> > In code-level, SnapBuildProcessChange -> ReorderBufferSetBaseSnapshot
> > could not be reached during fast forward decoding, resulting in
> > rb->txns_by_base_snapshot_lsn being NULL. When advancing catalog_xmin,
> > the system attempts to refer to rb->txns_by_base_snapshot_lsn via
> > ReorderBufferGetOldestXmin(). However, since
> > rb->txns_by_base_snapshot_lsn is NULL, it defaults to directly using
> > running->oldestRunningXid as the candidate for catalog_xmin. For
> > reference, see the details in SnapBuildProcessRunningXacts.
> 
> I agree with your analysis.
> 
> > To fix this, I think we can allow the base snapshot to be built during
> > fast forward decoding, as implemented in the patch 0001 (We already
> > built base snapshot in fast-forward mode for logical message in
> logicalmsg_decode()).
> >
> > I also explored the possibility of further optimizing the fix to
> > reduce the overhead associated with building a snapshot in
> > fast-forward mode. E.g., to maintain a single base_snapshot_xmin
> > rather than generating a full snapshot for each transaction, and use
> > this base_snapshot_xmin when determining the candidate catalog_xmin.
> > However, I am not feeling confident about maintaining some
> > fast-forward dedicated fields in common structures and perhaps employing
> different handling for catalog_xmin.
> >
> > Moreover, I conducted a basic test[2] to test the patch's impact,
> > noting that advancing the slot incurs roughly a 4% increase in
> > processing time after applying the patch, which appears to be
> > acceptable.  Additionally, the cost associated with building the
> > snapshot via SnapBuildBuildSnapshot() did not show up in the profile.
> 
> Where did the regression come from? I think that even with your patch
> SnapBuildBuildSnapshot() would be called only a few times in the workload.

AFAICS, the primary cost arises from the additional function
invocations/executions. Functions such as SnapBuildProcessChange,
ReorderBufferSetBaseSnapshot, and ReorderBufferXidHasBaseSnapshot are invoked
more frequently after the patch. Although these functions aren't inherently
expensive, the scenario I tested involved numerous transactions, each
containing just a single insert. This resulted in a high frequency of calls
to these functions.
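
For reference, the per-transaction work in question has roughly the following
shape (paraphrased from SnapBuildProcessChange(), with details elided):

/*
 * Paraphrased shape of SnapBuildProcessChange(), to show why a workload of
 * single-insert transactions pays the base-snapshot setup once per
 * transaction.
 */
static bool
SnapBuildProcessChange_sketch(SnapBuild *builder, TransactionId xid,
                              XLogRecPtr lsn)
{
    /* ... checks for skipping the transaction elided ... */

    /*
     * Only the first change of a transaction registers a base snapshot.
     * With one INSERT per transaction, as in the pgbench script above,
     * every decoded change is "the first one", which is why these calls
     * show up in the profile.
     */
    if (!ReorderBufferXidHasBaseSnapshot(builder->reorder, xid))
    {
        if (builder->snapshot == NULL)
        {
            builder->snapshot = SnapBuildBuildSnapshot(builder);
            SnapBuildSnapIncRefcount(builder->snapshot);
        }

        SnapBuildSnapIncRefcount(builder->snapshot);
        ReorderBufferSetBaseSnapshot(builder->reorder, xid, lsn,
                                     builder->snapshot);
    }

    return true;
}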

Attaching the profile for both HEAD and PATCHED for reference.

> With the patch, I think that we need to create ReorderBufferTXN entries for
> each transaction even during fast_forward mode, which could lead to
> overheads.

I think ReorderBufferTXN entries were already created in fast_forward mode even
without the patch. See heap_decode -> ReorderBufferProcessXid.

> I think that 4% degradation is something that we want to avoid.

As mentioned above, these costs arise from essential function executions
required to maintain txns_by_base_snapshot_lsn. So I think the cost is
reasonable to ensure correctness. Thoughts?

Best Regards,
Hou zj

Attachment

Re: Fix premature xmin advancement during fast forward decoding

From
Amit Kapila
Date:
On Fri, Apr 25, 2025 at 8:14 AM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:
>
> On Fri, Apr 25, 2025 at 5:44 AM Masahiko Sawada wrote:
> > On Tue, Apr 22, 2025 at 12:06 AM Zhijie Hou (Fujitsu)
> > <houzj.fnst@fujitsu.com> wrote:
> > >
> > > Hi,
> > >
> > > When analyzing some issues in another thread[1], I found a bug that
> > > fast forward decoding could lead to premature advancement of
> > > catalog_xmin, resulting in required catalog data being removed by vacuum.
> > >
> > > The issue arises because we do not build a base snapshot when decoding
> > > changes during fast forward mode, preventing reference to the minimum
> > > transaction ID that remains visible in the snapshot when determining
> > > the candidate for catalog_xmin. As a result, catalog_xmin was directly
> > > advanced to the oldest running transaction ID found in the latest
> > running_xacts record.
> > >
> > > In code-level, SnapBuildProcessChange -> ReorderBufferSetBaseSnapshot
> > > could not be reached during fast forward decoding, resulting in
> > > rb->txns_by_base_snapshot_lsn being NULL. When advancing catalog_xmin,
> > > the system attempts to refer to rb->txns_by_base_snapshot_lsn via
> > > ReorderBufferGetOldestXmin(). However, since
> > > rb->txns_by_base_snapshot_lsn is NULL, it defaults to directly using
> > > running->oldestRunningXid as the candidate for catalog_xmin. For
> > > reference, see the details in SnapBuildProcessRunningXacts.
> >
> > I agree with your analysis.
> >
> > > To fix this, I think we can allow the base snapshot to be built during
> > > fast forward decoding, as implemented in the patch 0001 (We already
> > > built base snapshot in fast-forward mode for logical message in
> > logicalmsg_decode()).
> > >
> > > I also explored the possibility of further optimizing the fix to
> > > reduce the overhead associated with building a snapshot in
> > > fast-forward mode. E.g., to maintain a single base_snapshot_xmin
> > > rather than generating a full snapshot for each transaction, and use
> > > this base_snapshot_xmin when determining the candidate catalog_xmin.
> > > However, I am not feeling confident about maintaining some
> > > fast-forward dedicated fields in common structures and perhaps employing
> > different handling for catalog_xmin.
> > >
> > > Moreover, I conducted a basic test[2] to test the patch's impact,
> > > noting that advancing the slot incurs roughly a 4% increase in
> > > processing time after applying the patch, which appears to be
> > > acceptable.  Additionally, the cost associated with building the
> > > snapshot via SnapBuildBuildSnapshot() did not show up in the profile.
> >
> > Where did the regression come from? I think that even with your patch
> > SnapBuildBuildSnapshot() would be called only a few times in the workload.
>
> AFAICS, the primary cost arises from the additional function
> invocations/executions. Functions such as SnapBuildProcessChange,
> ReorderBufferSetBaseSnapshot, and ReorderBufferXidHasBaseSnapshot are invoked
> more frequently after the patch. Although these functions aren't inherently
> expensive, the scenario I tested involved numerous transactions starting with
> just a single insert each. This resulted in a high frequency of calls to these
> functions.
>

Yeah, I see below from the patched profile.

|--4.65%--SnapBuildProcessChange
|          |
|          |--2.08%--ReorderBufferSetBaseSnapshot
|          |          |
|          |           --1.32%--__mcount_internal
|          |
|          |--1.29%--__mcount_internal
|          |
|           --0.72%--ReorderBufferXidHasBaseSnapshot

As the number of operations per transaction increases, this cost
should reduce further as we will set base_snapshot only once per
transaction. Now, we can try to micro-optimize this by passing
ReorderBufferTXN, as that is computed before we invoke
SnapBuildProcessChange, but I am not sure how much it will change. However,
the bigger question is, is it really worth optimizing this code path?

If we really want to optimize this code path for the test Hou-San did,
I see that SnapBuildCommitTxn() has a bigger overhead, so we should
investigate whether we really need it for the fast-forward mode, where
we won't process changes. I am of the opinion that we need to pay this
additional cost of setting base_snapshot for the correctness of the
fast-forward mode.

--
With Regards,
Amit Kapila.



Re: Fix premature xmin advancement during fast forward decoding

From
Masahiko Sawada
Date:
On Thu, Apr 24, 2025 at 9:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Fri, Apr 25, 2025 at 8:14 AM Zhijie Hou (Fujitsu)
> <houzj.fnst@fujitsu.com> wrote:
> >
> > On Fri, Apr 25, 2025 at 5:44 AM Masahiko Sawada wrote:
> > > On Tue, Apr 22, 2025 at 12:06 AM Zhijie Hou (Fujitsu)
> > > <houzj.fnst@fujitsu.com> wrote:
> > > >
> > > > Hi,
> > > >
> > > > When analyzing some issues in another thread[1], I found a bug that
> > > > fast forward decoding could lead to premature advancement of
> > > > catalog_xmin, resulting in required catalog data being removed by vacuum.
> > > >
> > > > The issue arises because we do not build a base snapshot when decoding
> > > > changes during fast forward mode, preventing reference to the minimum
> > > > transaction ID that remains visible in the snapshot when determining
> > > > the candidate for catalog_xmin. As a result, catalog_xmin was directly
> > > > advanced to the oldest running transaction ID found in the latest
> > > running_xacts record.
> > > >
> > > > In code-level, SnapBuildProcessChange -> ReorderBufferSetBaseSnapshot
> > > > could not be reached during fast forward decoding, resulting in
> > > > rb->txns_by_base_snapshot_lsn being NULL. When advancing catalog_xmin,
> > > > the system attempts to refer to rb->txns_by_base_snapshot_lsn via
> > > > ReorderBufferGetOldestXmin(). However, since
> > > > rb->txns_by_base_snapshot_lsn is NULL, it defaults to directly using
> > > > running->oldestRunningXid as the candidate for catalog_xmin. For
> > > > reference, see the details in SnapBuildProcessRunningXacts.
> > >
> > > I agree with your analysis.
> > >
> > > > To fix this, I think we can allow the base snapshot to be built during
> > > > fast forward decoding, as implemented in the patch 0001 (We already
> > > > built base snapshot in fast-forward mode for logical message in
> > > logicalmsg_decode()).
> > > >
> > > > I also explored the possibility of further optimizing the fix to
> > > > reduce the overhead associated with building a snapshot in
> > > > fast-forward mode. E.g., to maintain a single base_snapshot_xmin
> > > > rather than generating a full snapshot for each transaction, and use
> > > > this base_snapshot_xmin when determining the candidate catalog_xmin.
> > > > However, I am not feeling confident about maintaining some
> > > > fast-forward dedicated fields in common structures and perhaps employing
> > > different handling for catalog_xmin.
> > > >
> > > > Moreover, I conducted a basic test[2] to test the patch's impact,
> > > > noting that advancing the slot incurs roughly a 4% increase in
> > > > processing time after applying the patch, which appears to be
> > > > acceptable.  Additionally, the cost associated with building the
> > > > snapshot via SnapBuildBuildSnapshot() did not show up in the profile.
> > >
> > > Where did the regression come from? I think that even with your patch
> > > SnapBuildBuildSnapshot() would be called only a few times in the workload.
> >
> > AFAICS, the primary cost arises from the additional function
> > invocations/executions. Functions such as SnapBuildProcessChange,
> > ReorderBufferSetBaseSnapshot, and ReorderBufferXidHasBaseSnapshot are invoked
> > more frequently after the patch. Although these functions aren't inherently
> > expensive, the scenario I tested involved numerous transactions starting with
> > just a single insert each. This resulted in a high frequency of calls to these
> > functions.

Ah, you're right. I looked at the wrong profile.

> >
>
> Yeah, I see below from the patched profile.
>
> |--4.65%--SnapBuildProcessChange
> |          |
> |          |--2.08%--ReorderBufferSetBaseSnapshot
> |          |          |
> |          |           --1.32%--__mcount_internal
> |          |
> |          |--1.29%--__mcount_internal
> |          |
> |           --0.72%--ReorderBufferXidHasBaseSnapshot
>

It looks like you've run the benchmark test with assertions enabled,
is that right? I can see AssertTXNLsnOrder() in the profiles.

> As the number of operations per transaction increases, this cost
> should reduce further as we will set base_snapshot only once per
> transaction. Now, we can try to micro-optimize this by passing
> ReorderBufferTXN, as that is computed before we invoke
> SnapBuildProcessChange, but not sure how much it will change. However,
> the bigger question is, is it really worth optimizing this code path?
>
> If we really want to optimize this code path for the test Hou-San did,
> I see that SnapBuildCommitTxn() has a bigger overhead, so we should
> investigate whether we really need it for the fast-forward mode, where
> we won't process changes. I am of the opinion that we need to pay this
> additional cost of setting base_snapshot for the correctness of the
> fast-forward mode.

I've done the same test in my environment and saw about 7.2% degradation:

HEAD: 773.538 ms
PATCHED: 833.906 ms

It depends on the workload and environment, of course, but I'm not sure
this is a reasonable cost we have to pay.

What I'm concerned about is the back branches. With this approach, all
back branches will have such degradation, and it doesn't make sense to
me to optimize the SnapBuildCommitTxn() code in back branches.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com



Re: Fix premature xmin advancement during fast forward decoding

From
Amit Kapila
Date:
On Fri, Apr 25, 2025 at 10:46 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
> What I'm concerned about is the back branches. With this approach all
> back branches will have such degradations and it doesn't make sense to
> me to optimize SnapBuildCommitTxn() codes in back branches.
>

One possibility could be that instead of maintaining an entire
snapshot in fast_forward mode, we can maintain snapshot's xmin in each
ReorderBufferTXN. But then also, how would we get the minimum
txns_by_base_snapshot_lsn as we are getting now in
ReorderBufferGetOldestXmin? I think we need to traverse the entire
list of txns to get it in fast_forward mode but that may not show up
because it will not be done for each transaction. We can try such a
thing, but it won't be clean to have fast_forward specific code and
also it would be better to add such things only for HEAD. Can you
think of any better ideas?
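
Just to illustrate what that would look like (purely hypothetical; the
ff_base_snapshot_xmin field and the function below are invented for the
sketch and do not exist in the tree):

/*
 * Hypothetical sketch of the alternative: keep only a per-transaction xmin
 * in fast_forward mode and scan every top-level transaction whenever a
 * catalog_xmin candidate is needed.
 */
static TransactionId
GetOldestFastForwardXmin(ReorderBuffer *rb)
{
    TransactionId oldest = InvalidTransactionId;
    dlist_iter  iter;

    /*
     * Unlike rb->txns_by_base_snapshot_lsn, which lets
     * ReorderBufferGetOldestXmin() look only at the head of an LSN-ordered
     * list, this walks all top-level transactions.
     */
    dlist_foreach(iter, &rb->toplevel_by_lsn)
    {
        ReorderBufferTXN *txn = dlist_container(ReorderBufferTXN, node,
                                                iter.cur);

        if (!TransactionIdIsValid(txn->ff_base_snapshot_xmin))
            continue;

        if (!TransactionIdIsValid(oldest) ||
            TransactionIdPrecedes(txn->ff_base_snapshot_xmin, oldest))
            oldest = txn->ff_base_snapshot_xmin;
    }

    return oldest;
}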

--
With Regards,
Amit Kapila.



Re: Fix premature xmin advancement during fast forward decoding

From
Masahiko Sawada
Date:
On Fri, Apr 25, 2025 at 4:42 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Fri, Apr 25, 2025 at 10:46 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > What I'm concerned about is the back branches. With this approach all
> > back branches will have such degradations and it doesn't make sense to
> > me to optimize SnapBuildCommitTxn() codes in back branches.
> >
>
> One possibility could be that instead of maintaining an entire
> snapshot in fast_forward mode, we can maintain snapshot's xmin in each
> ReorderBufferTXN. But then also, how would we get the minimum
> txns_by_base_snapshot_lsn as we are getting now in
> ReorderBufferGetOldestXmin? I think we need to traverse the entire
> list of txns to get it in fast_forward mode but that may not show up
> because it will not be done for each transaction. We can try such a
> thing, but it won't be clean to have fast_forward specific code and
> also it would be better to add such things only for HEAD.

Agreed.

> Can you think of any better ideas?

No idea. Hmm, there seems to be no reasonable way to fix this issue for back
branches. I've come around to the view that these costs are something that
we should have paid from the beginning.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com



Re: Fix premature xmin advancement during fast forward decoding

From
Amit Kapila
Date:
On Sat, Apr 26, 2025 at 12:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
> On Fri, Apr 25, 2025 at 4:42 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
> >
> > On Fri, Apr 25, 2025 at 10:46 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> > >
> > > What I'm concerned about is the back branches. With this approach all
> > > back branches will have such degradations and it doesn't make sense to
> > > me to optimize SnapBuildCommitTxn() codes in back branches.
> > >
> >
> > One possibility could be that instead of maintaining an entire
> > snapshot in fast_forward mode, we can maintain snapshot's xmin in each
> > ReorderBufferTXN. But then also, how would we get the minimum
> > txns_by_base_snapshot_lsn as we are getting now in
> > ReorderBufferGetOldestXmin? I think we need to traverse the entire
> > list of txns to get it in fast_forward mode but that may not show up
> > because it will not be done for each transaction. We can try such a
> > thing, but it won't be clean to have fast_forward specific code and
> > also it would be better to add such things only for HEAD.
>
> Agreed.
>

We may need to consider handling xmin in SnapBuildCommitTxn(), where
we also set the base snapshot.

> > Can you think of any better ideas?
>
> No idea. Hmm, there seems no reasonable way to fix this issue for back
> branches. I consented to the view that these costs were something that
> we should have paid from the beginning.
>

Right, I feel we should go with the simple change proposed by Hou-San
for now to fix the bug. If, in the future, we encounter any cases
where such optimizations can help for fast-forward mode, then we can
consider it. Does that sound reasonable to you?

--
With Regards,
Amit Kapila.



Re: Fix premature xmin advancement during fast forward decoding

From
Masahiko Sawada
Date:
On Sat, Apr 26, 2025 at 5:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Sat, Apr 26, 2025 at 12:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > On Fri, Apr 25, 2025 at 4:42 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > >
> > > Can you think of any better ideas?
> >
> > No idea. Hmm, there seems no reasonable way to fix this issue for back
> > branches. I consented to the view that these costs were something that
> > we should have paid from the beginning.
> >
>
> Right, I feel we should go with the simple change proposed by Hou-San
> for now to fix the bug. If, in the future, we encounter any cases
> where such optimizations can help for fast-forward mode, then we can
> consider it. Does that sound reasonable to you?

Yes, agreed with this approach.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com



RE: Fix premature xmin advancement during fast forward decoding

From
"Zhijie Hou (Fujitsu)"
Date:
On Sun, Apr 27, 2025 at 2:07 PM Masahiko Sawada wrote:
> 
> On Sat, Apr 26, 2025 at 5:01 AM Amit Kapila <amit.kapila16@gmail.com>
> wrote:
> >
> > On Sat, Apr 26, 2025 at 12:01 AM Masahiko Sawada
> <sawada.mshk@gmail.com> wrote:
> > >
> > > On Fri, Apr 25, 2025 at 4:42 AM Amit Kapila <amit.kapila16@gmail.com>
> wrote:
> > > >
> > > > Can you think of any better ideas?
> > >
> > > No idea. Hmm, there seems no reasonable way to fix this issue for
> > > back branches. I consented to the view that these costs were
> > > something that we should have paid from the beginning.
> > >
> >
> > Right, I feel we should go with the simple change proposed by Hou-San
> > for now to fix the bug. If, in the future, we encounter any cases
> > where such optimizations can help for fast-forward mode, then we can
> > consider it. Does that sound reasonable to you?
> 
> Yes, agreed with this approach.

+1. Thanks for the discussion!

Here is the V2 patch (including both HEAD and back-branch versions),
which merges Amit's suggestions for the comments. It passes the regression
tests and the pgindent check.

I also adjusted the commit message to mention the commit f49a80c4
as suggested by Amit.

Best Regards,
Hou zj

Attachment

Re: Fix premature xmin advancement during fast forward decoding

From
Amit Kapila
Date:
On Sun, Apr 27, 2025 at 12:33 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:
>
> I also adjusted the commit message to mention the commit f49a80c4
> as suggested by Amit.
>

Pushed.

--
With Regards,
Amit Kapila.



Re: Fix premature xmin advancement during fast forward decoding

From
Masahiko Sawada
Date:
On Mon, Apr 28, 2025 at 2:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Sun, Apr 27, 2025 at 12:33 PM Zhijie Hou (Fujitsu)
> <houzj.fnst@fujitsu.com> wrote:
> >
> > I also adjusted the commit message to mention the commit f49a80c4
> > as suggested by Amit.
> >
>
> Pushed.

Thank you!

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com