Re: BUG #17823: Generated columns not always updated correctly - Mailing list pgsql-bugs
| From | Tom Lane |
|---|---|
| Subject | Re: BUG #17823: Generated columns not always updated correctly |
| Date | |
| Msg-id | 3402993.1678120529@sss.pgh.pa.us |
| In response to | BUG #17823: Generated columns not always updated correctly (PG Bug reporting form <noreply@postgresql.org>) |
| Responses | Re: BUG #17823: Generated columns not always updated correctly |
| List | pgsql-bugs |
PG Bug reporting form <noreply@postgresql.org> writes:
> I found that the generated columns are sometimes not updated.
Yeah. Looking into nodeModifyTable.c, we miss re-doing
ExecComputeStoredGenerated when looping back after an EPQ update
(which is what this case is). I see that we also fail to redo that
after a cross-partition move, which is a bug since 8bf6ec3ba.
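For illustration, the kind of two-session sequence that reaches the EPQ path under READ COMMITTED looks roughly like this (the table and values are invented for the example, not taken from the report):

```sql
-- Illustrative sketch only: b should always equal a * 2.
CREATE TABLE gtest (
    id int PRIMARY KEY,
    a  int,
    b  int GENERATED ALWAYS AS (a * 2) STORED
);
INSERT INTO gtest (id, a) VALUES (1, 1);

-- Session 1 (READ COMMITTED, the default):
BEGIN;
UPDATE gtest SET a = 10 WHERE id = 1;

-- Session 2, concurrently; blocks on session 1's row lock:
UPDATE gtest SET a = a + 1 WHERE id = 1;

-- Session 1:
COMMIT;

-- Session 2's UPDATE now re-evaluates against the committed row
-- (EvalPlanQual) and stores a = 11, but without recomputing b,
-- so b need not equal a * 2 afterwards:
SELECT * FROM gtest;
```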
The attached seems to be enough to fix it, but I want to also devise
an isolation test for these cases ...
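Just to sketch what such a spec could look like (the step names, layout, and final check are guesses on my part, not necessarily what will get committed):

```
# Rough isolation-spec sketch, not the actual committed test.
setup
{
  CREATE TABLE gtest (id int PRIMARY KEY, a int,
                      b int GENERATED ALWAYS AS (a * 2) STORED);
  INSERT INTO gtest (id, a) VALUES (1, 1);
}

teardown
{
  DROP TABLE gtest;
}

session s1
step s1b  { BEGIN; }
step s1u  { UPDATE gtest SET a = 10 WHERE id = 1; }
step s1c  { COMMIT; }

session s2
step s2u  { UPDATE gtest SET a = a + 1 WHERE id = 1; }
step s2s  { SELECT * FROM gtest; }

# s2u blocks behind s1u and goes through EvalPlanQual once s1 commits;
# afterwards s2s should show b equal to a * 2 (a = 11, b = 22).
permutation s1b s1u s2u s1c s2s
```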
regards, tom lane
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 6f0543af83..6d9c0b0806 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -1910,9 +1910,10 @@ ExecUpdatePrologue(ModifyTableContext *context, ResultRelInfo *resultRelInfo,
}
/*
- * ExecUpdatePrepareSlot -- subroutine for ExecUpdate
+ * ExecUpdatePrepareSlot -- subroutine for ExecUpdateAct
*
* Apply the final modifications to the tuple slot before the update.
+ * (This is split out because we also need it in the foreign-table code path.)
*/
static void
ExecUpdatePrepareSlot(ResultRelInfo *resultRelInfo,
@@ -1962,13 +1963,14 @@ ExecUpdateAct(ModifyTableContext *context, ResultRelInfo *resultRelInfo,
updateCxt->crossPartUpdate = false;
/*
- * If we generate a new candidate tuple after EvalPlanQual testing, we
- * must loop back here and recheck any RLS policies and constraints. (We
- * don't need to redo triggers, however. If there are any BEFORE triggers
- * then trigger.c will have done table_tuple_lock to lock the correct
- * tuple, so there's no need to do them again.)
+ * If we move the tuple to a new partition, we loop back here to recompute
+ * GENERATED values (which are allowed to be different across partitions)
+ * and recheck any RLS policies and constraints. We do not fire any
+ * BEFORE triggers of the new partition, however.
*/
lreplace:
+ /* Fill in GENERATEd columns */
+ ExecUpdatePrepareSlot(resultRelInfo, slot, estate);
/* ensure slot is independent, consider e.g. EPQ */
ExecMaterializeSlot(slot);
@@ -2268,6 +2270,7 @@ ExecUpdate(ModifyTableContext *context, ResultRelInfo *resultRelInfo,
}
else if (resultRelInfo->ri_FdwRoutine)
{
+ /* Fill in GENERATEd columns */
ExecUpdatePrepareSlot(resultRelInfo, slot, estate);
/*
@@ -2290,9 +2293,13 @@ ExecUpdate(ModifyTableContext *context, ResultRelInfo *resultRelInfo,
}
else
{
- /* Fill in the slot appropriately */
- ExecUpdatePrepareSlot(resultRelInfo, slot, estate);
-
+ /*
+ * If we generate a new candidate tuple after EvalPlanQual testing, we
+ * must loop back here to try again. (We don't need to redo triggers,
+ * however. If there are any BEFORE triggers then trigger.c will have
+ * done table_tuple_lock to lock the correct tuple, so there's no need
+ * to do them again.)
+ */
redo_act:
result = ExecUpdateAct(context, resultRelInfo, tupleid, oldtuple, slot,
canSetTag, &updateCxt);
@@ -2876,7 +2883,6 @@ lmerge_matched:
result = TM_Ok;
break;
}
- ExecUpdatePrepareSlot(resultRelInfo, newslot, context->estate);
result = ExecUpdateAct(context, resultRelInfo, tupleid, NULL,
newslot, false, &updateCxt);
if (result == TM_Ok && updateCxt.updated)