Re: Prevent internal error at concurrent CREATE OR REPLACE FUNCTION

From: Daniil Davydov
Subject: Re: Prevent internal error at concurrent CREATE OR REPLACE FUNCTION
Date:
Msg-id: CAJDiXghQBHzOC5Ez98nZdzrEt99Dpocy5kZxiqtOZx_HzW1Wtw@mail.gmail.com
In response to: Prevent internal error at concurrent CREATE OR REPLACE FUNCTION (Yugo Nagata <nagata@sraoss.co.jp>)
List: pgsql-hackers
Hi,

On Thu, Jul 3, 2025 at 9:18 PM Yugo Nagata <nagata@sraoss.co.jp> wrote:
>
> On Tue, 1 Jul 2025 18:56:11 +0700
> Daniil Davydov <3danissimo@gmail.com> wrote:
>
> > CatalogIndexInsert is quite a "popular" function. It is called in
> > many different situations, and not every unique constraint violation
> > in it is caused by concurrency.
> >
> > For example, with this patch the query "CREATE TYPE mood AS ENUM
> > ('happy', 'sad', 'happy');" will throw the error "operation failed
> > due to a concurrent command", which of course is not true.
>
> You're right — this error is not caused by a concurrent command.
> However, I believe the error message in cases like creating an ENUM type with
> duplicate labels could be improved to explain the issue more clearly, rather
> than just reporting it as a unique constraint violation.
>
> In any case, a unique constraint violation in a system catalog is not necessarily
> due to concurrent DDL. Therefore, the error message shouldn't suggest that as the
> only cause. Instead, it should clearly report the constraint violation as the primary
> issue, and mention concurrent DDL as just one possible explanation in a HINT.
>
> I've updated the patch accordingly to reflect this direction in the error message.
>
>  ERROR:  operation failed due to duplicate key object
>  DETAIL:  Key (proname, proargtypes, pronamespace)=(fnc, , 2200) already exists in unique index pg_proc_proname_args_nsp_index.
>  HINT:  Another command might have created an object with the same key in a concurrent session.
>
> However, as a result, the message ends up being similar to the current one raised
> by the btree code, so the overall improvement in user-friendliness might be limited.
>

Thanks for updating the patch!
+1 for adding such a hint for this error.
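
For reference, the new message shape could be produced with an ereport()
call along these lines (just a sketch, not the patch's actual code; the
key_desc variable is illustrative):

    ereport(ERROR,
            (errcode(ERRCODE_UNIQUE_VIOLATION),
             errmsg("operation failed due to duplicate key object"),
             /* key_desc stands in for the text built from the index columns */
             errdetail("Key %s already exists in unique index \"%s\".",
                       key_desc, RelationGetRelationName(indexRelation)),
             errhint("Another command might have created an object with "
                     "the same key in a concurrent session.")));

For comparison, the message currently raised by the btree code is
"duplicate key value violates unique constraint \"%s\"", so the two are
indeed close in content.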

> > That is why I suggested handling unique violations inside
> > ProcedureCreate itself - the only place where we can be sure about
> > the reason for the error.
>
> If we were to fix the error message outside of CatalogIndexInsert, we would need to
> modify CatalogTupleInsert, CatalogTupleUpdate, and related functions to allow them to
> report the failure appropriately.
>
> You suggested using PG_TRY/PG_CATCH, but these do not suppress the error message from
> the btree code, so this approach seems not to fully address the issue.
>
> Moreover, the affected places are not limited to ProcedureCreate. For example,
> concurrent CREATE TABLE commands can also lead to the same situation, and possibly
> other commands as well.

Actually, we can suppress the error from btree (by flushing the error
state and raising a new error), but that doesn't look like the best
design decision. It was an idea for fixing one concrete error. Anyway,
I like the correction you suggested better.
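
To make that idea concrete, the pattern I had in mind looks roughly like
this (a minimal sketch, assuming it runs inside ProcedureCreate; rel,
tup and the chosen error codes are illustrative):

    MemoryContext oldcontext = CurrentMemoryContext;

    PG_TRY();
    {
        CatalogTupleInsert(rel, tup);
    }
    PG_CATCH();
    {
        ErrorData  *edata;

        /* Copy the error out of ErrorContext, then clear the error state. */
        MemoryContextSwitchTo(oldcontext);
        edata = CopyErrorData();
        FlushErrorState();

        if (edata->sqlerrcode == ERRCODE_UNIQUE_VIOLATION)
            ereport(ERROR,
                    (errcode(ERRCODE_DUPLICATE_FUNCTION),
                     errmsg("function \"%s\" already exists",
                            procedureName),
                     errhint("Another session may have created it "
                             "concurrently.")));

        /* Any other error is propagated unchanged. */
        ReThrowError(edata);
    }
    PG_END_TRY();

As you noted, this replaces the btree message only for this one call
site, which is why the centralized fix looks better.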


--
Best regards,
Daniil Davydov


