Re: Add the ability to limit the amount of memory that can be allocated to backends. - Mailing list pgsql-hackers

From Andrei Lepikhov
Subject Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date
Msg-id 268e0ac7-8a81-4d65-8b40-b62c4b3f1bf9@postgrespro.ru
In response to Re: Add the ability to limit the amount of memory that can be allocated to backends.  (Andrei Lepikhov <a.lepikhov@postgrespro.ru>)
Responses Re: Add the ability to limit the amount of memory that can be allocated to backends.
List pgsql-hackers
On 29/9/2023 09:52, Andrei Lepikhov wrote:
> On 22/5/2023 22:59, reid.thompson@crunchydata.com wrote:
>> Attached patches updated to master.
>> Pulled a change that was also pertinent to patch 1 from patch 2 back
>> into patch 1.
> +1 to the idea, have doubts on the implementation.
> 
> I have a question. I see that the feature raises an ERROR when the 
> memory limit is exceeded. The enclosing PG_CATCH() section will then 
> handle the error. As I see it, many such sections allocate memory 
> themselves. What if some routine, like CopyErrorData(), exceeds the 
> limit too? In that case, we could keep repeating the error all the way 
> up to the top PG_CATCH(). Is this the intended behaviour? Maybe 
> exceeds_max_total_bkend_mem() should check for recursion and allow 
> error handlers to slightly exceed this hard limit?
With the patch in the attachment I try to show the sort of problems I'm 
worried about. In some PG_CATCH() sections we call CopyErrorData() (which 
allocates memory) before aborting the transaction. So an allocation error 
can throw us out of that section before the transaction is aborted. We 
expect a soft ERROR message but end up with much harder consequences.
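
To make the failure mode concrete, here is a minimal sketch of the usual
backend error-handling pattern the concern applies to. It is not taken from
the attached patch; do_some_work() and run_with_error_handler() are
placeholders of mine.

#include "postgres.h"
#include "access/xact.h"

static void do_some_work(void);     /* placeholder; may hit the limit and ERROR */

static void
run_with_error_handler(void)
{
    MemoryContext oldcontext = CurrentMemoryContext;

    PG_TRY();
    {
        do_some_work();
    }
    PG_CATCH();
    {
        ErrorData  *edata;

        MemoryContextSwitchTo(oldcontext);

        /*
         * CopyErrorData() palloc()s a copy of the error.  If the backend
         * is still over the memory limit, this allocation raises a new
         * ERROR and control leaves this PG_CATCH() block ...
         */
        edata = CopyErrorData();
        FlushErrorState();

        AbortCurrentTransaction(); /* ... so we may never get here */

        /* report or log edata as needed, then release it */
        FreeErrorData(edata);
    }
    PG_END_TRY();
}

If exceeds_max_total_bkend_mem() noticed that we are already inside error
recovery and allowed a small overrun there, a handler like the one above
could finish aborting the transaction instead of failing again halfway
through.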

-- 
regards,
Andrey Lepikhov
Postgres Professional

Attachment
