Re: max_locks_per_transactions ... - Mailing list pgsql-hackers

From Simon Riggs
Subject Re: max_locks_per_transactions ...
Date
Msg-id 1170329621.3681.508.camel@silverbirch.site
Whole thread Raw
In response to max_locks_per_transactions ...  (Hans-Juergen Schoenig <postgres@cybertec.at>)
Responses Re: max_locks_per_transactions ...
List pgsql-hackers
On Thu, 2007-02-01 at 09:15 +0100, Hans-Juergen Schoenig wrote:
> Right now max_locks_per_transaction defines the average number of locks 
> taken by a transaction. Thus, shared memory is limited to 
> max_locks_per_transaction * (max_connections + max_prepared_transactions).
> This is basically perfect. However, recently we have seen a couple of 
> people having trouble with this. Partitioned tables are becoming more 
> and more popular, so it is very likely that a single transaction can eat 
> up a great deal of shared memory. Some people with a lot of data 
> create daily tables. If done for 3 years, we have already lost ~1000 locks per 
> inheritance structure.
> 
> I wonder if it would make sense to split max_locks_per_transaction into 
> two variables: max_locks (global size) and max_transaction_locks (local 
> size). If set properly, this would prevent "good" short-running 
> transactions from running out of shared memory when some "evil" long-running 
> transactions start to suck up shared memory.
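For reference, the sizing formula quoted above works out as follows. This is a sketch using the stock default parameter values (assumed, not stated in the original message):

```python
# Sketch of the shared lock-table sizing quoted above.
# The parameter values are the stock defaults (an assumption on my
# part, not values from the original message).
max_locks_per_transaction = 64
max_connections = 100
max_prepared_transactions = 5

# Total lock slots available in shared memory, per the quoted formula:
total_lock_slots = max_locks_per_transaction * (
    max_connections + max_prepared_transactions
)
print(total_lock_slots)  # 6720

# Daily partitions accumulated over ~3 years: a query that scans the
# whole inheritance tree needs roughly one lock per child table, so a
# single such transaction consumes a sixth of the table by itself.
partitions = 3 * 365
print(partitions)  # 1095
```

This makes the complaint concrete: a handful of concurrent full-tree scans over a multi-year daily-partition setup can exhaust the shared lock table even though each backend's average is well within bounds.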

Do partitioned tables use a lock even when they are removed from the
plan as a result of constraint_exclusion? I thought not. So you have
lots of concurrent multi-partition scans.

I'm not sure I understand your suggestion. It sounds like you want to
limit the number of locks an individual backend can take, which simply
makes the partitioned queries fail, no?

Perhaps we should just set the default higher?
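Raising the default would amount to something like the following in postgresql.conf (the value shown is illustrative, not a recommendation from the thread):

```ini
# postgresql.conf -- illustrative value only, not endorsed by this thread.
# The default is 64; changing it requires a server restart and grows the
# shared-memory lock table proportionally.
max_locks_per_transaction = 256
```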

-- 
Simon Riggs
EnterpriseDB   http://www.enterprisedb.com



