Re: Re: BUG #12990: Missing pg_multixact/members files (appears to have wrapped, then truncated) - Mailing list pgsql-bugs
From: Kevin Grittner
Subject: Re: Re: BUG #12990: Missing pg_multixact/members files (appears to have wrapped, then truncated)
Date:
Msg-id: 824444825.729337.1430865378314.JavaMail.yahoo@mail.yahoo.com
In response to: Re: Re: BUG #12990: Missing pg_multixact/members files (appears to have wrapped, then truncated) (Thomas Munro <thomas.munro@enterprisedb.com>)
Responses: Re: Re: BUG #12990: Missing pg_multixact/members files (appears to have wrapped, then truncated)
List: pgsql-bugs
Thomas Munro <thomas.munro@enterprisedb.com> wrote:

> On Wed, May 6, 2015 at 9:26 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Tue, May 5, 2015 at 3:58 AM, Thomas Munro
>> <thomas.munro@enterprisedb.com> wrote:
>>> Ok, so if you have autovacuum_freeze_max_age = 400 million multixacts
>>> before wraparound vacuum, which is ~10% of 2^32, we would interpret
>>> that to mean 400 million multixacts OR ~10% * some_constant of member
>>> space, in other words autovacuum_freeze_max_age * some_constant
>>> members, whichever comes first. But what should some_constant be?
>>
>> some_constant should be all the member space there is. So we trigger
>> autovac if we've used more than ~10% of the offsets OR more than ~10%
>> of the members. Why is autovacuum_multixact_freeze_max_age
>> configurable in the first place? It's configurable so that you can set
>> it low enough that wraparound scans complete and advance the minmxid
>> before you hit the wall, but high enough to avoid excessive scanning.
>> The only problem is that it only lets you configure the amount of
>> headroom you need for offsets, not members. If you squint at what I'm
>> proposing the right way, it's essentially that that GUC should control
>> both of those things.
>
> But member space *always* grows at least twice as fast as offset space
> (aka active multixact IDs), because multixacts always have at least 2
> members (except in some rare cases IIUC), don't they? So if we do
> what you just said, then we'll trigger wraparound vacuums twice as
> soon as we do now for everybody, even people who don't have any
> problem with member space management. We don't want this patch to
> change anything for most people, let alone everyone.

That, I think, is what has been driving this patch away from simply treating the *_multixact_* settings as applying to both the members SLRU and the offsets SLRU; that would effectively just change the monitored resource from one to the other. (We would probably want to use the max of the two, just to be safe, but then offsets might never actually be the trigger.) As Thomas says, that would be a big change for everyone, and not everyone necessarily *wants* their existing settings to take on new and different meanings.

> So I think that some_constant should be at least 2, if we try to do it
> this way; in other words, if you set the GUC for 10% of offset space,
> we also start triggering wraparounds at 20% of member space.

But what if they configure it to start at 80% (which I *have* seen people do)?

The early patches were a heuristic intended to preserve current behavior for those not getting into trouble, while gradually ramping up aggressiveness as needed to prevent hitting the hard ERROR that now prevents wraparound. Perhaps, rather than reducing the threshold gradually, we could, as the members SLRU approaches wraparound, gradually shift from offsets to members as the number we compare the threshold to: up to 25% of maximum members (or whenever offsets are somehow larger), we just use offsets; above 75% of maximum members, we use members; in between, we use a weighted average based on how far we are between 25% and 75%. It's kinda weird, but I think it gives us a reasonable way to ramp up vacuum aggressiveness from what we currently do toward what Robert proposed, based on whether the workload is causing things to head for trouble. (A sketch of that ramp appears below.)

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
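To make the proposed ramp concrete, here is a minimal standalone sketch in C, the language PostgreSQL is written in. It is not actual PostgreSQL source: the function name `effective_multixact_usage`, the constants, and the example numbers are all hypothetical, chosen only to illustrate the 25%/75% blend described above, assuming both SLRUs are measured as fractions of a 32-bit address space.

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical sketch of the blended wraparound-vacuum trigger discussed
 * above -- not actual PostgreSQL source.  Both the offsets SLRU and the
 * members SLRU address a 32-bit space, so each usage is expressed as a
 * fraction of 2^32.
 */
#define MXACT_SPACE ((double) UINT32_MAX + 1.0)

/* Below 25% member usage, behave exactly as today (offsets only). */
#define RAMP_START 0.25
/* Above 75% member usage, members alone drive the trigger. */
#define RAMP_END   0.75

static double
effective_multixact_usage(uint32_t used_offsets, uint64_t used_members)
{
    double offsets_frac = used_offsets / MXACT_SPACE;
    double members_frac = used_members / MXACT_SPACE;
    double weight;

    /*
     * Up to 25% of member space, or when offsets are somehow larger
     * anyway, keep the current behavior and track offsets alone.
     */
    if (members_frac <= RAMP_START || offsets_frac >= members_frac)
        return offsets_frac;

    /* Past 75% of member space, members alone drive the trigger. */
    if (members_frac >= RAMP_END)
        return members_frac;

    /* In between, blend: weight rises linearly from 0 at 25% to 1 at 75%. */
    weight = (members_frac - RAMP_START) / (RAMP_END - RAMP_START);
    return (1.0 - weight) * offsets_frac + weight * members_frac;
}

int
main(void)
{
    /*
     * Example: autovacuum_multixact_freeze_max_age = 400 million,
     * i.e. a trigger threshold of ~9.3% of the 2^32 space.
     */
    double threshold = 400000000 / MXACT_SPACE;

    /* A member-heavy workload: few multixacts, but many members each. */
    uint32_t offsets = 100000000;            /* ~2.3% of offset space */
    uint64_t members = UINT64_C(2500000000); /* ~58% of member space  */

    double usage = effective_multixact_usage(offsets, members);

    printf("effective usage = %.3f, trigger wraparound vacuum: %s\n",
           usage, usage >= threshold ? "yes" : "no");
    return 0;
}
```

With these example numbers the offsets alone are nowhere near the 400-million trigger, but the blended value (about 0.394) crosses it, so a workload burning member space quickly gets an earlier wraparound vacuum, while a workload whose member usage stays under 25% sees exactly today's behavior.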