Re: [COMMITTERS] pgsql: Introduce dynamic shared memory areas. - Mailing list pgsql-hackers
| From | Robert Haas |
|---|---|
| Subject | Re: [COMMITTERS] pgsql: Introduce dynamic shared memory areas. |
| Msg-id | CA+TgmoYORfcZcTehFuV2VBz9AYN0=P7g15GwcDQaHwf9QLqiYA@mail.gmail.com |
| In response to | Re: [COMMITTERS] pgsql: Introduce dynamic shared memory areas. (Tom Lane <tgl@sss.pgh.pa.us>) |
| Responses | Re: [COMMITTERS] pgsql: Introduce dynamic shared memory areas. |
| List | pgsql-hackers |
On Mon, Dec 5, 2016 at 1:08 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Mon, Dec 5, 2016 at 12:41 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Given your later argumentation, I wonder why we're trying to implement
>>> any kind of limit at all, rather than just operating on the principle
>>> that it's the kernel's problem to enforce a limit.  In short, maybe
>>> removing max_total_segment_size would do fine.
>
>> Well, if we did that, then we'd have to remove dsa_set_size_limit().
>> I don't want to do that, because I think it's useful for the calling
>> code to be able to ask this code to enforce a limit that may be less
>> than the point at which allocations would start failing.  We do that
>> sort of thing all the time (e.g. work_mem, max_locks_per_transaction)
>> for good reasons.  Let's not re-engineer this feature now on the
>> strength of "it produces a compiler warning".  I think the easiest
>> thing to do here is change SIZE_MAX to (Size) -1.  If there are deeper
>> problems that need to be addressed, we can consider those separately.
>
> Well, that would solve my immediate problem, which is to be able to
> finish the testing Heikki requested on gaur/pademelon.  But I am not
> terribly impressed with the concept here.  For example, this bit:
>
>     /*
>      * If the total size limit is already exceeded, then we exit early and
>      * avoid arithmetic wraparound in the unsigned expressions below.
>      */
>     if (area->control->total_segment_size >=
>         area->control->max_total_segment_size)
>         return NULL;
>
> is a no-op, because it's physically impossible to exceed SIZE_MAX, and
> that leads me to wonder whether the claim that this test helps prevent
> arithmetic overflow has any substance.

Eh?  Sure, if no limit has been set via dsa_set_size_limit(), then that
test is a no-op.  But if one has been set, then it isn't a no-op.  Which
is of course the point.
> Also, given that the idea of
> DSA seems to be to support a number of different use-cases, I'm not
> sure that it's useful to have a knob that limits the total consumption
> rather than individual areas.  IOW: who exactly is going to call
> dsa_set_size_limit, and on what grounds would they compute the limit?

The code that creates the DSA is going to call it.  For example, suppose
we change the stats collector or pg_stat_statements to use this as a
backing store.  Or we modify the heavyweight lock manager to use DSM so
that the maximum size of the lock table can be changed without
restarting the server.  Or we implement a shared plan cache.  In any of
those cases, there will (presumably) be a GUC that limits how much
memory the relevant facility is allowed to chew up, and that could be
implemented by calling dsa_set_size_limit() to establish a limit
matching the value of that GUC.  Now maybe we'll never do any of those
things or maybe we'll implement the limit entirely on the caller side,
but I don't care to presume that.

Also, we could use this facility to manage memory mapped at fixed
addresses within the main shared memory segment, if we wanted to be
able to allocate and free memory more flexibly in that arena.  If we
did that, we'd use dsa_set_size_limit() to prevent that DSA from
creating additional DSMs, so that it only managed the fixed chunk of
address space originally given to it.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company