From: Tom Lane
Subject: Re: Per-database and per-user GUC settings
Msg-id: 2744.1012491938@sss.pgh.pa.us
In response to: Re: Per-database and per-user GUC settings (Peter Eisentraut <peter_e@gmx.net>)
List: pgsql-hackers
I've thought of some issues that I think will need to be addressed before per-database/per-user GUC settings can become useful.

One thing that's bothered me for a while is that GUC doesn't retain any memory of how a variable acquired its present value. It tries to resolve conflicts between different sources of values just by processing the sources in "the right order". However, this cannot work in general. Some examples:

1. postgresql.conf contains a setting for some variable, say sort_mem=1000. The DBA starts the postmaster with a command-line option to override the variable, say --sort_mem=2000. Works fine, until he SIGHUPs the postmaster for some unrelated reason, at which point sort_mem snaps back to 1000.

2. A user starts a session and says SET sort_mem=2000. Again, he successfully overrides the postgresql.conf value ... but only as long as he doesn't get SIGHUP'd.

These problems will get very substantially worse once we add per-database and per-user GUC settings to the set of possible value sources.

I believe the correct fix is for GUC to define a prioritized list of value sources (similar to the existing PGC_ settings, but probably not quite the same) and to remember which source gave the current setting of each variable. Comparing that to the source of a would-be new value tells you whether to accept or ignore the new value. This would make GUC processing order-insensitive, which would be a considerable improvement (e.g., I think you could get rid of the ugly double-scan-of-options hack in postmaster.c).

Another thought: DBAs will probably expect that if they change per-database/per-user GUC settings, they can SIGHUP to make existing backends take on those settings. Can we support this? If the HUP is received outside any transaction, then I guess we could start a temporary transaction to read the tables involved. If we try to process HUP at a command boundary inside a transaction, then we risk aborting the whole user's transaction if there's an error. Arguably HUP should not be accepted while a transaction is in progress anyway, so the simplest answer might be to not process HUP until we are at the idle loop and there's no open transaction block.

The whole subject of reacting to errors in the per-database/per-user GUC settings needs more thought, too. Worst-case scenario: the superuser messes up his own per-user GUC settings to the point that he can't log in anymore. Can we provide an escape hatch, or is he looking at an initdb situation (without even the chance to run pg_dump first :-()? I think the GUC code presently tries to avoid any elog while processing postgresql.conf, so that it won't be the cause of backend startup failures, but I'm not convinced that that approach scales. Certainly if we are reading tables we cannot absolutely guarantee no elog.

regards, tom lane