Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile README pg_dumpaccounts.sh) - Mailing list pgsql-hackers
From: Ned Lilly
Subject: Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile README pg_dumpaccounts.sh)
Msg-id: 3A01C82F.1030100@greatbridge.com
In response to: Re: Re: [COMMITTERS] pgsql/contrib/pg_dumpaccounts (Makefile README pg_dumpaccounts.sh) (Bruce Momjian <pgman@candle.pha.pa.us>)
List: pgsql-hackers
Well, here in relatively minor form is the First Example of a Great Bridge Priority (which Tom, Bruce, and Jan have all predicted would come... ;-)

Our feeling is that DBAs will want the ability to back up user and group info, which you currently can't do with pg_dump. You *can* do it with pg_dumpall - but only if you dump every database you've got at the same time.

Picture a professional environment where you might have many different databases running 24/7 - doing a pg_dumpall across all of them at once just isn't practical. Most DBAs would prefer to stagger their regular backups in such an environment, one database at a time. Indeed, those backups are often on fixed schedules, at different times, for real business reasons. And if you do that, you can't back up the aforementioned system catalogs.

That's what this pg_dumpaccounts utility is designed to do. As you've seen, it's very simple - it does the same COPY work that pg_dumpall does before calling pg_dump, just without the pg_dump. It's an inelegant solution, and shame on us for not catching the problem sooner. But it *is* a problem, albeit perhaps one that current PostgreSQL users haven't run into yet.

We're concerned that people might have a false sense of security with pg_dump - that they might think that if they back up one database, they're able to do a full restore. They're not. And as I said, there are situations where pg_dumpall isn't the appropriate solution.

We recognize this is a temporary hack - and fully expect it to go away in 7.1. We actually think that the final solution might be more appropriate in pg_dump itself than in pg_dumpall, but that's obviously a much more breakable proposition (hence the separate utility).

I understand everyone's hesitation about adding a new utility this late in the process - and we're happy to be overruled on that (even if it's a discrete piece of code that wouldn't affect anything else...). I'm not wild about putting it in /contrib, but if that's what everyone wants to do, ok.

Have we adequately explained the need for this? Or do people think it's not necessary? If it *is* necessary (or at least worthwhile), is it the consensus of the -hackers community that it go in /contrib?

Thanks,
Ned

Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
>> I think the issue is that we don't want to risk breaking pg_dumpall in a
>> minor release.
>
> No we don't, but I agree with Peter that pg_dumpall is the place for
> this feature in the long run.  A separate contrib script is not going
> to get maintained.
>
> What I want to know is why we are adding features at all in a minor
> release.  Especially 24 or so hours before release, when there is
> certainly no time for any testing worthy of the name.  Contrib or no
> contrib, I think this is a bad idea and a bad precedent.
>
> 			regards, tom lane

--
----------------------------------------------------
Ned Lilly                        e: ned@greatbridge.com
Vice President                   w: www.greatbridge.com
Evangelism / Hacker Relations    v: 757.233.5523
Great Bridge, LLC                f: 757.233.5555
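For anyone who hasn't opened the script, here's a rough sketch of the idea - not the contrib script verbatim, and it glosses over connection options and the cleanup of pre-existing rows that a real reload would need. It's just the shape of the COPY calls that pg_dumpall already makes against template1:

    #!/bin/sh
    # Sketch only, not the actual pg_dumpaccounts.sh.  Emits a SQL script
    # that reloads the global user and group definitions - the same kind
    # of COPY work the pg_dumpall shell script does before it runs
    # pg_dump for each database.  Assumes a superuser connection to
    # template1.

    printf '%s\n' 'COPY pg_shadow FROM stdin;'
    psql -q -d template1 -c 'COPY pg_shadow TO stdout;'
    printf '%s\n' '\.'

    printf '%s\n' 'COPY pg_group FROM stdin;'
    psql -q -d template1 -c 'COPY pg_group TO stdout;'
    printf '%s\n' '\.'

Redirect that output to a file, and feeding the file back through psql recreates the users and groups (modulo the cleanup caveat above).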