Re: Duplicate history file? - Mailing list pgsql-hackers
From | Stephen Frost
Subject | Re: Duplicate history file?
Date |
Msg-id | 20210615153309.GY20766@tamriel.snowman.net
In response to | Re: Duplicate history file? (Kyotaro Horiguchi <horikyota.ntt@gmail.com>)
Responses | Re: Duplicate history file?
List | pgsql-hackers
Greetings,

* Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:
> At Fri, 11 Jun 2021 16:08:33 +0900, Michael Paquier <michael@paquier.xyz> wrote in
> > On Fri, Jun 11, 2021 at 03:32:28PM +0900, Kyotaro Horiguchi wrote:
> > > I think cp can be an example as far as we explain the limitations. (On
> > > the other hand "test !-f" cannot since it actually prevents server
> > > from working correctly.)
> >
> > Disagreed.  I think that we should not try to change this area until
> > we can document a reliable solution, and a simple "cp" is not that.
>
> Isn't removing cp from the documentation a change in this area?  I
> basically agree to not to change anything but the current example
> "test ! -f <fn> && cp .." and relevant description has been known to
> be problematic in a certain situation.

[...]

> - Write the full (known) requirements and use a pseudo tool-name in
>   the example?

I'm generally in favor of just using a pseudo tool-name and then perhaps
providing a link to a new place on .Org where people can ask to have
their PG backup solution listed, or something along those lines.

> - provide a minimal implement of the command?

Having been down this road for a rather long time, I can't accept this
as a serious suggestion.  No, not even with Perl.  Been there, done
that, not going back.

> - recommend some external tools (that we can guarantee that they
>   comform the requriements)?

The requirements are things which are learned over years and change over
time.  Trying to document them and keep up with them would be a pretty
serious project all on its own.  There are external projects that spend
serious time and energy doing their best to provide the tooling needed
here, and we should be promoting those, not pretending that this is a
simple thing which anyone could write a short perl script to accomplish.

> - not recommend any tools?

This is the approach that has been tried, and it has, objectively,
failed miserably.
Our users are ending up with invalid and unusable backups, corrupted WAL
segments, inability to use PITR, and various other issues because we've
been trying to pretend that this isn't a hard problem.  We really need
to stop that, accept that it's hard, and promote the tools which have
been explicitly written to address that hard problem.

> > Hmm.  A simple command that could be used as reference is for example
> > "dd" that flushes the file by itself, or we could just revisit the
> > discussions about having a pg_copy command, or we could document a
> > small utility in perl that does the job.
>
> I think we should do that if pg_copy comforms the mandatory
> requirements but maybe it's in the future. Showing the minimal
> implement in perl looks good.

Already tried doing it in perl.  No, it's not simple, and it's also
entirely vaporware today; it implies that we're going to develop this
tool, improve it in the future as we realize it needs to be improved,
and maintain it as part of core forever.

If we want to actually adopt and pull in a backup tool to be part of
core, then we should talk about things which actually exist, such as the
various existing projects that have been written specifically to address
all the requirements which are understood today, not say "well, we can
just write a simple perl script to do it", because it's not actually
that simple.  Providing yet another half solution would be doubling down
on the failed approach of documenting a "simple" solution, and would be
a disservice to our users.

Thanks,

Stephen
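[Editorial note, not part of the original mail: the durability concern the thread keeps returning to can be made concrete.  The sketch below, in Python and entirely hypothetical (`archive_wal` is an illustrative name, not any real tool), shows the minimum that even a "simple" archive copy must do beyond plain `cp`: refuse to overwrite an existing segment, copy to a temporary name, fsync the file, rename atomically, and fsync the containing directory.  The thread's point stands that a real tool needs far more than this.]

```python
# Hypothetical sketch only: illustrates why a bare "cp" is not a safe
# archive_command -- a crash can leave a partially written file, or a
# fully written file that never reached stable storage.
import os
import shutil
import sys


def archive_wal(src: str, dest: str) -> None:
    # Refuse to overwrite an already-archived segment (the role the
    # documented "test ! -f" check plays).
    if os.path.exists(dest):
        raise FileExistsError(f"{dest} already archived")

    # Copy under a temporary name so a crash never leaves a partial
    # file under the final name.
    tmp = dest + ".tmp"
    shutil.copyfile(src, tmp)

    # Flush the file contents to stable storage; this is exactly what
    # plain "cp" omits.
    with open(tmp, "rb") as f:
        os.fsync(f.fileno())

    # Atomically move into place, then fsync the containing directory
    # so the rename itself is durable.
    os.rename(tmp, dest)
    dirfd = os.open(os.path.dirname(dest) or ".", os.O_RDONLY)
    try:
        os.fsync(dirfd)
    finally:
        os.close(dirfd)


if __name__ == "__main__" and len(sys.argv) == 3:
    archive_wal(sys.argv[1], sys.argv[2])
```

Even this sketch omits things a production tool must handle (comparing contents when the destination exists, ENOSPC handling, retries, and so on), which is the thread's argument for pointing users at dedicated backup tools rather than documenting another "simple" command.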