Re: backup manifests - Mailing list pgsql-hackers

| From | Stephen Frost |
| --- | --- |
| Subject | Re: backup manifests |
| Date | |
| Msg-id | 20200326193711.GX13712@tamriel.snowman.net |
| In response to | Re: backup manifests (Mark Dilger <mark.dilger@enterprisedb.com>) |
| Responses | Re: backup manifests; Re: backup manifests |
| List | pgsql-hackers |
Greetings,

* Mark Dilger (mark.dilger@enterprisedb.com) wrote:
> > On Mar 26, 2020, at 9:34 AM, Stephen Frost <sfrost@snowman.net> wrote:
> > I'm not actually arguing about which hash functions we should support,
> > but rather what the default is and if crc32c, specifically, is actually
> > a reasonable choice. Just because it's fast and we already had an
> > implementation of it doesn't justify its use as the default. Given that
> > it doesn't actually provide the check that is generally expected of
> > CRC checksums (100% detection of single-bit errors) when the file size
> > gets over 512MB makes me wonder if we should have it at all, yes, but it
> > definitely makes me think it shouldn't be our default.
>
> I don't understand your focus on the single-bit error issue.

Maybe I'm wrong, but my understanding was that detecting single-bit errors was one of the primary design goals of CRC and why people talk about CRCs of certain sizes having 'limits'- that's the size at which single-bit errors will no longer, necessarily, be picked up, and therefore that's where the CRC of that size starts falling down on that goal.

> If you are sending your backup across the wire, single bit errors during transmission should already be detected as part of the networking protocol. The real issue has to be detection of the kinds of errors or modifications that are most likely to happen in practice. Which are those? People manually mucking with the files? Bugs in backup scripts? Corruption on the storage device? Truncated files? The more bits in the checksum (assuming a well designed checksum algorithm), the more likely we are to detect accidental modification, so it is no surprise if a 64-bit crc does better than a 32-bit crc. But that logic can be taken arbitrarily far. I don't see the connection between, on the one hand, an analysis of single-bit error detection against file size, and on the other hand, the verification of backups.

We'd like something that does a good job at detecting any differences between when the file was copied off of the server and when the command is run- potentially weeks or months later. I would expect most issues to end up being storage-level corruption over time where the backup is stored, which could be single bit flips or whole pages getting zeroed or various other things. Files changing size is probably one of the less common things, but, sure, that too.

That we could take this "arbitrarily far" is actually entirely fine- that's a good reason to have alternatives, which this patch does have, but that doesn't mean we should have a default that's not suitable for the files that we know we're going to be storing. Consider that we could have used a 16-bit CRC instead, but does that actually make sense? Ok, sure, maybe someone really wants something super fast- but should that be our default? If not, then what criteria should we use for the default?

> From a support perspective, I think the much more important issue is making certain that checksums are turned on. A one in a billion chance of missing an error seems pretty acceptable compared to the, let's say, one in two chance that your customer didn't use checksums. Why are we even allowing this to be turned off? Is there a usage case compelling that option?

The argument is that adding checksums takes more time. I can understand that argument, though I don't really agree with it.
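Just to put a very rough number on that, here's a sketch of a throughput comparison (Python, using zlib.crc32 as a stand-in for crc32c since the real crc32c needs a third-party module, and measuring only in-memory hashing- it ignores disk and network I/O entirely, so the wall-clock impact on an actual backup will generally be smaller than these ratios suggest):

```python
# Rough, single-threaded hashing cost comparison. zlib.crc32 stands in for
# crc32c here, so treat the CRC figure as a lower bound on "cheap checksum"
# speed rather than an exact number for what the server would do.
import hashlib
import time
import zlib

buf = b"\x5a" * (64 * 1024 * 1024)   # 64MB of in-memory data; no I/O involved

def rate(label, fn, passes=8):
    start = time.perf_counter()
    for _ in range(passes):
        fn(buf)
    secs = time.perf_counter() - start
    mb = passes * len(buf) / (1024 * 1024)
    print(f"{label:12s} {mb / secs:8.0f} MB/s")

rate("crc32", zlib.crc32)
rate("sha256", lambda b: hashlib.sha256(b).digest())
```

On typical modern hardware the sha256 side usually still comes in at hundreds of MB/s or better, which is comparable to or above what a lot of backup targets can sustain anyway.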
Certainly a few percent really shouldn't be that big of an issue, and in many cases even a sha256 hash isn't going to have that dramatic of an impact on the actual overall time.

Thanks,

Stephen