Thread: [GENERAL] Postgres backup solution
On 3/14/2017 12:31 PM, Lawrence Cohan wrote:
Subject: Postgres backup solution
From: Lawrence Cohan <LCohan@web.com>
Date: 3/14/2017 12:31 PM
To: "pgsql-general@postgresql.org" <pgsql-general@postgresql.org>
was there supposed to be a question or statement or something here?
-- john r pierce, recycling bits in santa cruz
Your message is not displaying. At least not for me. I guess that my reader does not understand the "smime.p7m" file, which shows as an attachment. For others, his question is:
=== original question from Lawrence Cohan ===
Yes, this is what I intended to ask:
What would be a recommended solution for backing up a very large Postgres
(~13 TB) database in order to protect against data deletion/corruption?
Our current setup only backs up/restores to a standby read-only Postgres
server via AWS S3 using wal-e; however, this does not offer the comfort of
keeping a full backup available in case we need to restore some deleted or
corrupted data.
Thanks,
Lawrence Cohan
===
On Tue, Mar 14, 2017 at 2:57 PM, Lawrence Cohan <LCohan@web.com> wrote:
---------- Forwarded message ----------
From: Lawrence Cohan <LCohan@web.com>
To: John R Pierce <pierce@hogranch.com>, "pgsql-general@postgresql.org" <pgsql-general@postgresql.org>
Date: Tue, 14 Mar 2017 15:57:39 -0400
Subject: RE: [GENERAL] Postgres backup solution
"Irrigation of the land with seawater desalinated by fusion power is ancient. It's called 'rain'." -- Michael McClary, in alt.fusion
Maranatha! <><
John McKown
Lawrence,

First off, I strongly recommend that you figure out how to send regular plain-text emails, at least to this mailing list, as the whole "winmail.dat" thing is going to throw people off and you're unlikely to get many responses because of it.

Regarding your question..

* Lawrence Cohan (LCohan@web.com) wrote:
> What would be a recommended solution for backing up a very large Postgres
> (~13 TB) database in order to protect against data deletion/corruption?
> Our current setup only backs up/restores to a standby read-only Postgres
> server via AWS S3 using wal-e; however, this does not offer the comfort of
> keeping a full backup available in case we need to restore some deleted or
> corrupted data.

If the goal is to be able to do partial restores (such as just one table), then your best bet is probably to use pg_dump. Given the size of your database, you'll probably want to pg_dump in directory format and then send each of those files to S3 (assuming you wish to continue using S3 for backups). Note that pg_dump doesn't directly support S3 currently. Also, pg_dump will hold open a transaction for a long time, which may be an issue depending on your environment.

If you're looking for file-based backups of the entire cluster and don't mind using regular non-S3 storage, then you might consider pgBackRest or barman. With file-based backups, you have to restore at least an entire database to be able to pull out data from it. We are working to add S3 support to pgBackRest, but it's not there today.

Thanks!

Stephen
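A minimal sketch of that directory-format approach, assuming a database named "mydb", a local staging path, a job count, and an S3 bucket that are all illustrative placeholders (pg_dump itself cannot write to S3, so a separate copy step is needed):

    # parallel directory-format dump; -j is only supported with -Fd
    pg_dump -Fd -j 8 -f /backups/mydb.dir mydb

    # copy the dump directory to S3 with separate tooling
    aws s3 sync /backups/mydb.dir s3://example-backup-bucket/pg_dump/mydb/

    # later, a single table can be pulled back out of the directory dump
    pg_restore -d mydb -t some_table /backups/mydb.dir

The long-held transaction snapshot Stephen mentions still applies for the full duration of the dump, regardless of the parallel job count.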
On 15 March 2017 at 03:04, John McKown <john.archie.mckown@gmail.com> wrote:
> Your message is not displaying. At least not for me. I guess that my reader
> does not understand the "smime.p7m" file, which shows as an attachment. For
> others, his question is:
>
> === original question from Lawrence Cohan ===
>
> Yes, this is what I intended to ask:
>
> What would be a recommended solution for backing up a very large Postgres
> (~13 TB) database in order to protect against data deletion/corruption?
> Our current setup only backs up/restores to a standby read-only Postgres
> server via AWS S3 using wal-e; however, this does not offer the comfort of
> keeping a full backup available in case we need to restore some deleted or
> corrupted data.

'wal-e backup-push' will store a complete backup in S3, which can be restored using 'wal-e backup-fetch'. And since you are already using wal-e for log shipping, you get full PITR available.

pg_dump for a logical backup is also a possibility, although with 13 TB you probably don't want to hold a transaction open that long and are better off with wal-e, barman or another binary backup tool.

--
Stuart Bishop <stuart@stuartbishop.net>
http://www.stuartbishop.net/
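A minimal sketch of those two wal-e commands, assuming the envdir layout from the wal-e README and an illustrative 9.6 data directory path (neither comes from the thread):

    # take a full base backup of the cluster and push it to S3
    envdir /etc/wal-e.d/env wal-e backup-push /var/lib/postgresql/9.6/main

    # restore the most recent base backup into an empty data directory
    envdir /etc/wal-e.d/env wal-e backup-fetch /var/lib/postgresql/9.6/main LATEST

backup-fetch restores only the base backup; the WAL segments already shipped to S3 are then replayed (typically via 'wal-e wal-fetch' in restore_command) to reach the desired point in time.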