Thread: Postgres table contents versioning
Is there an equivalent of svn/git etc. for the data in a database's tables?

Can I set something up so that I can see what was in the table two days/months etc. ago?

I realize that in the case of rapidly changing hundred-million-row tables this presents an impossible problem. The best kludge I can think of is copying the tables to a directory and git-ing the directory.

Thanks,
John
In response to John Gage:
> Is there an equivalent of svn/git etc. for the data in a database's
> tables?
>
> Can I set something up so that I can see what was in the table two
> days/months etc. ago?

You can use tablelog:

15:53 < akretschmer> ??tablelog
15:53 < pg_docbot_adz> For information about 'tablelog' see:
15:53 < pg_docbot_adz> http://andreas.scherbaum.la/blog/archives/100-Log-Table-Changes-in-PostgreSQL-with-tablelog.html
15:53 < pg_docbot_adz> http://pgfoundry.org/projects/emaj/
15:53 < pg_docbot_adz> http://pgfoundry.org/projects/tablelog/

-- 
Andreas Kretschmer
Contact: Heynitz: 035242/47150, D1: 0160/7141639 (more: -> header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99
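[Editor's note] The core pattern behind tools like tablelog is simply a trigger that copies each old row version into a side table before it is changed. The sketch below illustrates that idea; SQLite is used only so the example is self-contained (on PostgreSQL the trigger body would be PL/pgSQL), and all table and column names are made up for illustration.

```python
# Minimal sketch of trigger-based change logging, the pattern tablelog
# implements for PostgreSQL. SQLite stands in here so the example runs
# anywhere; the schema and trigger names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);

    -- One row per superseded version, stamped with when it was replaced.
    CREATE TABLE accounts_log (
        id         INTEGER,
        balance    INTEGER,
        changed_at TEXT DEFAULT CURRENT_TIMESTAMP,
        operation  TEXT
    );

    CREATE TRIGGER accounts_audit_update
    BEFORE UPDATE ON accounts
    BEGIN
        INSERT INTO accounts_log (id, balance, operation)
        VALUES (OLD.id, OLD.balance, 'UPDATE');
    END;

    CREATE TRIGGER accounts_audit_delete
    BEFORE DELETE ON accounts
    BEGIN
        INSERT INTO accounts_log (id, balance, operation)
        VALUES (OLD.id, OLD.balance, 'DELETE');
    END;
""")

conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.execute("UPDATE accounts SET balance = 250 WHERE id = 1")
conn.execute("DELETE FROM accounts WHERE id = 1")

# The log now holds both superseded versions of row 1, oldest first.
history = conn.execute(
    "SELECT id, balance, operation FROM accounts_log ORDER BY rowid"
).fetchall()
print(history)  # [(1, 100, 'UPDATE'), (1, 250, 'DELETE')]
```

Because the log keeps a timestamp per version, "what was in the table two days ago" becomes a query against the log filtered on `changed_at`, at the cost of extra writes on every change.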
John Gage <jsmgage@numericable.fr> wrote:
> Is there an equivalent of svn/git etc. for the data in a
> database's tables?
>
> Can I set something up so that I can see what was in the
> table two days/months etc. ago?
>
> I realize that in the case of rapidly changing hundred
> million row tables this presents an impossible problem.
>
> The best kludge I can think of is copying the tables to a
> directory and git-ing the directory.

If you're looking at this from a disaster-recovery point of view, you should read up on PostgreSQL's PITR (point-in-time recovery) capabilities. If you need the information in your application, you should google for "temporal databases" for ideas on how to amend your table structures.

Tim
jsmgage@numericable.fr (John Gage) writes:
> Is there an equivalent of svn/git etc. for the data in a database's
> tables?
>
> Can I set something up so that I can see what was in the table two
> days/months etc. ago?
>
> I realize that in the case of rapidly changing hundred million row
> tables this presents an impossible problem.
>
> The best kludge I can think of is copying the tables to a directory
> and git-ing the directory.

There's a whole set of literature on the notion of temporal data. Richard Snodgrass' book is rather good. <http://www.cs.arizona.edu/people/rts/>

The typical approach involves adding a timestamp or two to tables to indicate when the data is considered valid. That's rather different from Git :-).

-- 
output = ("cbbrowne" "@" "gmail.com") http://linuxdatabases.info/info/languages.html
HEADLINE: Suicidal twin kills sister by mistake!
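[Editor's note] The "timestamp or two" approach described above can be sketched concretely: each row carries a validity interval, an update closes the old interval and opens a new one, and an "as of" query filters on that interval. The example below uses SQLite purely so it is self-contained (the same schema works in PostgreSQL); the `prices` table and helper names are illustrative, not from any of the cited tools.

```python
# Minimal sketch of a temporal table: rows are never overwritten,
# only closed out, so past states remain queryable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE prices (
        item       TEXT,
        price      INTEGER,
        valid_from TEXT,   -- when this version became current
        valid_to   TEXT    -- NULL while still current
    )
""")

def set_price(item, price, as_of):
    # Close the currently valid version, then insert the new one.
    conn.execute(
        "UPDATE prices SET valid_to = ? WHERE item = ? AND valid_to IS NULL",
        (as_of, item),
    )
    conn.execute(
        "INSERT INTO prices (item, price, valid_from) VALUES (?, ?, ?)",
        (item, price, as_of),
    )

def price_as_of(item, when):
    # The version valid at `when`: started on or before it, not yet closed.
    row = conn.execute(
        """SELECT price FROM prices
           WHERE item = ? AND valid_from <= ?
             AND (valid_to IS NULL OR valid_to > ?)""",
        (item, when, when),
    ).fetchone()
    return row[0] if row else None

set_price("widget", 10, "2010-05-01")
set_price("widget", 12, "2010-05-20")

print(price_as_of("widget", "2010-05-10"))  # 10 -- the table as it looked then
print(price_as_of("widget", "2010-05-25"))  # 12 -- the current version
```

Unlike the pg_dump-plus-git kludge, this keeps history inside the database at row granularity, though every logical update costs an extra row and the validity predicate must appear in every query that should see only "current" data.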