Re: Integrating Replication into Core - Mailing list pgsql-hackers
From: Markus Schiltknecht
Subject: Re: Integrating Replication into Core
Date:
Msg-id: 4565A9B2.90004@bluegap.ch
In response to: Re: Integrating Replication into Core (José Orlando Pereira <jop@lsd.di.uminho.pt>)
Responses: Re: Integrating Replication into Core
           Re: Integrating Replication into Core
List: pgsql-hackers
Hi,

[ I suggest we move from hackers to replica-hooks-discuss@pgfoundry.org, as that's what that list was created for. ]

José Orlando Pereira wrote:
> Sure, I know that you don't like hooks.

Yes, but that's yet another story. ;-)

> I just suggested that we should compare *interfaces* to configure replication
> (i.e. variable names, grammar, etc), since it looks like we have a bunch of
> different syntaxes to achieve the same.

The same? Let's see. I currently have these additional commands:

ALTER DATABASE testdb START REPLICATION IN GROUP testgroup USING egcs;

and

ALTER DATABASE testdb ACCEPT REPLICATION FROM GROUP testgroup USING egcs;

I've added a system table pg_replication_gcs to describe the different group communication systems and the connections to them:

      Table "pg_catalog.pg_replication_gcs"
  Column   |  Type   | Modifiers
-----------+---------+-----------
 rgcsname  | name    | not null
 rgcstype  | integer | not null
 rgcsport  | integer | not null
 rgcssock  | text    |

(Splitting into rgcsport and rgcssock proved to be not very helpful.)

And I've added two fields to pg_database to define the GCS and the group in which to replicate a database:

 ..
 datreplgcs | oid  | not null
 ..
 datreplgrp | text |

But as I said, these might change at any time. And I will certainly have to add others, though I have no idea yet what those additions will look like.

Compared with the Mammoth Replicator syntax that Alvaro posted, this seems very different. PGCluster-II does not use a GCS at all. And I haven't seen others.

> It is somewhat difficult to share a test-suite if we have to maintain multiple
> versions of the code that sets up the replicated db.

Well, we wouldn't have to share the test cases, but at least the *suite*: all the code which starts and stops postmasters, runs initdb, etc. (roughly the sort of thing sketched at the end of this mail). Probably that's just me, but I'm not aware of any (OSS) project which can emulate a network (or even a GCS), start and stop processes as requested, and check how they react to different inputs. If you know of such a thing, please email me! (I've looked at STAF, but that seems overly complex and targeted at a completely different use case.)

> See the point? ;)

Sure, but it's wishful thinking.

> We do maintain a patch, as you do, unless you have forked from mainline for
> good. Using a good revision control system helps (we use Canonical's Bazaar,
> BTW), but does not fundamentally change the problem.

I'm using monotone, and I don't need much time to fiddle with patches. A simple 'mtn diff -r ${TRUNK_REVISION}' does all I need. That's why I'd still say that I don't maintain a patch.

> The smaller the diff, the better.

I disagree. Where exactly does the size of the patch matter for you? The number that actually matters is the number of points in the code where you need to interact with the database, i.e. the number of hooks you would need, because as PostgreSQL moves along, changes at those points will probably become necessary. But that number has nothing to do with the patch size.

Regards

Markus
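P.S.: to make the harness idea concrete, here is a minimal sketch of what I have in mind, assuming a local PostgreSQL installation; the PG_BIN path, the port numbers and the start_node/stop_node/run_test names are purely illustrative, not an existing tool:

    # Sketch of a shared test harness: initdb and start two postmasters,
    # run a test callback against them, then tear everything down again.
    import shutil
    import subprocess
    import tempfile

    PG_BIN = "/usr/local/pgsql/bin"   # assumed install location

    def start_node(port):
        """initdb a fresh cluster and start a postmaster on the given port."""
        datadir = tempfile.mkdtemp(prefix="replnode-")
        subprocess.run([f"{PG_BIN}/initdb", "-D", datadir], check=True)
        subprocess.run([f"{PG_BIN}/pg_ctl", "-D", datadir,
                        "-o", f"-p {port}", "-w", "start"], check=True)
        return datadir

    def stop_node(datadir):
        """Stop the postmaster and remove its data directory."""
        subprocess.run([f"{PG_BIN}/pg_ctl", "-D", datadir, "-m", "fast", "stop"],
                       check=True)
        shutil.rmtree(datadir)

    def run_test(test):
        """Run one test case against a freshly created two-node setup."""
        ports = (65432, 65433)
        nodes = [start_node(port) for port in ports]
        try:
            test(ports=ports)
        finally:
            for datadir in nodes:
                stop_node(datadir)

Each replication project would then only have to plug in its own setup step (creating the replication group, GCS configuration, etc.) and its own test callbacks, instead of duplicating all of the cluster bring-up and tear-down code.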