Thread: Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0
Moving over a conversation from the pgsql-advocacy mailing list.  In it
Simon (CC'd) raised the issue of potentially creating a
backwards-compatibility-breaking release at some point in the future, to
deal with things that might have no other solution (my wording).

Relevant part of that thread, for reference:

  http://www.postgresql.org/message-id/CANP8+jLtk1NtaJyXc=hAqX=0k+ku4zfavgVBKfs+_sOr9hepNQ@mail.gmail.com

Simon included a short starter list of potentials which might be in that
category:

* SQL compliant identifiers
* Remove RULEs
* Change recovery.conf
* Change block headers
* Retire template0, template1
* Optimise FSM
* Add heap metapage
* Alter tuple headers
et al

This is still better placed on -hackers though, so let's have the
conversation here to figure out whether a "backwards compatibility
breaking" release really is needed or not.

Hopefully we can get it all done without giving users a reason to
consider switching. ;)

Regards and best wishes,

Justin Clift

--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit.  He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi
Justin Clift <justin@postgresql.org> writes:
> Moving over a conversation from the pgsql-advocacy mailing list.  In it
> Simon (CC'd) raised the issue of potentially creating a
> backwards-compatibility breaking release at some point in the future,
> to deal with things that might have no other solution (my wording).
> [...]
> Hopefully we can get it all done without giving users a reason to
> consider switching. ;)

I'm sure this won't be a popular suggestion, but in the interest of
advocating for more cryptography: if we land GSSAPI auth+encryption, I'd
like the auth-only codepath to go away.
On Mon, Apr 11, 2016 at 3:23 PM, Robbie Harwood <rharwood@redhat.com> wrote:
> I'm sure this won't be a popular suggestion, but in the interest of
> advocating for more cryptography: if we land GSSAPI auth+encryption, I'd
> like the auth-only codepath to go away.

I can't think of a reason that would be a good idea.  Occasionally good
things come from artificially limiting how users can use the system, but
mostly that just annoys people.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 12 April 2016 at 00:39, Justin Clift <justin@postgresql.org> wrote:
> Moving over a conversation from the pgsql-advocacy mailing list.  In it
> Simon (CC'd) raised the issue of potentially creating a
> backwards-compatibility breaking release at some point in the future,
> to deal with things that might have no other solution (my wording).
>
> Relevant part of that thread, for reference:
>
>   http://www.postgresql.org/message-id/CANP8+jLtk1NtaJyXc=hAqX=0k+ku4zfavgVBKfs+_sOr9hepNQ@mail.gmail.com
>
> Simon included a short starter list of potentials which might be in
> that category:
>
> * SQL compliant identifiers
> * Remove RULEs
> * Change recovery.conf
> * Change block headers
> * Retire template0, template1
> * Optimise FSM
> * Add heap metapage
> * Alter tuple headers
> et al

+
* v4 protocol (feature negotiation, lazy blob fetching, etc)
* retire pg_hba.conf and use SQL access management

?
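For context on the pg_hba.conf item above: host-based access rules currently live in a flat file that only the administrator can edit on the server's filesystem. A minimal sketch of the contrast, where the SQL command is purely hypothetical (no such syntax exists in PostgreSQL; it only illustrates what "SQL access management" might mean):

```sql
-- Today, a rule in pg_hba.conf looks like this (real, current format):
--   # TYPE  DATABASE  USER  ADDRESS          METHOD
--   host    all       all   192.168.0.0/24   md5
--
-- A hypothetical SQL equivalent, managed like any other catalog object:
CREATE ACCESS RULE lan_md5
    TYPE host
    DATABASE ALL
    ROLE ALL
    ADDRESS '192.168.0.0/24'
    METHOD md5;
```

Such a command would make access rules visible in the catalogs and manageable without filesystem access, which seems to be the attraction of retiring the flat file.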
Justin Clift wrote:
> Simon included a short starter list of potentials which might be in
> that category:
> [...]

+ CMake build, I think.

Now I can build:
* postgres
* bin/* programs
* pl/* languages
* contrib/* (with a cmake PGXS analogue)

I can run regression and isolation tests for postgres/pl* and all
contrib modules.  There is still a lot of work, but I hope everything
will turn out.  It would also be good to get help.

Thanks.

PS https://github.com/stalkerg/postgres_cmake

--
Yury Zhuravlev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
On 12 Apr 2016, at 14:12, Yury Zhuravlev <u.zhuravlev@postgrespro.ru> wrote:
> + CMake build I think.
>
> Now I can build:
> * postgres
> * bin/* programs
> * pl/* languages
> * contrib/* (with cmake PGXS analogue)
> [...]
> PS https://github.com/stalkerg/postgres_cmake

If/when PostgreSQL can be built and tested with CMake... why would the
resulting code + database files + network protocol (etc) not be
compatible with previous versions? :)

+ Justin
On Mon, Apr 11, 2016 at 11:39 AM, Justin Clift <justin@postgresql.org> wrote:
> [...]
> This is still better placed on -hackers though, so let's have the
> conversation here to figure out whether a "backwards compatibility
> breaking" release really is needed or not.

A couple of points here:

*) I don't think having a version number that starts with 10 instead of
9 magically fixes backwards-compatibility problems, and I think that's a
dangerous precedent to set unless we're willing to fork development and
support version 9 indefinitely, including major release versions.

*) Compatibility issues at the SQL level have to be taken much more
seriously than other things (like internal layouts or .conf issues).

*) We need to do an honest cost-benefit analysis before breaking things.
Code refactoring pushed onto your users carries an enormous cost that is
often underestimated.  I have some fairly specific examples of the costs
related to the text cast removal, for example.  It's not a pretty
picture.

merlin
On 12 Apr 2016, at 17:23, Merlin Moncure <mmoncure@gmail.com> wrote:
> A couple of points here:
>
> *) I don't think having a version number that starts with 10 instead
> of 9 magically fixes backwards compatibility problems [...]

Yeah.  Moving the discussion here was more to determine which items
really would need a backwards-compatibility break, i.e. where no other
approach can be found.

Seems I started it off badly, as no-one's yet jumped in to discuss the
initial points. :(

+ Justin
On Tue, Apr 12, 2016 at 12:32 PM, Justin Clift <justin@postgresql.org> wrote:
> Yeah.  Moving the discussion here was more to determine which items
> really would need a backwards-compatibility break, i.e. where no other
> approach can be found.
>
> Seems I started it off badly, as no-one's yet jumped in to discuss the
> initial points. :(

I'm going to throw down the gauntlet (again) and say more or less what I
previously said on the pgsql-advocacy thread.  I think that:

1. Large backward compatibility breaks are bad.  Therefore, if any of
these things are absolutely impossible to do without major compatibility
breaks, we shouldn't do them at all.

2. Small backward compatibility breaks are OK, but don't require doing
anything special to the version number.

3. There's no value in aggregating many small backward compatibility
breaks into a single release.  That increases pain for users, rather
than decreasing it, and slows down development, too, because you have to
wait for the special magic release where it's OK to hose users.  We
typically have a few small backward compatibility breaks in each
release, and that's working fine, so I see little reason to change it.

4. To the extent that I can guess what the things on Simon's list mean
from what he wrote, and that's a little difficult because his
descriptions were very short, I think that everything on that list is
either (a) a bad idea or (b) something that we can do without any
compatibility break at all.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 04/12/2016 10:43 AM, Robert Haas wrote:
> 1. Large backward compatibility breaks are bad.  Therefore, if any of
> these things are absolutely impossible to do without major
> compatibility breaks, we shouldn't do them at all.

+1

> 2. Small backward compatibility breaks are OK, but don't require doing
> anything special to the version number.

+1

> 3. There's no value in aggregating many small backward compatibility
> breaks into a single release.  That increases pain for users, rather
> than decreasing it, and slows down development, too, because you have
> to wait for the special magic release where it's OK to hose users.  We
> typically have a few small backward compatibility breaks in each
> release, and that's working fine, so I see little reason to change it.

+1

> 4. To the extent that I can guess what the things on Simon's list
> mean from what he wrote, and that's a little difficult because his
> descriptions were very short, I think that everything on that list is
> either (a) a bad idea or (b) something that we can do without any
> compatibility break at all.

+1

Here are the features I can imagine being worth major
backwards-compatibility breaks:

1. Fully pluggable storage with a clean API.

2. Total elimination of VACUUM or XID freezing.

3. Fully transparent-to-the-user MM replication/clustering or sharding.

4. Perfect partitioning (i.e. transparent to the user, supports keys &
joins, supports expressions on the partition key, etc.)

5. Transparent upgrade-in-place (i.e. allowing 10.2 to use 10.1's tables
without pg_upgrade or other modification).

6. Fully pluggable parser/executor with a good API.

That's pretty much it.  I can't imagine anything else which would
justify imposing a huge upgrade barrier on users.  And, I'll point out,
that in the above list:

* nobody is currently working on anything in core except #4.

* we don't *know* that any of the above items will require a
backwards-compatibility break.

People keep talking about "we might want to break compatibility/file
format one day".  But nobody is working on anything which will, and
justifies it.

--
Josh Berkus
Red Hat OSAS
(any opinions are my own)
On 2016-04-12 11:25:21 -0700, Josh Berkus wrote:
> Here are the features I can imagine being worth major
> backwards-compatibility breaks:
>
> 1. Fully pluggable storage with a clean API.
>
> 2. Total elimination of VACUUM or XID freezing.
> [...]

None but 2) seems likely to require a substantial compatibility break.

Greetings,

Andres Freund
On Tue, Apr 12, 2016 at 2:27 PM, Andres Freund <andres@anarazel.de> wrote:
> None but 2) seems likely to require a substantial compatibility break.

And even that doesn't require one, if you keep the old system around and
make the new system optional via some sort of pluggable storage API.
Which, to me, seems like the only sensible approach from a development
perspective.  If you decide to rip out the entire heapam and replace it
with something new in one fell swoop, you might as well not bother
writing the patch.  It's got the same chance of being accepted either
way.

I really think the time has come that we need an API for the heap the
same way we already have one for indexes.  Regardless of exactly how we
choose to implement that, I think a large part of it will end up looking
similar to what we already have for FDWs.  We can either use the FDW API
itself and add whatever additional methods we need for this purpose, or
copy it to a new file, rename everything, and have two slightly
different versions.  AFAICS, the things we need that the FDW API doesn't
currently provide are:

1. The ability to have a local relfilenode associated with the data.
Or, ideally, several, so you can have a separate set of files for each
index.

2. The ability to WAL-log changes to that relfilenode (or those
relfilenodes) in a sensible way.  Not sure whether the new generic XLOG
stuff is good enough for a first go-round here or if more is needed.

3. The ability to intercept DDL commands directed at the table and
handle them in some arbitrary way.  This is really optional; people
could always provide a function-based API until we devise something
better.

4. The ability to build standard PostgreSQL indexes on top of the data,
if the underlying format still has a useful notion of CTIDs.  That is,
if the underlying format is basically like our heap format but optimized
in some way (e.g. an append-only table that can't update or delete, with
a smaller tuple header and page compression), then it can reuse our
indexing.  If it does something else, like an index-organized table
where rows can move around to different physical positions, then it has
to provide its own indexing facilities.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Tue, Apr 12, 2016 at 9:25 PM, Josh Berkus <josh@agliodbs.com> wrote:
> On 04/12/2016 10:43 AM, Robert Haas wrote:
> [...]
>
> Here are the features I can imagine being worth major
> backwards-compatibility breaks:
> [...]
>
> That's pretty much it.  I can't imagine anything else which would
> justify imposing a huge upgrade barrier on users.  And, I'll point
> out, that in the above list:
>
> * nobody is currently working on anything in core except #4.

We are working on #3 (HA multimaster).

> * we don't *know* that any of the above items will require a
> backwards-compatibility break.
>
> People keep talking about "we might want to break compatibility/file
> format one day".  But nobody is working on anything which will, and
> justifies it.

Our roadmap, http://www.postgresql.org/developer/roadmap/, is the
problem.  We don't have a clear roadmap, and that's why we cannot plan a
future feature-full release.  There are several Postgres-centric
companies, which have most of the developers who make all the major
contributions.  All these companies have their own roadmaps, but the
community does not.  I think the 9.6 release is an inflection point,
where we should combine our roadmaps and release one for the community.
Then we could plan releases, and our customers would see what to expect.
I can't speak for other companies, but we have big demand for many
features from Russian customers, and we have to compete with other
databases.  Having a community roadmap will help us work with customers
and plan our resources.
On 04/12/2016 01:07 PM, Oleg Bartunov wrote:
> Our roadmap http://www.postgresql.org/developer/roadmap/ is the
> problem.  We don't have a clear roadmap, and that's why we cannot plan
> a future feature-full release.

As someone who's worked at multiple proprietary software companies,
having a roadmap doesn't magically make code happen.

> There are several Postgres-centric companies, which have most of the
> developers who make all the major contributions. [...] Having a
> community roadmap will help us work with customers and plan our
> resources.

It would be good to have a place where the companies who do PostgreSQL
feature work could publish their current efforts and timelines, so we at
least have a go-to place for "here's what someone's working on".  But
only if that information is going to be *updated*, something we're very
bad at.  And IMHO, a "roadmap" which is less than 50% accurate is a
waste of time.

There's an easy way for you to kick this off, though: have PostgresPro
publish a wiki page or Trello board or github repo or whatever with your
roadmap, and invite other full-time PostgreSQL contributors to add their
pieces.

--
Josh Berkus
Red Hat OSAS
(any opinions are my own)
Re: Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0
From: "David G. Johnston"
>> * we don't *know* that any of the above items will require a
>> backwards compatibility break.
>>
>> People keep talking about "we might want to break compatibility/file
>> format one day".  But nobody is working on anything which will and
>> justifies it.
>
> Our roadmap http://www.postgresql.org/developer/roadmap/ is the
> problem.  We don't have a clear roadmap, and that's why we cannot plan
> a future feature-full release. [...] Having a community roadmap will
> help us work with customers and plan our resources.
I've already posited, on -advocacy, just having our release numbers
operate on five-year increments (10.0 - 10.4, 11.0 - 11.4, etc.), but
was met with silence.  In any case, this comment is just furtherance of
the tail wagging the dog.  I see no fundamental reason to have to plan
something momentous enough, and actively schedule work to meet the plan,
in order to justify a 10.0 release.

There is a bunch of hand-waving here, and it's an environment I'm not
immersed in, but it seems that creating a roadmap today is tantamount to
waterfall design: useful in moderation, but largely shown to be
undesirable at scale.  Aside from the one-year release cycle, the
project is reasonably agile and quite receptive to outside observation,
questions, and contributions.  If you have spare resources you need to
keep busy, just ask how you can help.  To be honest, the community would
likely rather have those people help review and test everything that is
presently in progress; future goals on a roadmap are not nearly as
important.  And if you have a demand you think needs to be fulfilled,
put the information out there and get input.

If you are claiming that the balance between community and profit is
skewed undesirably, you will need to put forth a more concrete argument
to convince me.  For me, having plans reside in the profit-motivated
parts of the community while core simply operates openly seems to strike
a solid balance.

I give a solid +10 to Robert's opinions on the matter, and aside from
figuring out if and how to fit first-number versioning dynamics into our
release policies, I think the community is doing a sufficient job on the
communication and planning front.  The biggest resource need is quality
control.  I dislike the fact that we are currently in a situation where
the first three point releases each year are considered "live betas",
based especially on the significant post-release bug counts of both 9.3
and 9.5.
David J.
On Tue, Apr 12, 2016 at 4:07 PM, Oleg Bartunov <obartunov@gmail.com> wrote:
> Our roadmap http://www.postgresql.org/developer/roadmap/ is the
> problem.  We don't have a clear roadmap, and that's why we cannot plan
> a future feature-full release. [...] Having a community roadmap will
> help us work with customers and plan our resources.

I don't think it's realistic to plan what is going to go into a certain
release.  We don't know whether we want the patch until we see the patch
and review it and decide whether it's good.  We can't make the decision,
first, that the patch will be in the release, and then, second, write
the patch.  What we *can* do, as Josh says, is discuss our plans.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Tue, Apr 12, 2016 at 4:32 PM, David G. Johnston
<david.g.johnston@gmail.com> wrote:
> I give a solid +10 to Robert's opinions on the matter [...] I dislike
> the fact that we are currently in a situation where the first 3 point
> releases each year are considered "live betas" based especially on
> both 9.3 and 9.5 post-release significant bug counts.

/me blinks.

I find it shocking that you would compare 9.5 to 9.3 that way.  Yeah,
we've had a few bugs in 9.5: in particular, it was disappointing to have
to disable abbreviated keys.  But I'm not sure that I really believe
that affected massive numbers of users in a really negative way: many
locales were just fine, and not every locale that had a problem with
some string data necessarily had a problem with the strings people were
actually storing.  But we haven't eaten anybody's data, at least not
beyond what can be fixed by a REINDEX, unless I missed something here.

The fact is that this is a fairly hard problem to solve.  Some bugs are
not going to get found before people try the software, and we can't make
them try it while it's in beta.  We can only do our best to do good code
review, but inevitably we will miss some things.

As for your proposal that we blindly consider $(N+1).0 to follow $N.4,
I'm not particularly enthralled with that.  I think it's a good idea to
look for a release that's got some particularly nifty feature(s) and use
that as the time to move the first digit.  And, sure, plan to have that
happen every 4-6 years or so, but adjust based on what actually gets
into which releases.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0
From: "David G. Johnston"
On Tue, Apr 12, 2016 at 4:32 PM, David G. Johnston
<david.g.johnston@gmail.com> wrote:
> I give a solid +10 to Robert's opinions on the matter and aside from
> figuring out if and how to fit first-number versioning dynamics into our
> release policies I think the community is doing a sufficient job on the
> communication and planning front. The biggest resource need is quality
> control. I dislike the fact that we are currently in a situation where the
> first 3 point releases each year are considered "live betas" based
> especially on both 9.3 and 9.5 post-release significant bug counts.
> /me blinks.
>
> I find it shocking that you would compare 9.5 to 9.3 that way.  Yeah,
> we've had a few bugs in 9.5: in particular, it was disappointing to
> have to disable abbreviated keys.  But I'm not sure that I really
> believe that affected massive numbers of users in a really negative
> way - many locales were just fine, and not every locale that had a
> problem with some string data necessarily had a problem with the
> strings people were actually storing.  But we haven't eaten anybody's
> data, at least not beyond what can be fixed by a REINDEX, unless I
> missed something here.
I probably over-implied my feelings regarding 9.5 since, yes,
abbreviated keys were largely outside the realm of reasonable
expectation.
I think I am colored by my involvement in attempting to help research this one:
9.5.1 "Fix an oversight that caused hash joins to miss joining to some tuples of the inner relation in rare cases (Tomas Vondra, Tom Lane)"
This one struck me as well for some reason...
9.5.1 "Fix overeager pushdown of HAVING clauses when grouping sets are used."
The two ROW() comparison fixes for 9.5.2
I'm not exactly someone looking to poke a stick in PostgreSQL's side, so
regardless of the degree of "validity" of my feelings, the fact that I
have them is likely informative.  Or I'm just in a particularly
overly-sensitive mood right now; I wouldn't fully discount the
possibility.
> The fact is that this is a fairly hard problem to solve.  Some bugs
> are not going to get found before people try the software, and we
> can't make them try it while it's in beta.  We can only do our best to
> do good code review, but inevitably we will miss some things.
Agreed, and to the point of using corporate resources for improvement of existing work as opposed to roadmap stuff.
> As for your proposal that we blindly consider $(N+1).0 to follow $N.4,
> I'm not particularly enthralled with that.  I think it's a good idea
> to look for a release that's got some particularly nifty feature(s)
> and use that as the time to move the first digit.  And, sure, plan to
> have that happen every 4-6 years or so, but adjust based on what
> actually gets into which releases.
The main point of that post was that we emphasize not only the new stuff
in the just-released version, but that we re-celebrate everything that
has been accomplished in the previous four releases as well.  If the
first digit is getting such significant attention, we might as well play
to that fact and try to remind people who've missed out on prior
releases that we are continually innovating.  That, and the fact that
the last N.0 release that was so highly touted just went out of support.

Otherwise I don't see why we don't just start incrementing the first
digit yearly.  Sure, every so often we think we've done enough to
warrant an increase there, but our philosophy as a community doesn't
actually match our particular choice: we don't, in advance, place any
special importance on first-digit releases, as evidenced by this
discussion.
David J.
On Tue, Apr 12, 2016 at 01:43:41PM -0400, Robert Haas wrote: > I'm going to throw down the gauntlet (again) and say more or less what > I previously said on the pgsql-advocacy thread. I think that: > > 1. Large backward compatibility breaks are bad. Therefore, if any of > these things are absolutely impossible to do without major > compatibility breaks, we shouldn't do them at all. > > 2. Small backward compatibility breaks are OK, but don't require doing > anything special to the version number. > > 3. There's no value in aggregating many small backward compatibility > breaks into a single release. That increases pain for users, rather > than decreasing it, and slows down development, too, because you have > to wait for the special magic release where it's OK to hose users. We > typically have a few small backward compatibility breaks in each > release, and that's working fine, so I see little reason to change it. Well, this is true for SQL-level and admin-level changes, but it does make sense to group pg_upgrade breaks into a single release. I think the plan is for us to have logical replication usable before we make such a change. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +
On Tue, Apr 12, 2016 at 11:25:21AM -0700, Josh Berkus wrote: > Here's the features I can imagine being worth major backwards > compatibility breaks: ... > 5. Transparent upgrade-in-place (i.e. allowing 10.2 to use 10.1's tables > without pg_upgrade or other modification). Technically, this is exactly what pg_upgrade does. I think what you really mean is for the backend binary to be able to read the system tables and WAL files of the old clusters --- something I can't see us implementing anytime soon. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +
On 04/29/2016 08:32 AM, Bruce Momjian wrote: > On Tue, Apr 12, 2016 at 11:25:21AM -0700, Josh Berkus wrote: >> Here's the features I can imagine being worth major backwards >> compatibility breaks: > ... >> 5. Transparent upgrade-in-place (i.e. allowing 10.2 to use 10.1's tables >> without pg_upgrade or other modification). > > Technically, this is exactly what pg_upgrade does. I think what you > really mean is for the backend binary to be able to read the system > tables and WAL files of the old clusters --- something I can't see us > implementing anytime soon. > For the most part, pg_upgrade is good enough. There are exceptions, and it does need a more thorough test suite, but as a whole it works. As nice as it would be to install 9.6 right on top of 9.5 and have 9.6 magically work, that is certainly not a *requirement* anymore. Sincerely, JD -- Command Prompt, Inc. http://the.postgres.company/ +1-503-667-4564 PostgreSQL Centered full stack support, consulting and development. Everyone appreciates your honesty, until you are honest with them.
On Tue, Apr 12, 2016 at 11:07:04PM +0300, Oleg Bartunov wrote: > Our roadmap http://www.postgresql.org/developer/roadmap/ is the problem. We > don't have clear roadmap and that's why we cannot plan future feature full > release. There are several postgres-centric companies, which have most of > developers, who do all major contributions. All these companies has their > roadmaps, but not the community. I would be concerned if company roadmaps overtly affected the community roadmap. In general, I find company roadmaps to be very short-sighted and quickly changed based on the demands of specific users/customers --- something we don't want to imitate. We do want company roadmaps to affect the community roadmap, but in a healthy, long-term way, and I think, in general, that is happening. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +
On Fri, Apr 29, 2016 at 08:37:57AM -0700, Joshua Drake wrote: > >Technically, this is exactly what pg_upgrade does. I think what you > >really mean is for the backend binary to be able to read the system > >tables and WAL files of the old clusters --- something I can't see us > >implementing anytime soon. > > > > For the most part, pg_upgrade is good enough. There are exceptions and it > does need a more thorough test suite but as a whole, it works. As nice as > being able to install 9.6 right on top of 9.5 and have 9.6 magically work, > it is certainly not a *requirement* anymore. Yes, the trick would be making the new 9.6 features work with the existing 9.5 system tables that don't know about the 9.6 features. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +
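For readers skimming the thread, the pg_upgrade workflow under discussion looks roughly like this; the paths below are purely illustrative, and --check asks pg_upgrade for a compatibility check without modifying either cluster:

```shell
# Illustrative only: binary and data directory paths are hypothetical.
pg_upgrade \
  --old-bindir  /usr/lib/postgresql/9.5/bin \
  --new-bindir  /usr/lib/postgresql/9.6/bin \
  --old-datadir /var/lib/postgresql/9.5/main \
  --new-datadir /var/lib/postgresql/9.6/main \
  --check
```

The point being debated is that this still requires a separate step and a second set of installed binaries; "upgrade-in-place" would mean the 9.6 backend reading 9.5's system tables and WAL directly.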
On 12 April 2016 at 20:25, Josh berkus <josh@agliodbs.com> wrote:
--
Here's the features I can imagine being worth major backwards
compatibility breaks:
1. Fully pluggable storage with a clean API.
2. Total elimination of VACUUM or XID freezing
3. Fully transparent-to-the-user MM replication/clustering or sharding.
4. Perfect partitioning (i.e. transparent to the user, supports keys &
joins, supports expressions on partition key, etc.)
5. Transparent upgrade-in-place (i.e. allowing 10.2 to use 10.1's tables
without pg_upgrade or other modification).
6. Fully pluggable parser/executor with a good API
That's pretty much it. I can't imagine anything else which would
justify imposing a huge upgrade barrier on users. And, I'll point out,
that in the above list:
* nobody is currently working on anything in core except #4.
* we don't *know* that any of the above items will require a backwards
compatibility break.
People keep talking about "we might want to break compatibility/file
format one day". But nobody is working on anything which will break it
and which justifies it.
Of your list, I know 2ndQuadrant developers are working on 1, 3, 5.
6 has been discussed recently on-list by other hackers.
I'm not really sure what 2 consists of; presumably this means "take the pain away" rather than removal of MVCC, which is the root cause of those secondary effects.
I don't think the current focus on manually intensive DDL partitioning is the right way forwards. I did once; I don't now.
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Fri, Apr 29, 2016 at 11:02 AM, Simon Riggs <simon@2ndquadrant.com> wrote: > On 12 April 2016 at 20:25, Josh berkus <josh@agliodbs.com> wrote: > >> >> Here's the features I can imagine being worth major backwards >> compatibility breaks: >> >> 1. Fully pluggable storage with a clean API. >> >> 2. Total elimination of VACUUM or XID freezing >> >> 3. Fully transparent-to-the user MM replication/clustering or sharding. >> >> 4. Perfect partitioning (i.e. transparent to the user, supports keys & >> joins, supports expressions on partition key, etc.) >> >> 5. Transparent upgrade-in-place (i.e. allowing 10.2 to use 10.1's tables >> without pg_upgrade or other modification). >> >> 6. Fully pluggable parser/executor with a good API >> >> That's pretty much it. I can't imagine anything else which would >> justify imposing a huge upgrade barrier on users. And, I'll point out, >> that in the above list: >> >> * nobody is currently working on anything in core except #4. >> >> * we don't *know* that any of the above items will require a backwards >> compatibility break. >> >> People keep talking about "we might want to break compatibility/file >> format one day". But nobody is working on anything which will and >> justifies it. > > Of your list, I know 2ndQuadrant developers are working on 1, 3, 5. > 6 has being discussed recently on list by other hackers. #5 (upgrade without pg_upgrade or dump/restore) from my perspective would be the most useful feature of all time, and would justify the 9.x to 10.x all by itself using the existing standard (I think?) of major project milestones to advance that number. 7.x ?? (just coming on the scene then) 8.x windows, pitr 9.x replication 10.x easy upgrades merlin
On 04/29/2016 08:44 AM, Bruce Momjian wrote: > On Tue, Apr 12, 2016 at 11:07:04PM +0300, Oleg Bartunov wrote: >> Our roadmap http://www.postgresql.org/developer/roadmap/ is the problem. We >> don't have clear roadmap and that's why we cannot plan future feature full >> release. There are several postgres-centric companies, which have most of >> developers, who do all major contributions. All these companies has their >> roadmaps, but not the community. > > I would be concerned if company roadmaps overtly affected the community > roadmap. In general, I find company roadmaps to be very short-sighted > and quickly changed based on the demands of specific users/customers --- > something we don't want to imitate. > > We do want company roadmaps to affect the community roadmap, but in a > healthy, long-term way, and I think, in general, that is happening. > The roadmap is not the problem it is the lack of cooperation. Many companies are now developing features in a silo and then presenting them to the community. Instead we should be working with those companies to have them develop transparently so others can be a part of the process. If the feature is going to be submitted to core anyway (or open source) why wouldn't we just do that? Why wouldn't EDB develop directly within the Pg infrastructure. Why wouldn't we build teams around the best and brightest between EDB, 2Q and Citus? Egos. Consider PgLogical, who is working on this outside of 2Q? Where is the git repo for it? Where is the bug tracker? Where is the mailing list? Oh, its -hackers, except that it isn't, is it? It used to be that everyone got together and worked together before the patch review process. Now it seems like it is a competition between companies to see whose ego can get the most inflated via press releases because they developed X for Y. If the companies were to come together and truly recognize that profit is the reward not the goal then our community would be much stronger for it. 
Sincerely, JD -- Command Prompt, Inc. http://the.postgres.company/ +1-503-667-4564 PostgreSQL Centered full stack support, consulting and development. Everyone appreciates your honesty, until you are honest with them.
On 04/29/2016 09:40 AM, Joshua D. Drake wrote: > On 04/29/2016 08:44 AM, Bruce Momjian wrote: > Consider PgLogical, who is working on this outside of 2Q? Where is the > git repo for it? Where is the bug tracker? Where is the mailing list? > Oh, its -hackers, except that it isn't, is it? > FTR: I am not attacking any one entity here. It is just that PgLogical is a good example of my point. I certainly recognize and have publicly applauded on multiple occasions the good work 2Q is doing. Just as I have with EDB, Citus and Crunchy. Sincerely, JD -- Command Prompt, Inc. http://the.postgres.company/ +1-503-667-4564 PostgreSQL Centered full stack support, consulting and development. Everyone appreciates your honesty, until you are honest with them.
On 29 April 2016 at 18:40, Joshua D. Drake <jd@commandprompt.com> wrote:
--
On 04/29/2016 08:44 AM, Bruce Momjian wrote:
On Tue, Apr 12, 2016 at 11:07:04PM +0300, Oleg Bartunov wrote:
Our roadmap http://www.postgresql.org/developer/roadmap/ is the problem. We
don't have clear roadmap and that's why we cannot plan future feature full
release. There are several postgres-centric companies, which have most of
developers, who do all major contributions. All these companies have their
roadmaps, but not the community.
I would be concerned if company roadmaps overtly affected the community
roadmap. In general, I find company roadmaps to be very short-sighted
and quickly changed based on the demands of specific users/customers ---
something we don't want to imitate.
We do want company roadmaps to affect the community roadmap, but in a
healthy, long-term way, and I think, in general, that is happening.
The roadmap is not the problem; it is the lack of cooperation. Many companies are now developing features in a silo and then presenting them to the community. Instead, we should be working with those companies to have them develop transparently so others can be a part of the process.
If the feature is going to be submitted to core anyway (or open sourced), why wouldn't we just do that? Why wouldn't EDB develop directly within the Pg infrastructure? Why wouldn't we build teams around the best and brightest between EDB, 2Q and Citus?
Egos.
Consider PgLogical, who is working on this outside of 2Q?
Thank you for volunteering to assist. What would you like to work on?
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 04/29/2016 11:36 AM, Simon Riggs wrote: > Egos. > > Consider PgLogical, who is working on this outside of 2Q? > > > Thank you for volunteering to assist. What would you like to work on? You are very welcome. I have been testing as you know. I would be happy to continue that and also was going to look into having a subscriber validate if it is connecting to a subscribed node or not which is the error I ran into. I am also interested in creating user docs (versus reference docs). Sincerely, JD -- Command Prompt, Inc. http://the.postgres.company/ +1-503-667-4564 PostgreSQL Centered full stack support, consulting and development. Everyone appreciates your honesty, until you are honest with them.
On 04/29/2016 11:50 AM, Joshua D. Drake wrote: > On 04/29/2016 11:36 AM, Simon Riggs wrote: > >> Egos. >> >> Consider PgLogical, who is working on this outside of 2Q? >> >> >> Thank you for volunteering to assist. What would you like to work on? > > You are very welcome. I have been testing as you know. I would be happy > to continue that and also was going to look into having a subscriber > validate if it is connecting to a subscribed node or not which is the > error I ran into. > > I am also interested in creating user docs (versus reference docs). So what do you think Simon? How about we put pgLogical under community infrastructure, use the postgresql redmine to track bugs, feature requests and documentation? I guarantee resources from CMD and I bet we could get others to participate as well. Let's turn this into an awesome community driven extension. Sincerely, JD -- Command Prompt, Inc. http://the.postgres.company/ +1-503-667-4564 PostgreSQL Centered full stack support, consulting and development. Everyone appreciates your honesty, until you are honest with them.
I cut many of the emails from CC - AFAIR most of you are subscribed to pg-hackers. On 2016-04-30 19:29 Joshua D. Drake wrote: >On 04/29/2016 11:50 AM, Joshua D. Drake wrote: >> On 04/29/2016 11:36 AM, Simon Riggs wrote: >> >>> Egos. >>> >>> Consider PgLogical, who is working on this outside of 2Q? >>> >>> >>> Thank you for volunteering to assist. What would you like to work on? >> >> You are very welcome. I have been testing as you know. I would be happy >> to continue that and also was going to look into having a subscriber >> validate if it is connecting to a subscribed node or not which is the >> error I ran into. >> >> I am also interested in creating user docs (versus reference docs). > >So what do you think Simon? How about we put pgLogical under community >infrastructure, use the postgresql redmine to track bugs, feature >requests and documentation? > >I guarantee resources from CMD and I bet we could get others to >participate as well. Let's turn this into an awesome community driven >extension. > I reviewed the pglogical-output extension in CF 2016-01. It was the only patch I reviewed - it was quite big, and as I was doing it in the afternoons and not during work (other responsibilities), it took me more than half a month. And while the initial response was good, after the final review there was no response and no new patch version - also nothing in CF 2016-03. At the same time I've seen that pglogical got some love in March - but I'm not sure whether it is usable without the *-output plugin. On the one hand, splitting those two makes review easier, or at least manageable. OTOH, one does not make much sense without the other. I can see that, at least in theory, pglogical-output could be used independently, but at the same time its main user will be pglogical proper. So, in summary - slightly better management and communication regarding features and patches (not only this one; this is just the patch I tried to review) would be beneficial.
For now I'm not sure what is going on with pglogical, and whether my review even mattered. Best regards Tomasz Rybak
On Fri, Apr 29, 2016 at 7:40 PM, Joshua D. Drake <jd@commandprompt.com> wrote:
On 04/29/2016 08:44 AM, Bruce Momjian wrote:
On Tue, Apr 12, 2016 at 11:07:04PM +0300, Oleg Bartunov wrote:
Our roadmap http://www.postgresql.org/developer/roadmap/ is the problem. We
don't have clear roadmap and that's why we cannot plan future feature full
release. There are several postgres-centric companies, which have most of
developers, who do all major contributions. All these companies have their
roadmaps, but not the community.
I would be concerned if company roadmaps overtly affected the community
roadmap. In general, I find company roadmaps to be very short-sighted
and quickly changed based on the demands of specific users/customers ---
something we don't want to imitate.
We do want company roadmaps to affect the community roadmap, but in a
healthy, long-term way, and I think, in general, that is happening.
The roadmap is not the problem; it is the lack of cooperation. Many companies are now developing features in a silo and then presenting them to the community. Instead, we should be working with those companies to have them develop transparently so others can be a part of the process.
We are working on our roadmap to get it into a form that can be presented to the community. I think we'll publish it somewhere in the wiki.
If the feature is going to be submitted to core anyway (or open sourced), why wouldn't we just do that? Why wouldn't EDB develop directly within the Pg infrastructure? Why wouldn't we build teams around the best and brightest between EDB, 2Q and Citus?
This is what I suggested. Features considered to be open source could be discussed and developed together.
Egos.
Consider PgLogical, who is working on this outside of 2Q? Where is the git repo for it? Where is the bug tracker? Where is the mailing list? Oh, it's -hackers, except that it isn't, is it?
It used to be that everyone got together and worked together before the patch review process. Now it seems like it is a competition between companies to see whose ego can get the most inflated via press releases because they developed X for Y.
git log says it better than any press release :)
If the companies were to come together and truly recognize that profit is the reward, not the goal, then our community would be much stronger for it.
I'd not limit it to the companies; individual developers are highly welcome. I'm afraid there are some.
Sincerely,
JD
--
Command Prompt, Inc. http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
On Apr 30, 2016 2:07 PM, Oleg Bartunov <obartunov@gmail.com> wrote: > On Fri, Apr 29, 2016 at 7:40 PM, Joshua D. Drake <jd@commandprompt.com> wrote: > I'd not limited by the companies, individual developes are highly welcome. I'm afraid there are some. Oh, absolutely. I was just pointing out how a lot of companies are hoarding talent internally for no productive purpose. Jd >> Sincerely, >> JD >> -- >> Command Prompt, Inc. http://the.postgres.company/ >> +1-503-667-4564 >> PostgreSQL Centered full stack support, consulting and development. >> Everyone appreciates your honesty, until you are honest with them.
On Sat, Apr 30, 2016 at 8:46 PM, Joshua Drake <jd@commandprompt.com> wrote: > Oh, absolutely. I was just pointing out how a lot of companies are hoarding > talent internally for no productive purpose. Wow, really? I disagree both with the idea that this is happening and with your characterization of it. First, there are lots of people contributing code to PostgreSQL right now. To look at just the last CommitFest, we've got multiple people from all of Crunchy Data, 2ndQuadrant, EnterpriseDB, Postgres Pro, and NTT; plus Julien Rouhaud from Dalibo and Peter Geoghegan at Heroku and Michael Paquier at VMware, among many others. I'm not sure anyone at CommandPrompt submitted a patch, though. :-) Second, when people don't contribute as much as you think they should to the PostgreSQL community, I don't think that necessarily means their employer is doing something wrong. Sometimes, it may be the employee's choice to spend more time on consulting or support or whatever else they are doing; maybe that's what they like to do. At other times, it may be the thing that has to be done to pay the bills, and I think that's legitimate, too. People have a right to earn a living, and companies have to make money to keep paying their employees. Let's respect people and companies for what they do contribute, rather than labeling it as not good enough. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On 05/13/2016 07:42 AM, Robert Haas wrote: > On Sat, Apr 30, 2016 at 8:46 PM, Joshua Drake <jd@commandprompt.com> wrote: >> Oh, absolutely. I was just pointing out how a lot of companies are hoarding >> talent internally for no productive purpose. > > Wow, really? > > I disagree both with the idea that this is happening and with your > characterization of it. First, there are lots of people contributing > code to PostgreSQL right now. To look at the just the last > CommitFest, we've got multiple people from all of Crunchy Data, > 2ndQuadrant, EnterpriseDB, Postgres Pro, and NTT; plus Julien Rouhaud > from Dalibo and Peter Geoghegan at Heroku and Michael Paquier at > VMware, among many others. Singular point contribution is not the point of my argument. My point is that if three people from EDB and three people from Citus got together and worked on a project in full collaboration it would be more beneficial to the project. > I'm not sure anyone at CommandPrompt > submitted a patch, though. :-) This is true. We have been a little busy doing a lot of things for the community that -hackers are generally not good at. > > Second, when people don't contribute as much as you think they should > to the PostgreSQL community, I don't think that necessarily means > their employer is doing something wrong. Sometimes, it may be the This is also true. > employee's choice to spend more time on consulting or support or > whatever else they are doing; maybe that's what they like to do. At > other times, it may be the thing that has to be done to pay the bills, > and I think that's legitimate, too. People have a right to earning a > living, and companies have to make money to keep paying their > employees. As someone in this community that is responsible for paying the bills including employee salaries, I fully agree with this. > > Let's respect people and companies for what they do contribute, rather > than labeling it as not good enough. > There was no disrespect intended. 
I was trying to push forth an idea that multi-company team collaboration is better for the community than single company team collaboration. I will stand by that assertion. Sincerely, JD -- Command Prompt, Inc. http://the.postgres.company/ +1-503-667-4564 PostgreSQL Centered full stack support, consulting and development. Everyone appreciates your honesty, until you are honest with them.
On Fri, May 13, 2016 at 09:12:23AM -0700, Joshua Drake wrote: > There was no disrespect intended. I was trying to push forth an idea that > multi-company team collaboration is better for the community than single > company team collaboration. I will stand by that assertion. Uh, we are already doing that. EDB and NTT are working on FDWs and sharding, PostgresPro and someone else is working on a transaction manager, and EDB and 2nd Quadrant worked on parallelism. What is the problem you are trying to solve? -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +
On 05/13/2016 09:28 AM, Bruce Momjian wrote: > On Fri, May 13, 2016 at 09:12:23AM -0700, Joshua Drake wrote: >> There was no disrespect intended. I was trying to push forth an idea that >> multi-company team collaboration is better for the community than single >> company team collaboration. I will stand by that assertion. > > Uh, we are already doing that. EDB and NTT are working on FDWs and > sharding, PostgresPro and someone else is working on a transaction > manager, and EDB and 2nd Quadrant worked on parallelism. > > What is the problem you are trying to solve? Hey, if I am wrong that's awesome. The impression I have is that the general workflow is this:

* Company(1) discusses feature with community
* Company(1) works on patch/feature for a period of time
* Company(1) delivers patch to community
* Standard operation continues (patch review, discussion, etc..)

This is not "bad" but it isn't as productive as something like this would be:

* Company(1) + Company(2) get together and discuss using their respective resources to collaboratively develop X (multi-master for example).
* Company(1) + Company(2) discuss feature with community
* Company(1) + Company(2) work on patch/feature in the open, together
* Company(1) + Company(2) deliver patch to community
* Standard operation continues (patch review, discussion, etc...)

The difference being one of coopetition versus competition for the betterment of the community. If there are companies that are doing that already, that is awesome and I applaud it. I was just trying to further drive that idea home. This, on my end, all sourced from this event: https://www.commandprompt.com/blogs/joshua_drake/2016/04/you_are_my_fellow_community_member/ That event is childish and a poor representation of what our community stands for: Excellence, Correctness, Inclusion, Collaboration. Sincerely, JD -- Command Prompt, Inc. http://the.postgres.company/ +1-503-667-4564 PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
On Fri, May 13, 2016 at 09:35:40AM -0700, Joshua Drake wrote: > On 05/13/2016 09:28 AM, Bruce Momjian wrote: > >On Fri, May 13, 2016 at 09:12:23AM -0700, Joshua Drake wrote: > >>There was no disrespect intended. I was trying to push forth an idea that > >>multi-company team collaboration is better for the community than single > >>company team collaboration. I will stand by that assertion. > > > >Uh, we are already doing that. EDB and NTT are working on FDWs and > >sharding, PostgresPro and someone else is working on a transaction > >manager, and EDB and 2nd Quadrant worked on parallelism. > > > >What is the problem you are trying to solve? > > Hey, if I am wrong that's awesome. The impression I have is the general > workflow is this: > > * Company(1) discusses feature with community > * Company(1) works on patch/feature for a period of time > * Company(1) delivers patch to community > * Standard operation continues (patch review, discussion, etc..) Yes, there are some cases of that. I assume it is due to efficiency and the belief that others aren't interested in helping. In a way is a company working on something alone different from a person working on a patch alone? -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +
On 05/13/2016 09:40 AM, Bruce Momjian wrote: > On Fri, May 13, 2016 at 09:35:40AM -0700, Joshua Drake wrote: >> On 05/13/2016 09:28 AM, Bruce Momjian wrote: >>> On Fri, May 13, 2016 at 09:12:23AM -0700, Joshua Drake wrote: >>>> There was no disrespect intended. I was trying to push forth an idea that >>>> multi-company team collaboration is better for the community than single >>>> company team collaboration. I will stand by that assertion. >>> >>> Uh, we are already doing that. EDB and NTT are working on FDWs and >>> sharding, PostgresPro and someone else is working on a transaction >>> manager, and EDB and 2nd Quadrant worked on parallelism. >>> >>> What is the problem you are trying to solve? >> >> Hey, if I am wrong that's awesome. The impression I have is the general >> workflow is this: >> >> * Company(1) discusses feature with community >> * Company(1) works on patch/feature for a period of time >> * Company(1) delivers patch to community >> * Standard operation continues (patch review, discussion, etc..) > > Yes, there are some cases of that. I assume it is due to efficiency and > the belief that others aren't interested in helping. In a way is a > company working on something alone different from a person working on a > patch alone? No but I also think we should discourage that when reasonable as well. Obviously some patches just don't need more than one person but when we are talking about anything that is taking X time (month or more?) then we should actively encourage collaboration. That is all I am really talking about here. A more assertive collaboration for the betterment of the community. When I think about the size of the brain trust we have as a whole, I imagine the great things we could do even better. It isn't magical or overnight but a long term goal. Sincerely, JD -- Command Prompt, Inc. http://the.postgres.company/ +1-503-667-4564 PostgreSQL Centered full stack support, consulting and development. 
Everyone appreciates your honesty, until you are honest with them.
On Fri, May 13, 2016 at 12:12 PM, Joshua D. Drake <jd@commandprompt.com> wrote: > Singular point contribution is not the point of my argument. My point is > that if three people from EDB and three people from Citus got together and > worked on a project in full collaboration it would be more beneficial to the > project. Well, the scalability work in 9.6 went almost exactly like this, assuming you count Andres as three people (which is entirely reasonable) and Dilip, Mithun, Amit, and myself as three people (which is maybe less reasonable, since I don't really want any of us counted as less than a whole person). -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On 05/13/2016 11:48 AM, Robert Haas wrote: > On Fri, May 13, 2016 at 12:12 PM, Joshua D. Drake <jd@commandprompt.com> wrote: >> Singular point contribution is not the point of my argument. My point is >> that if three people from EDB and three people from Citus got together and >> worked on a project in full collaboration it would be more beneficial to the >> project. > > Well, the scalability work in 9.6 went almost exactly like this, > assuming you count Andres as three people (which is entirely > reasonable) and Dilip, Mithun, Amit, and myself as three people (which > is maybe less reasonable, since I don't really want any of us counted > as less than a whole person). Frankly, PostgreSQL is practically a wonderland of inter-company collaboration. Yeah, there's some "does not play nice with others" which happens, but that's pretty much inevitable. Plus, it's also useful to have some companies go in different directions sometimes; the best approach to certain problems isn't always well defined. We might have a little more of that than is completely ideal, but it's rather hard to determine that. Anyway, all of this is a moot point, because nobody has the power to tell the various companies what to do. We're just lucky that everyone is still committed to writing stuff which adds to PostgreSQL. -- -- Josh Berkus Red Hat OSAS (any opinions are my own)
On Fri, May 13, 2016 at 12:35 PM, Joshua D. Drake <jd@commandprompt.com> wrote: > Hey, if I am wrong that's awesome. The impression I have is the general > workflow is this: > > * Company(1) discusses feature with community > * Company(1) works on patch/feature for a period of time > * Company(1) delivers patch to community > * Standard operation continues (patch review, discussion, etc..) > > This is not "bad" but it isn't as productive as something like this would > be: > > * Company(1) + Company(2) get together and discuss using their > respective resources to collaboratively develop X (multi-master for > example). > > * Company(1) + Company(2) discuss feature with community > * Company(1) + Company(2) work on patch/feature in the open, > together > * Company(1) + Company(2) deliver patch to community > * Standard operation continues (patch review, discussion, etc...) > > The difference being one of coopetition versus competition for the > betterment of the community. If there are companies that are doing that > already, that is awesome and I applaud it. I was just trying to further > drive that idea home. I think that's already happening. I'm happy to see more of it. In practical terms, though, it's harder to collaborate between companies because then you need two management teams to be on-board with it, and there can be other competing priorities. If either company needs to pull staff off a project because of some competing priority (say, fixing a broken customer or addressing an urgent customer need), then the whole project can stall. The whole wagon train moves at the pace of the slowest camel. It's nice when we can collaborate across companies and I'm all for it, but sometimes it's faster for a single company to just assign a couple of people to a project and tell them to go do it. Now, where this gets tricky is when it comes down to whether the end-product of that effort is something the community wants. 
We all need to be careful not to make our corporate priorities into community priorities. Features shouldn't get committed without a consensus that they are both useful and well-implemented, and prior discussion is a good way to achieve that. On the whole, I think we've done reasonably well in this area. There is often disagreement but in the end I think we usually end up in a place that is good for PostgreSQL. Hopefully that will continue. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On 05/13/2016 12:05 PM, Robert Haas wrote: > On Fri, May 13, 2016 at 12:35 PM, Joshua D. Drake <jd@commandprompt.com> wrote: >> Hey, if I am wrong that's awesome. The impression I have is the general >> workflow is this: >> The difference being one of coopetition versus competition for the >> betterment of the community. If there are companies that are doing that >> already, that is awesome and I applaud it. I was just trying to further >> drive that idea home. > > I think that's already happening. I'm happy to see more of it. In > practical terms, though, it's harder to collaborate between companies > because then you need two management teams to be on-board with it, and > there can be other competing priorities. Yep, that's true. > If either company needs to > pull staff off a project because of some competing priority (say, > fixing a broken customer or addressing an urgent customer need), then > the whole project can stall. The whole wagon train moves at the pace > of the slowest camel. It's nice when we can collaborate across > companies and I'm all for it, but sometimes it's faster for a > single company to just assign a couple of people to a project and tell > them to go do it. > > Now, where this gets tricky is when it comes down to whether the > end-product of that effort is something the community wants. We all > need to be careful not to make our corporate priorities into community > priorities. Features shouldn't get committed without a consensus that > they are both useful and well-implemented, and prior discussion is a > good way to achieve that. On the whole, I think we've done reasonably > well in this area. There is often disagreement but in the end I think > we usually end up in a place that is good for PostgreSQL. Hopefully that > will continue. > +1 Sincerely, JD -- Command Prompt, Inc. http://the.postgres.company/ +1-503-667-4564 PostgreSQL Centered full stack support, consulting and development. 
Everyone appreciates your honesty, until you are honest with them.
On 05/13/2016 12:03 PM, Josh berkus wrote: > On 05/13/2016 11:48 AM, Robert Haas wrote: >> On Fri, May 13, 2016 at 12:12 PM, Joshua D. Drake <jd@commandprompt.com> wrote: > Anyway, all of this is a moot point, because nobody has the power to > tell the various companies what to do. We're just lucky that everyone > is still committed to writing stuff which adds to PostgreSQL. Lucky? No. We earned it. We earned it through years and years of hard work. Should we be thankful? Absolutely. Should we be grateful that we have such a powerful and engaged commercial contribution base? 100%. JD -- Command Prompt, Inc. http://the.postgres.company/ +1-503-667-4564 PostgreSQL Centered full stack support, consulting and development. Everyone appreciates your honesty, until you are honest with them.
On Fri, May 13, 2016 at 2:05 PM, Robert Haas <robertmhaas@gmail.com> wrote: > Now, where this gets tricky is when it comes down to whether the > end-product of that effort is something the community wants. We all > need to be careful not to make our corporate priorities into community > priorities. Features shouldn't get committed without a consensus that > they are both useful and well-implemented, and prior discussion is a > good way to achieve that. On the whole, I think we've done reasonably > well in this area. There is often disagreement but in the end I think > we usually end up in a place that is good for PostgreSQL. Hopefully that > will continue. PostgreSQL is a miracle of good development practices. Companies can collaborate or contribute in any manner they see fit along with the community at large. Demands made on others based on opinion and/or selfish benefit get you nowhere fast, and that's a good thing. Open source projects, especially the bigger/better run ones, tend to migrate towards development processes that are highly informal and great in terms of the ratio of effort to progress in all things, including and especially, collaboration. I regularly emulate the way things are done here as best I can, which tends to completely freak out the more corporate type developers that run most of the bigger shops. If I'm writing emails around here, it's probably because five minutes earlier some developer refused to fix a bug because the resolution wasn't notated properly in some byzantine requirements document. Solace. merlin
On 05/13/2016 01:04 PM, Joshua D. Drake wrote: > On 05/13/2016 12:03 PM, Josh berkus wrote: >> On 05/13/2016 11:48 AM, Robert Haas wrote: >>> On Fri, May 13, 2016 at 12:12 PM, Joshua D. Drake >>> <jd@commandprompt.com> wrote: > >> Anyway, all of this is a moot point, because nobody has the power to >> tell the various companies what to do. We're just lucky that everyone >> is still committed to writing stuff which adds to PostgreSQL. > > Lucky? No. We earned it. We earned it through years and years of hard > work. Should we be thankful? Absolutely. Should we be grateful that we > have such a powerful and engaged commercial contribution base? 100%. Lucky. Sure there was work and personal integrity involved, but like any success story, there was luck. But we've also been fortunate in not spawning hostile-but-popular forks by people who left the project, and that none of the companies who created hostile forks were very successful with them, and that nobody has seriously tried using lawyers to control/ruin the project. And, most importantly, we've been lucky that a lot of competing projects have self-immolated instead of being successful and brain-draining our contributors (MySQL, ANTS, MonetDB, etc.) -- -- Josh Berkus Red Hat OSAS (any opinions are my own)
On 13 May 2016, at 21:42, Josh berkus <josh@agliodbs.com> wrote: > On 05/13/2016 01:04 PM, Joshua D. Drake wrote: >> On 05/13/2016 12:03 PM, Josh berkus wrote: >>> On 05/13/2016 11:48 AM, Robert Haas wrote: >>>> On Fri, May 13, 2016 at 12:12 PM, Joshua D. Drake >>>> <jd@commandprompt.com> wrote: >> >>> Anyway, all of this is a moot point, because nobody has the power to >>> tell the various companies what to do. We're just lucky that everyone >>> is still committed to writing stuff which adds to PostgreSQL. >> >> Lucky? No. We earned it. We earned it through years and years of hard >> work. Should we be thankful? Absolutely. Should we be grateful that we >> have such a powerful and engaged commercial contribution base? 100%. > > Lucky. Sure there was work and personal integrity involved, but like > any success story, there was luck. > > But we've also been fortunate in not spawning hostile-but-popular forks > by people who left the project, and that none of the companies who > created hostile forks were very successful with them, and that nobody > has seriously tried using lawyers to control/ruin the project. > > And, most importantly, we've been lucky that a lot of competing projects > have self-immolated instead of being successful and brain-draining our > contributors (MySQL, ANTS, MonetDB, etc.) Oracle buying MySQL (via Sun) seems to have helped things along pretty well too. :) + Justin -- "My grandfather once told me that there are two kinds of people: those who work and those who take the credit. He told me to try to be in the first group; there was less competition there." - Indira Gandhi
On 05/13/2016 01:42 PM, Josh berkus wrote: > On 05/13/2016 01:04 PM, Joshua D. Drake wrote: >> On 05/13/2016 12:03 PM, Josh berkus wrote: >>> On 05/13/2016 11:48 AM, Robert Haas wrote: >>>> On Fri, May 13, 2016 at 12:12 PM, Joshua D. Drake >>>> <jd@commandprompt.com> wrote: >> >>> Anyway, all of this is a moot point, because nobody has the power to >>> tell the various companies what to do. We're just lucky that everyone >>> is still committed to writing stuff which adds to PostgreSQL. >> >> Lucky? No. We earned it. We earned it through years and years of hard >> work. Should we be thankful? Absolutely. Should we be grateful that we >> have such a powerful and engaged commercial contribution base? 100%. > > Lucky. Sure there was work and personal integrity involved, but like > any success story, there was luck. > > But we've also been fortunate in not spawning hostile-but-popular forks > by people who left the project, and that none of the companies who > created hostile forks were very successful with them, and that nobody > has seriously tried using lawyers to control/ruin the project. I can't get behind you on this. Everything you have said above has to do with the hard work and integrity of the people in this project. It isn't luck or divine intervention. > > And, most importantly, we've been lucky that a lot of competing projects > have self-immolated instead of being successful and brain-draining our > contributors (MySQL, ANTS, MonetDB, etc.) > Actually there are people that have been drained out, I won't name them but it is pretty easy to figure out who they are. The people that are left, especially the long timers are here because of their integrity and attachment to the project. This project builds good people, and the good people build a good project. I am not going to buy into a luck story for something I and many others have invested decades of their life into. JD -- Command Prompt, Inc. 
http://the.postgres.company/ +1-503-667-4564 PostgreSQL Centered full stack support, consulting and development. Everyone appreciates your honesty, until you are honest with them.
Subject: Re: Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0
From: "David G. Johnston"
> On 05/13/2016 01:42 PM, Josh berkus wrote:
>> On 05/13/2016 01:04 PM, Joshua D. Drake wrote:
>>> On 05/13/2016 12:03 PM, Josh berkus wrote:
>>>> On 05/13/2016 11:48 AM, Robert Haas wrote:
>>>>> On Fri, May 13, 2016 at 12:12 PM, Joshua D. Drake
>>>>> <jd@commandprompt.com> wrote:
>>>
>>>> Anyway, all of this is a moot point, because nobody has the power to
>>>> tell the various companies what to do. We're just lucky that everyone
>>>> is still committed to writing stuff which adds to PostgreSQL.
>>>
>>> Lucky? No. We earned it. We earned it through years and years of hard
>>> work. Should we be thankful? Absolutely. Should we be grateful that we
>>> have such a powerful and engaged commercial contribution base? 100%.
>>
>> Lucky. Sure there was work and personal integrity involved, but like
>> any success story, there was luck.
>>
>> But we've also been fortunate in not spawning hostile-but-popular forks
>> by people who left the project, and that none of the companies who
>> created hostile forks were very successful with them, and that nobody
>> has seriously tried using lawyers to control/ruin the project.
>
> I can't get behind you on this. Everything you have said above has to do
> with the hard work and integrity of the people in this project. It isn't
> luck or divine intervention.
>
>> And, most importantly, we've been lucky that a lot of competing projects
>> have self-immolated instead of being successful and brain-draining our
>> contributors (MySQL, ANTS, MonetDB, etc.)
>
> Actually there are people that have been drained out, I won't name them
> but it is pretty easy to figure out who they are. The people that are
> left, especially the long timers, are here because of their integrity
> and attachment to the project.
>
> This project builds good people, and the good people build a good project.
>
> I am not going to buy into a luck story for something I and many others
> have invested decades of their life into.
I'm not sure how this ended up a philosophical/religious argument, but given that we are simply lucky that a meteor didn't wipe out our species 20 thousand years ago, our present reality is a combination of determination and luck. Trying to look at shorter-term slices and assign weights to how much was luck and how much was determination seems like something best done over drinks.
In the vein of "praise publicly, scold privately": if anyone here thinks that someone else should be acting differently, I'd suggest sending them a private message to that effect. More useful would be to point out on these lists things you find are done well.
David J.
Subject: Re: Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0
From: "David G. Johnston"
> On 05/13/2016 07:42 AM, Robert Haas wrote:
>> On Sat, Apr 30, 2016 at 8:46 PM, Joshua Drake <jd@commandprompt.com> wrote:
>>> Oh, absolutely. I was just pointing out how a lot of companies are
>>> hoarding talent internally for no productive purpose.
>>
>> Wow, really?
>> [...]
>> Let's respect people and companies for what they do contribute, rather
>> than labeling it as not good enough.
>
> There was no disrespect intended. I was trying to push forth an idea that
> multi-company team collaboration is better for the community than single
> company team collaboration. I will stand by that assertion.
Yeah, that failed...
I find it much easier to debate the assertion "multi-company team collaboration is better ..." than debating whether "companies are hoarding...".
I'll disagree - such a blanket statement is nearly worthless. Three one-person companies working together would not automatically produce better outputs than a single company dedicating 3 people to an endeavor. But even that retort requires me to cheat because I'm able to draw a conclusion about the fallacy of generalization without needing to be any more precise as to my concept of "better".
When it comes to the first statement nothing can be done to avoid the fact that no frame-of-reference has been established for "productive purpose".
Contrary to my earlier point the nature of our setup makes the whole "scold privately" aspect difficult - and maybe this approach works better than I can imagine. Though the down-thread from the quoted post does seem to indicate that considerable non-lethal friendly-fire is one of its byproducts.
David J.
On 4/29/16 10:37 AM, Joshua D. Drake wrote: >>> 5. Transparent upgrade-in-place (i.e. allowing 10.2 to use 10.1's tables >>> without pg_upgrade or other modification). >> >> Technically, this is exactly what pg_upgrade does. I think what you >> really mean is for the backend binary to be able to read the system >> tables and WAL files of the old clusters --- something I can't see us >> implementing anytime soon. >> > > For the most part, pg_upgrade is good enough. There are exceptions and > it does need a more thorough test suite but as a whole, it works. As > nice as being able to install 9.6 right on top of 9.5 and have 9.6 > magically work, it is certainly not a *requirement* anymore. My 2 issues with pg_upgrade are: 1) It's prone to bugs, because it operates at the binary level. This is similar to how it's MUCH easier to mess up PITR than pg_dump. Perhaps there's no way to avoid this. 2) There's no ability at all to revert, other than restore a backup. That means if you pull the trigger and discover some major performance problem, you have no choice but to deal with it (you can't switch back to the old version without losing data). For many users those issues just don't matter; but in my work with financial data it's why I've never actually used it. #2 especially was good to have (in our case, via londiste). It also made it a lot easier to find performance issues beforehand, by switching reporting replicas over to the new version first. One other consideration is cut-over time. Swapping logical master and replica can happen nearly instantly, while pg_upgrade needs some kind of outage window. -- Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX Experts in Analytics, Data Architecture and PostgreSQL Data in Trouble? Get it in Treble! http://BlueTreble.com 855-TREBLE2 (855-873-2532) mobile: 512-569-9461
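[Editorial sketch] Jim's first concern (pg_upgrade operating at the binary level and being prone to bugs) is one reason pg_upgrade ships a --check mode, which inspects both clusters for incompatibilities without modifying anything. A dry-run sketch of a typical invocation follows; the binary and data directory paths are illustrative placeholders, not from this thread:

```shell
#!/bin/sh
# Dry-run sketch of a pg_upgrade pre-flight check. All paths below are
# made-up placeholders; a real run needs both clusters installed and stopped.
OLD_BIN=/usr/lib/postgresql/9.5/bin
NEW_BIN=/usr/lib/postgresql/9.6/bin

# --check inspects old and new clusters for incompatibilities
# without changing either one.
CMD="pg_upgrade -b $OLD_BIN -B $NEW_BIN -d /data/9.5 -D /data/9.6 --check"
echo "would run: $CMD"
```

This catches schema-level blockers (e.g. incompatible data types) up front, though it of course cannot guarantee the absence of the binary-level bugs Jim is worried about.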
On Sun, May 15, 2016 at 03:23:52PM -0500, Jim Nasby wrote: > 2) There's no ability at all to revert, other than restore a backup. That > means if you pull the trigger and discover some major performance problem, > you have no choice but to deal with it (you can't switch back to the old > version without losing data). In --link mode only > For many users those issues just don't matter; but in my work with financial > data it's why I've never actually used it. #2 especially was good to have > (in our case, via londiste). It also made it a lot easier to find > performance issues beforehand, by switching reporting replicas over to the > new version first. > > One other consideration is cut-over time. Swapping logical master and > replica can happen nearly instantly, while pg_upgrade needs some kind of > outage window. Right. I am thinking of writing some docs about how to avoid downtime for upgrades of various types. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +
On Mon, May 16, 2016 at 3:36 AM, Bruce Momjian <bruce@momjian.us> wrote: > On Sun, May 15, 2016 at 03:23:52PM -0500, Jim Nasby wrote: >> 2) There's no ability at all to revert, other than restore a backup. That >> means if you pull the trigger and discover some major performance problem, >> you have no choice but to deal with it (you can't switch back to the old >> version without losing data). > > In --link mode only No, not really. Once you let write transactions into the new cluster, there's no way to get back to the old server version no matter which option you used. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On 5/16/16 2:36 AM, Bruce Momjian wrote: > Right. I am thinking of writing some docs about how to avoid downtime > for upgrades of various types. If there's some magic sauce to shrink pg_upgrade downtime to near 0 I think folks would be very interested in that. Outside of that scenario, I think what would be far more useful is information on how to do seamless master/replica switchovers using tools like pgBouncer or pgPool. That ability is useful *all* the time, not just when upgrading. It makes it trivial to do OS-level maintenance, and if you're using a form of logical replication it also makes it trivial to do expensive database maintenance, such as cluster/vacuum full/reindex. I've worked with a few clients that had that ability and it was a huge stress reducer. As a bonus, an unplanned outage of the master becomes far less stressful, because you already know exactly how to fail over. -- Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX Experts in Analytics, Data Architecture and PostgreSQL Data in Trouble? Get it in Treble! http://BlueTreble.com 855-TREBLE2 (855-873-2532) mobile: 512-569-9461
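[Editorial sketch] The pgBouncer-based switchover Jim describes can be outlined roughly as below. This is a dry-run skeleton only, not a tested procedure; the admin port, database name, pause semantics around in-flight sessions, and config path are all assumptions for illustration (PAUSE, RELOAD, and RESUME are real pgBouncer admin-console commands):

```shell
#!/bin/sh
# Rough outline of a master/replica switchover behind pgBouncer.
# DRY_RUN=1 (the default here) only prints each step; hosts and paths
# are illustrative.
set -e
DRY_RUN=${DRY_RUN:-1}
ADMIN="psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer -c"

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else eval "$@"; fi
}

run "$ADMIN 'PAUSE mydb;'"             # stop handing out server connections
run "pg_ctl promote -D /replica/data"  # promote the replica once it is caught up
run "sed -i 's/oldhost/newhost/' /etc/pgbouncer/pgbouncer.ini"  # repoint the pool
run "$ADMIN 'RELOAD;'"
run "$ADMIN 'RESUME mydb;'"            # clients resume against the new master
```

The same PAUSE/RESUME dance works for OS maintenance windows, which is Jim's point about the ability being useful all the time, not just for upgrades.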
On Fri, May 20, 2016 at 07:40:53PM -0400, Robert Haas wrote: > On Mon, May 16, 2016 at 3:36 AM, Bruce Momjian <bruce@momjian.us> wrote: > > On Sun, May 15, 2016 at 03:23:52PM -0500, Jim Nasby wrote: > >> 2) There's no ability at all to revert, other than restore a backup. That > >> means if you pull the trigger and discover some major performance problem, > >> you have no choice but to deal with it (you can't switch back to the old > >> version without losing data). > > > > In --link mode only > > No, not really. Once you let write transactions into the new cluster, > there's no way to get back to the old server version no matter which > option you used. Yes, there is, and it is documented: If you ran <command>pg_upgrade</command> <emphasis>without</> <option>--link</> or did not start the new server, the old cluster was not modified except that, if linking started, a <literal>.old</> suffix was appended to <filename>$PGDATA/global/pg_control</>. To reuse the old cluster, possibly remove the <filename>.old</> suffix from <filename>$PGDATA/global/pg_control</>; you can then restart the old cluster. What is confusing you? -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +
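[Editorial sketch] The documented revert Bruce quotes amounts to a single rename plus a restart. The sketch below demonstrates it against a throwaway directory standing in for the old cluster's data directory (no real cluster is involved):

```shell
#!/bin/sh
# Demonstrates the documented copy-mode revert: pg_upgrade renames the old
# cluster's pg_control to pg_control.old; putting it back re-enables the
# old cluster. /tmp/pg_revert_demo is a stand-in, not a real data directory.
set -e
PGDATA=/tmp/pg_revert_demo
mkdir -p "$PGDATA/global"
touch "$PGDATA/global/pg_control.old"   # what pg_upgrade leaves behind

# The actual revert step: drop the .old suffix.
mv "$PGDATA/global/pg_control.old" "$PGDATA/global/pg_control"
echo "pg_control restored; the old cluster could now be started with pg_ctl -D $PGDATA start"
```

As Robert notes just below, this restores the old cluster's *state*, not any writes made to the new cluster in the meantime.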
On Tue, May 24, 2016 at 09:23:27AM -0500, Jim Nasby wrote: > On 5/16/16 2:36 AM, Bruce Momjian wrote: > >Right. I am thinking of writing some docs about how to avoid downtime > >for upgrades of various types. > > If there's some magic sauce to shrink pg_upgrade downtime to near 0 I think > folks would be very interested in that. > > Outside of that scenario, I think what would be far more useful is > information on how to do seamless master/replica switchovers using tools > like pgBouncer or pgPool. That ability is useful *all* the time, not just > when upgrading. It makes it trivial to do OS-level maintenance, and if > you're using a form of logical replication it also makes it trivial to do > expensive database maintenance, such as cluster/vacuum full/reindex. I've > worked with a few clients that had that ability and it was a huge stress > reducer. As a bonus, an unplanned outage of the master becomes far less > stressful, because you already know exactly how to fail over. pg_upgrade's runtime can't be decreased dramatically --- I wanted to document other methods like you described. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +
On Mon, Jun 20, 2016 at 10:08 PM, Bruce Momjian <bruce@momjian.us> wrote: > On Fri, May 20, 2016 at 07:40:53PM -0400, Robert Haas wrote: >> On Mon, May 16, 2016 at 3:36 AM, Bruce Momjian <bruce@momjian.us> wrote: >> > On Sun, May 15, 2016 at 03:23:52PM -0500, Jim Nasby wrote: >> >> 2) There's no ability at all to revert, other than restore a backup. That >> >> means if you pull the trigger and discover some major performance problem, >> >> you have no choice but to deal with it (you can't switch back to the old >> >> version without losing data). >> > >> > In --link mode only >> >> No, not really. Once you let write transactions into the new cluster, >> there's no way to get back to the old server version no matter which >> option you used. > > Yes, there is, and it is documented: > > If you ran <command>pg_upgrade</command> <emphasis>without</> > <option>--link</> or did not start the new server, the > old cluster was not modified except that, if linking > started, a <literal>.old</> suffix was appended to > <filename>$PGDATA/global/pg_control</>. To reuse the old > cluster, possibly remove the <filename>.old</> suffix from > <filename>$PGDATA/global/pg_control</>; you can then restart the > old cluster. > > What is confusing you? I don't think I'm confused. Sure, you can do that, but the effects of any writes performed on the new cluster will not be there when you revert back to the old cluster. So you will have effectively lost data, unless you somehow have the ability to re-apply all of those write transactions somehow. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On 21 June 2016 at 20:19, Robert Haas <robertmhaas@gmail.com> wrote:
On Mon, Jun 20, 2016 at 10:08 PM, Bruce Momjian <bruce@momjian.us> wrote:
> On Fri, May 20, 2016 at 07:40:53PM -0400, Robert Haas wrote:
>> On Mon, May 16, 2016 at 3:36 AM, Bruce Momjian <bruce@momjian.us> wrote:
>> > On Sun, May 15, 2016 at 03:23:52PM -0500, Jim Nasby wrote:
>> >> 2) There's no ability at all to revert, other than restore a backup. That
>> >> means if you pull the trigger and discover some major performance problem,
>> >> you have no choice but to deal with it (you can't switch back to the old
>> >> version without losing data).
>> >
>> > In --link mode only
>>
>> No, not really. Once you let write transactions into the new cluster,
>> there's no way to get back to the old server version no matter which
>> option you used.
>
> Yes, there is, and it is documented:
>
> If you ran <command>pg_upgrade</command> <emphasis>without</>
> <option>--link</> or did not start the new server, the
> old cluster was not modified except that, if linking
> started, a <literal>.old</> suffix was appended to
> <filename>$PGDATA/global/pg_control</>. To reuse the old
> cluster, possibly remove the <filename>.old</> suffix from
> <filename>$PGDATA/global/pg_control</>; you can then restart the
> old cluster.
>
> What is confusing you?
I don't think I'm confused. Sure, you can do that, but the effects of
any writes performed on the new cluster will not be there when you
revert back to the old cluster. So you will have effectively lost
data, unless you somehow have the ability to re-apply all of those
write transactions somehow.
Also, if you run *with* --link, IIRC there's no guarantee that the old version will be happy to see any new infomask bits etc introduced by the new Pg. I think there will also be issues with oid to relfilenode mappings in pg_class if the new cluster did any VACUUM FULLs or anything. It seems likely to be a bit risky to fall back on the old cluster once you've upgraded with --link. TBH it never even occurred to me that it'd be possible at all until you mentioned it.
I always thought of pg_upgrade as a one-way no-going-back process either way, really. Either due to a fork in history (without --link) or due to possibly incompatible datadir changes (with --link).
On Tue, Jun 21, 2016 at 08:19:55AM -0400, Robert Haas wrote: > On Mon, Jun 20, 2016 at 10:08 PM, Bruce Momjian <bruce@momjian.us> wrote: > >> No, not really. Once you let write transactions into the new cluster, > >> there's no way to get back to the old server version no matter which > >> option you used. > > > > Yes, there is, and it is documented: > > > > If you ran <command>pg_upgrade</command> <emphasis>without</> > > <option>--link</> or did not start the new server, the > > old cluster was not modified except that, if linking > > started, a <literal>.old</> suffix was appended to > > <filename>$PGDATA/global/pg_control</>. To reuse the old > > cluster, possibly remove the <filename>.old</> suffix from > > <filename>$PGDATA/global/pg_control</>; you can then restart the > > old cluster. > > > > What is confusing you? > > I don't think I'm confused. Sure, you can do that, but the effects of > any writes performed on the new cluster will not be there when you > revert back to the old cluster. So you will have effectively lost > data, unless you somehow have the ability to re-apply all of those > write transactions somehow. Yes, that is true. I assume _revert_ means something really bad happened and you don't want those writes because they are somehow corrupt. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +
On Tue, Jun 21, 2016 at 11:34 AM, Bruce Momjian <bruce@momjian.us> wrote: > On Tue, Jun 21, 2016 at 08:19:55AM -0400, Robert Haas wrote: >> On Mon, Jun 20, 2016 at 10:08 PM, Bruce Momjian <bruce@momjian.us> wrote: >> >> No, not really. Once you let write transactions into the new cluster, >> >> there's no way to get back to the old server version no matter which >> >> option you used. >> > >> > Yes, there is, and it is documented: >> > >> > If you ran <command>pg_upgrade</command> <emphasis>without</> >> > <option>--link</> or did not start the new server, the >> > old cluster was not modified except that, if linking >> > started, a <literal>.old</> suffix was appended to >> > <filename>$PGDATA/global/pg_control</>. To reuse the old >> > cluster, possibly remove the <filename>.old</> suffix from >> > <filename>$PGDATA/global/pg_control</>; you can then restart the >> > old cluster. >> > >> > What is confusing you? >> >> I don't think I'm confused. Sure, you can do that, but the effects of >> any writes performed on the new cluster will not be there when you >> revert back to the old cluster. So you will have effectively lost >> data, unless you somehow have the ability to re-apply all of those >> write transactions somehow. > > Yes, that is true. I assume _revert_ means something really bad > happened and you don't want those writes because they are somehow > corrupt. I think that it's pretty likely you could, say, upgrade to a new major release, discover that it has a performance problem or some other bug that causes a problem for you, and want to go back to the older release. There's not really an easy way to do that, because a pg_dump taken from the new system might not restore on the older one. Logical replication - e.g. Slony - can provide a way, but we don't have anything in core that can do it. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On Tue, Jun 21, 2016 at 08:56:09PM +0800, Craig Ringer wrote: > Also, if you run *with* --link, IIRC there's no guarantee that the old version > will be happy to see any new infomask bits etc introduced by the new Pg. I Well, we only write system tables in pg_upgrade in the new cluster, and those are not hard linked. As far as I know, we never write to anything we hard link from the old cluster. > think there will also be issues with oid to relfilenode mappings in pg_class if > the new cluster did any VACUUM FULLs or anything. It seems likely to be a bit pg_upgrade turns off all vacuums. > risky to fall back on the old cluster once you've upgraded with --link . TBH it > never even occurred to me that it'd be possible at all until you mentioned. Well, with --link, you can't start the old cluster, and that is documented, and pg_control is renamed to prevent accidental start. I think it is possible to start the old cluster before the new cluster is started. > I always thought of pg_upgrade as a one-way no-going-back process either way, > really. Either due to a fork in history (without --link) or due to possibly > incompatible datadir changes (with --link). Yes. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +
On Tue, Jun 21, 2016 at 12:12:34PM -0400, Robert Haas wrote: > >> > What is confusing you? > >> > >> I don't think I'm confused. Sure, you can do that, but the effects of > >> any writes performed on the new cluster will not be there when you > >> revert back to the old cluster. So you will have effectively lost > >> data, unless you somehow have the ability to re-apply all of those > >> write transactions somehow. > > > > Yes, that is true. I assume _revert_ means something really bad > > happened and you don't want those writes because they are somehow > > corrupt. > > I think that it's pretty likely you could, say, upgrade to a new major > release, discover that it has a performance problem or some other bug > that causes a problem for you, and want to go back to the older > release. There's not really an easy way to do that, because a pg_dump > taken from the new system might not restore on the older one. Logical > replication - e.g. Slony - can provide a way, but we don't have > anything in core that can do it. Yes, there is data loss in a rollback to the old cluster, no question. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +