Re: Incremental results from libpq - Mailing list pgsql-interfaces
From | Goulet, Dick
Subject | Re: Incremental results from libpq
Date |
Msg-id | 4001DEAF7DF9BD498B58B45051FBEA6502EF559C@25exch1.vicorpower.vicr.com
In response to | Incremental results from libpq (Scott Lamb <slamb@slamb.org>)
Responses | Re: Incremental results from libpq
List | pgsql-interfaces
Tom,

Your case for not supporting this is reasonable, at least to me. Personally I believe you should take one side or the other at the server level and then allow the app developer to use it as appropriate, so no argument here. But there was a change in behavior introduced by Oracle in 10g that supports what Trolltech asked for. The optimizer was given the "smarts" to determine whether a query is best served by a regular cursor or by a bulk collect in the background. The end result is that the application behaves as normal, but the results get back to it faster. What appears to be happening is that the database returns the first row as normal, but then continues collecting data rows and sequestering them off somewhere, probably the temp tablespace, until you're ready for them. This appears to have driven the final nail into the coffin of the old "ORA-01555 Snapshot too old" error. Of course, since PostgreSQL doesn't have undo segments, you don't have that problem.

-----Original Message-----
From: pgsql-interfaces-owner@postgresql.org [mailto:pgsql-interfaces-owner@postgresql.org] On Behalf Of Tom Lane
Sent: Wednesday, November 16, 2005 9:24 AM
To: Peter Eisentraut
Cc: pgsql-interfaces@postgresql.org; Scott Lamb
Subject: Re: [INTERFACES] Incremental results from libpq

Peter Eisentraut <peter_e@gmx.net> writes:
> On Wednesday, 9 November 2005 at 22:22, Tom Lane wrote:
>> The main reason why libpq does what it does is that this way we do not
>> have to expose in the API the notion of a command that fails part way
>> through.

> I'm at LinuxWorld Frankfurt and one of the Trolltech guys came over to
> talk to me about this. He opined that it would be beneficial for their
> purpose (in certain cases) if the server would first compute the entire
> result set and keep it in the server memory (thus eliminating potential
> errors of the 1/x kind) and then ship it to the client in a way that the
> client would be able to fetch it piecewise. Then, the client application
> could build the display incrementally while the rest of the result set
> travels over the (slow) link. Does that make sense?

Ick. That seems pretty horrid compared to the straight
incremental-compute-and-fetch approach. Yes, it preserves the illusion
that a SELECT is all-or-nothing, but at a very high cost, both in terms
of absolute runtime and in terms of needing a new concept in the
frontend protocol. It also doesn't solve the problem for people who need
incremental fetch because they have a result set so large they don't
want it materialized on either end of the wire.

Furthermore, ISTM that any client app that's engaging in incremental
fetches really has to deal with the failure-after-part-of-the-query-is-done
problem anyway, because there's always a risk of failures on the client
side or in the network connection. So I don't see any real gain in
conceptual simplicity from adding this feature anyway.

Note that if Trolltech really want this behavior, they can have it today
--- it's called CREATE TEMP TABLE AS SELECT. It doesn't seem attractive
enough to me to justify any further feature than that.

regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not match
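For reference, a minimal sketch of the workaround Tom points at: materialize the full result server-side with CREATE TEMP TABLE AS SELECT, then ship it over the slow link piecewise with an ordinary cursor. The table, column, and cursor names here are illustrative, not from the thread:

    BEGIN;

    -- Compute the whole result up front; any mid-query error (the "1/x"
    -- kind Peter mentions) is raised here, before the client has
    -- consumed a single row.
    CREATE TEMP TABLE whole_result AS
        SELECT x, 1 / x AS inverse
        FROM data;

    -- Pull the materialized rows across the wire in small batches.
    DECLARE cur CURSOR FOR SELECT * FROM whole_result;
    FETCH 100 FROM cur;   -- repeat until FETCH returns zero rows
    CLOSE cur;

    COMMIT;

The extra server-side copy is exactly the runtime cost Tom objects to, but the pattern needs no change to libpq or the frontend protocol.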