Re: Re: Proposal to fix Statement.executeBatch() - Mailing list pgsql-jdbc
From | Barry Lind
---|---
Subject | Re: Re: Proposal to fix Statement.executeBatch()
Date |
Msg-id | 3B9673B4.90602@xythos.com
In response to | Proposal to fix Statement.executeBatch() (Rene Pijlman <rpijlman@wanadoo.nl>)
Responses | Re: Re: Proposal to fix Statement.executeBatch()
List | pgsql-jdbc
Rene,

This writeup looks good. As for your two followup questions:

1) I agree with your opinion on the select issue. Since selects are the only way to call functions, and functions could themselves be doing inserts/updates, not allowing selects is a bad idea IMHO.

2) The 8K limitation was removed from the database in 7.0 and from the JDBC driver in 7.0.2. Therefore I don't think we should code for the case of a 7.2 JDBC driver needing to support a 6.5 database.

thanks,
--Barry

Rene Pijlman wrote:
> On Mon, 27 Aug 2001 22:57:13 -0700, Barry Lind wrote:
>
>> I do not know what the server does if you have autocommit enabled
>> and you issue multiple statements in one try.
>
> As you know, Peter Eisentraut said on hackers that all
> statements in a semicolon-separated query string are processed
> as one single transaction. So, if in "S1;S2;S3" S2 fails, then
> S1, S2 and S3 are all rolled back and in effect they have all
> failed.
>
> I think this means we can implement your proposal. I've removed
> the paragraph saying that we can't and I've added the following
> to http://lab.applinet.nl/postgresql-jdbc/#Batch
>
> -+-+-
>
> The current implementation of Statement.executeBatch() in the
> JDBC driver does not provide any performance improvement
> compared to processing statements individually. This is because
> the driver executes the statements one by one when
> executeBatch() is called, using one round trip per statement. We
> intend to reimplement executeBatch() in the following way.
>
> Statement.executeBatch() will send all statements in a single
> semicolon-separated query string, with only one round trip to
> the backend. This will provide a performance improvement, as
> intended by the JDBC specification.
>
> The updateCounts array will be set as described below. Note that
> the JDBC spec defines the meaning of the following special
> values:
>
> -2: the statement was executed successfully, but the number of
> affected rows is unknown
> -3: the statement failed to execute
>
> If all statements succeed, executeBatch() returns an
> updateCounts array with a row count for each statement in the
> batch; however, the value for all but the last statement will be
> -2. The value for the last statement will be a proper update
> count.
>
> If a statement fails, executeBatch() throws a
> BatchUpdateException containing an updateCounts array with a row
> count of -3 for each statement in the batch.
>
> Note that the behaviour will be the same whether autocommit is
> enabled or disabled. Even with autocommit enabled, the backend
> will commit or roll back all statements in the
> semicolon-separated query string as a unit.
>
> The new implementation of executeBatch() will cause a change in
> the driver's behaviour: the driver will no longer return update
> counts for all statements in a batch as it currently does; it
> will return -2 ("unknown") for most statements. However, this
> behaviour is allowed by the JDBC spec, and applications should
> be prepared to handle it.
>
> -+-+-
>
> I see two more issues we need to decide on...
>
> 1) The JDBC spec requires Statement.executeBatch() to throw a
> BatchUpdateException if any of the statements does not return an
> update count (e.g. is a SELECT). How can we implement this? Do
> we need to parse the statements in the JDBC driver to detect
> SELECTs? It is a matter of interpretation, but it seems OK to
> me to just ignore this and return -2/-3 for SELECTs as well. In
> fact, perhaps we should allow SELECTs for function calls!?
>
> 2) The reimplementation may cause the driver to send very long
> statements to the backend. I heard something about an 8K limit.
> In what version of the backend was this limitation removed? I
> guess we should implement the new algorithm conditionally, so
> we'll only send multi-statement query strings to a backend that
> has no statement length limitation.
>
> Regards,
> René Pijlman <rene@lab.applinet.nl>
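For reference, here is a minimal application-side sketch of the proposed behaviour. It is not part of the original thread: the connection URL, table name, and column names are made up for illustration, and the -2/-3 handling follows the semantics described in the quoted proposal above.

```java
import java.sql.*;

public class BatchExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; adjust to your environment.
        Class.forName("org.postgresql.Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
        con.setAutoCommit(false);

        Statement stmt = con.createStatement();
        stmt.addBatch("INSERT INTO item (id, name) VALUES (1, 'one')");
        stmt.addBatch("INSERT INTO item (id, name) VALUES (2, 'two')");
        stmt.addBatch("UPDATE item SET name = 'uno' WHERE id = 1");

        try {
            int[] counts = stmt.executeBatch();
            for (int i = 0; i < counts.length; i++) {
                if (counts[i] == -2) {
                    // Succeeded, but row count unknown: under the proposal
                    // this is returned for all but the last statement.
                    System.out.println("Statement " + i + ": ok, count unknown");
                } else {
                    System.out.println("Statement " + i + ": " + counts[i] + " rows");
                }
            }
            con.commit();
        } catch (BatchUpdateException e) {
            // Under the proposal every entry is -3: the semicolon-joined
            // query string is committed or rolled back as a unit, so a
            // single failure means all statements in the batch failed.
            int[] counts = e.getUpdateCounts();
            System.err.println("Batch failed, " + counts.length
                    + " statements rolled back: " + e.getMessage());
            con.rollback();
        } finally {
            stmt.close();
            con.close();
        }
    }
}
```

The all-or-nothing handling in the catch block mirrors Peter Eisentraut's point quoted above: because the backend treats a semicolon-separated query string as a single transaction, an application cannot assume that statements before the failing one took effect.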