Re: Workarounds for getBinaryStream returning ByteArrayInputStream on bytea - Mailing list pgsql-jdbc
From | Radosław Smogura |
---|---|
Subject | Re: Workarounds for getBinaryStream returning ByteArrayInputStream on bytea |
Date | |
Msg-id | d6544fb5a3293dfb4ac6586893ab3417@smogura-softworks.eu |
In response to | Re: Workarounds for getBinaryStream returning ByteArrayInputStream on bytea (Александър Шопов <lists@kambanaria.org>) |
Responses | Re: Workarounds for getBinaryStream returning ByteArrayInputStream on bytea |
List | pgsql-jdbc |
Hi,

I would like to send a few files for getBinaryStream(), so that it works much more like a stream and doesn't eat so much heap. I don't copy the source this_row[i] array, so I don't know how this will behave with concurrent updates (the original method doesn't copy it when the column is not bytea either). I left a few comments about whether we should throw an exception on broken streams in 8.4, or just silently report EOF. One thing in the code below still needs to be changed to a PSQLException. Below is AbstractJdbc2ResultSet.getBinaryStream:

public InputStream getBinaryStream(int columnIndex) throws SQLException
{
    checkResultSet( columnIndex );
    if (wasNullFlag)
        return null;

    if (connection.haveMinimumCompatibleVersion("7.2"))
    {
        //Version 7.2 supports BinaryStream for all PG bytea type
        //As the spec/javadoc for this method indicate this is to be used for
        //large binary values (i.e. LONGVARBINARY) PG doesn't have a separate
        //long binary datatype, but with toast the bytea datatype is capable of
        //handling very large values.  Thus the implementation ends up calling
        //getBytes() since there is no current way to stream the value from the server

        //Copy of some logic from getBytes
        //Version 7.2 supports the bytea datatype for byte arrays
        if (fields[columnIndex - 1].getOID() == Oid.BYTEA)
        {
            //TODO: Move to datacast in future
            final byte[] bytes = this_row[columnIndex - 1];

            // Starting with PG 9.0, a new hex format is supported
            // that starts with "\x". Figure out which format we're
            // dealing with here.
            //
            if (bytes.length < 2 || bytes[0] != '\\' || bytes[1] != 'x')
            {
                return new PGByteaTextInputStream_8_4(bytes,
                    (maxFieldSize > 0 && isColumnTrimmable(columnIndex)) ? maxFieldSize : Long.MAX_VALUE);
            }
            else
            {
                if (bytes.length % 2 == 1)
                    throw getExceptionFactory().createException(
                        GT.tr("Unexpected bytea result size, should be even."),
                        PSQLState.DATA_ERROR);

                return new PGByteaTextInputStream_9_0_1(bytes,
                    (maxFieldSize > 0 && isColumnTrimmable(columnIndex)) ? maxFieldSize : Long.MAX_VALUE);
            }
        }
        else
        {
            return new ByteArrayInputStream(getBytes(columnIndex));
        }
    }
    else
    {
        // In 7.1 handle as BLOBs, so return the LargeObject input stream
        if ( fields[columnIndex - 1].getOID() == Oid.OID)
        {
            LargeObjectManager lom = connection.getLargeObjectAPI();
            LargeObject lob = lom.open(getLong(columnIndex));
            return lob.getInputStream();
        }
    }
    return null;
}

On Thu, 25 Nov 2010 00:53:31 +0200, Александър Шопов <lists@kambanaria.org> wrote:
> At 16:04 -0600 on 24.11.2010 (Wed), Radosław Smogura wrote:
>> I see only two possibilities
>> 1. Decrease fetch size, e.g. to 1.
> Even if I do, bytea is potentially 1GB. Plus peaks in usage can still
> smash the heap.
> So refactoring to BLOBs is perhaps the only way out.
> Will the JDBC driver always present bytea InputStream as
> ByteArrayInputStream? No plans to change that? (even if there are, I
> will still have to refactor meanwhile).
> Perhaps this behaviour should be better communicated to DB schema
> designers.
> It seems to me from the Npgsql2.0.11 readme.txt that reading in chunks
> is provided for .Net.
> Is there need to perhaps make patches for this in the jdbc driver?
> Kind regards:
> al_shopov

--
----------
Radosław Smogura
http://www.softperience.eu
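For anyone curious, here is a minimal illustrative sketch of what the streaming decode of the 9.0 hex format ("\x" followed by hex digits) boils down to. To be clear, this is not the attached PGByteaTextInputStream_9_0_1 class: the class name and the code below are made up for the example, maxFieldSize limiting is omitted, and a real implementation would also override read(byte[], int, int) for bulk reads.

import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch only: decodes hex-format bytea wire text lazily,
// one byte at a time, without building a decoded copy of the whole value.
class HexByteaInputStream extends InputStream
{
    private final byte[] text; // raw text as received, starting with '\' and 'x'
    private int pos = 2;       // skip the leading "\x"

    HexByteaInputStream(byte[] text)
    {
        this.text = text;
    }

    public int read() throws IOException
    {
        if (pos + 1 >= text.length)
            return -1; // end of data (a lone trailing nibble is treated as EOF here)

        int hi = Character.digit(text[pos], 16);
        int lo = Character.digit(text[pos + 1], 16);
        pos += 2;

        if (hi < 0 || lo < 0)
            throw new IOException("Invalid hex digit in bytea value");

        return (hi << 4) | lo;
    }
}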
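On the question in the quoted mail about refactoring to BLOBs: with an oid column, application code can already read in chunks through the driver's LargeObject API, independently of any bytea changes. A rough sketch, assuming a hypothetical table images(id bigint, data oid) and a plain driver Connection (not a pool wrapper); the table, column, and method names are made up for illustration:

import java.io.OutputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LargeObjectCopy
{
    // Streams one large object to 'out' in 64 KB chunks, so the whole
    // value never has to sit in the heap at once.
    public static void copyOut(Connection conn, long id, OutputStream out) throws Exception
    {
        conn.setAutoCommit(false); // large object calls must run inside a transaction

        LargeObjectManager lom = ((PGConnection) conn).getLargeObjectAPI();

        PreparedStatement ps = conn.prepareStatement("SELECT data FROM images WHERE id = ?");
        ps.setLong(1, id);
        ResultSet rs = ps.executeQuery();
        if (rs.next())
        {
            LargeObject lob = lom.open(rs.getLong(1), LargeObjectManager.READ);
            byte[] buf = new byte[64 * 1024];
            int n;
            while ((n = lob.read(buf, 0, buf.length)) > 0)
                out.write(buf, 0, n);
            lob.close();
        }
        rs.close();
        ps.close();
        conn.commit();
    }
}

The 64 KB buffer size is arbitrary; the point is only that the full value is never materialised in memory at once.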