Thread: RE: Postgresql 7.0 JDBC exceptions - broken connections?
As usual when replying from here, replies prefixed with PM:

--
Peter Mount
Enterprise Support
Maidstone Borough Council
Any views stated are my own, and not those of Maidstone Borough Council.

-----Original Message-----
From: Gunnar Rønning [mailto:gunnar@candleweb.no]
Sent: Thursday, May 25, 2000 12:38 PM
To: pgsql-interfaces@postgresql.org
Subject: [INTERFACES] Postgresql 7.0 JDBC exceptions - broken connections?

Hello,

As I told you in a former mail, I'm trying to migrate an application to PostgreSQL 7.0. The application now seems to be working pretty well, and I have 5 users who have been testing our web application with the PostgreSQL database for the past 24 hours.

The application runs with a connection pool, but after some time some of these connections seem to be broken, i.e. only some of the queries still work. I will change the connection pool code to handle this, but I would like to know if anybody knows why the connections get into an unusable state. Could it be backend crashes or something similar? I'm turning on debugging for the database server to see if I can find anything there, but in any case here is the exception I get:

PM: How long is it before the problem starts? I'm wondering if the problem is because the backend is sitting there for a long period.

    select distinct entity.*,location.loc_id,location.loc_name
    from entity,locationmap,location,entityindex2 as e0
    where locationmap.ent_id=entity.ent_id
      and locationmap.loc_id=location.loc_id
      and e0.ei_word='kjøttbørsen'
      and e0.ent_id=entity.ent_id
      and ENT_STATUS=4
    order by ent_title,location.loc_name,location.loc_id

Unknown Response Type u

PM: Does anyone [on Hackers] know what the u code is for? The fact it's in lower case tells me that the protocol/connection got broken somehow.
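A minimal sketch of the kind of change Gunnar describes making to his pool: validate a connection with a trivial query before handing it out, and replace it if it turns out to be broken. The class name, pool structure, test query, and URL are assumptions for illustration, not the actual application code.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Sketch: validate a pooled connection before reuse, reopen it if broken.
    public class ValidatingPool {
        private final String url;   // e.g. "jdbc:postgresql://host/db?user=...&password=..."

        public ValidatingPool(String url) {
            this.url = url;
        }

        /** Returns a connection that answered a trivial query just now. */
        public Connection checkOut(Connection candidate) throws SQLException {
            if (candidate != null && isUsable(candidate)) {
                return candidate;                // pooled connection still works
            }
            // The pooled connection is unusable (backend gone, protocol out of
            // sync, ...): discard it and open a replacement.
            if (candidate != null) {
                try { candidate.close(); } catch (SQLException ignored) {}
            }
            return DriverManager.getConnection(url);
        }

        private boolean isUsable(Connection c) {
            try {
                Statement st = c.createStatement();
                try {
                    ResultSet rs = st.executeQuery("select 1");  // cheap round trip
                    boolean ok = rs.next();
                    rs.close();
                    return ok;
                } finally {
                    st.close();
                }
            } catch (SQLException broken) {
                return false;                    // any error means: do not reuse
            }
        }
    }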
Peter Mount <petermount@it.maidstone.gov.uk> writes:

> PM: How long is it before the problem starts? I'm wondering if the
> problem is because the backend is sitting there for a long period.

The problem started after the connections had been open for about 16 hours or so, so this could be the problem. However, yesterday I restarted my database with some new options to do more logging and print some debug information, and I haven't seen any of the exceptions I reported yesterday in the 20 hours since the restart.

> Unknown Response Type u
>
> PM: Does anyone [on Hackers] know what the u code is for? The fact it's
> in lower case tells me that the protocol/connection got broken somehow.

I got a lot of these errors, and the response type varied between different characters, so your theory seems plausible.

Regards,

	Gunnar
Peter Mount <petermount@it.maidstone.gov.uk> writes:
> Unknown Response Type u
> PM: Does anyone [on Hackers] know what the u code is for? The fact it's
> in lower case tells me that the protocol/connection got broken somehow.

There is no 'u' message code. Looks to me like the client got out of sync with the backend and is trying to interpret data as the start of a message.

I think that this and the "Tuple received before MetaData" issue could have a common cause, namely running out of memory on the client side and not recovering well. libpq is known to emit its equivalent of "Tuple received before MetaData" when the backend hasn't violated the protocol at all. What happens is that libpq runs out of memory while trying to accumulate a large query result, "recovers" by resetting itself to no-query-active state, and then is surprised when the next message is another tuple. (Obviously this error recovery plan needs work, but no one's got round to it yet.) I wonder whether the JDBC driver has a similar problem, and whether these queries could have been retrieving enough data to trigger it?

Another possibility is that the client app is failing to release query results when done with them, which would eventually lead to an out-of-memory condition even with not-so-large queries.

			regards, tom lane
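To illustrate the desync Tom describes: a client for this protocol reads one type byte from the socket and then the body that belongs to it. The loop below is a sketch under that assumption; the message codes and method names are examples, not the actual JDBC driver source. If the client ever abandons a message body part-way through (for instance after an out-of-memory error), the next "type byte" it reads is really payload data, which is exactly how an error like "Unknown Response Type u" can surface.

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.sql.SQLException;

    // Sketch of a response-dispatch loop; body-reading methods are stubbed out.
    class ResponseReader {
        private final DataInputStream in;

        ResponseReader(DataInputStream in) {
            this.in = in;
        }

        void readOneMessage() throws IOException, SQLException {
            int type = in.read();                  // supposed to be a message code
            switch (type) {
                case 'T': readRowDescription();  break;
                case 'D': readDataRow();         break;
                case 'C': readCommandComplete(); break;
                case 'E': readErrorResponse();   break;
                case 'Z': /* ready for query */  break;
                default:
                    // A lower-case or random byte here usually means the stream
                    // position has slipped, not that the backend sent garbage.
                    throw new SQLException("Unknown Response Type " + (char) type);
            }
        }

        private void readRowDescription() throws IOException { /* ... */ }
        private void readDataRow() throws IOException { /* ... */ }
        private void readCommandComplete() throws IOException { /* ... */ }
        private void readErrorResponse() throws IOException { /* ... */ }
    }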
Tom Lane <tgl@sss.pgh.pa.us> writes:

> I think that this and the "Tuple received before MetaData" issue could
> have a common cause, namely running out of memory on the client side
> and not recovering well. libpq is known to emit its equivalent of
> "Tuple received before MetaData" when the backend hasn't violated the
> protocol at all. What happens is that libpq runs out of memory while
> trying to accumulate a large query result, "recovers" by resetting
> itself to no-query-active state, and then is surprised when the next
> message is another tuple. (Obviously this error recovery plan needs
> work, but no one's got round to it yet.) I wonder whether the JDBC
> driver has a similar problem, and whether these queries could have
> been retrieving enough data to trigger it?

This could be a possible explanation, as some of the queries may indeed retrieve large amounts of data. I have also noticed a couple of "Out of Memory" exceptions that could be related. (These seem to be "temporary" out-of-memory exceptions rather than permanent memory leaks, so I guess they could be caused by queries returning huge amounts of data.)

> Another possibility is that the client app is failing to release
> query results when done with them, which would eventually lead to
> an out-of-memory condition even with not-so-large queries.

I don't think this is the case. I've been running the application through OptimizeIT to profile memory and CPU usage, and I haven't been able to spot any memory leaks in the driver. The quality of the JDBC driver is actually our main reason for migrating our application to PostgreSQL.

Regards,

	Gunnar
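For the case Tom raises of results not being released: the generic JDBC pattern below closes the ResultSet and Statement as soon as the rows have been consumed, so the driver can drop its client-side copy of the result instead of holding every retrieved tuple until garbage collection. This is a standard JDBC idiom, not Gunnar's application code; the method and table parameter are illustrative.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Sketch: release query results promptly once they have been read.
    class QueryExample {
        static int countRows(Connection con, String table) throws SQLException {
            Statement st = con.createStatement();
            try {
                ResultSet rs = st.executeQuery("select count(*) from " + table);
                try {
                    rs.next();
                    return rs.getInt(1);
                } finally {
                    rs.close();     // release the buffered result
                }
            } finally {
                st.close();         // release the statement as well
            }
        }
    }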