Thread: number of transactions doubling
Hi
I have a strange problem with the number of transactions generated within my application. I use JDBC batching to submit insert statements from a JBoss app server to PostgreSQL 8.1.2.
A batch can have from 100 to 3000 inserts.
I noticed the following:
When I run the following query
select current_timestamp,datname,age(datfrozenxid) from pg_database;
The age column shows, say, 1,500,000,000.
When I run the same query after, say, 10 minutes, it shows 1,500,600,000.
I have issued about 40,000 inserts via jdbc batching in 10 minutes.
It appears that the JDBC driver is generating twice as many transactions as inserts. This behaviour appears consistent with the number of inserts that I generate.
The target table has triggers that route data to appropriate tables. The tables to which data is routed have check constraints that do further inserts. (All of this happens in 1 JDBC transaction.)
I expect JDBC batching to generate fewer transactions, depending on batch size.
The mystery is why I am seeing double the number of transactions being generated.
This is causing transaction id wraparound limits to be approached quicker than I would like.
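With autocommit off, the expectation above corresponds roughly to the driver sending one transaction per batch, along these lines (a sketch only; the table and column names here are hypothetical):

```sql
BEGIN;
INSERT INTO target_table (id, val) VALUES (1, 'a');
INSERT INTO target_table (id, val) VALUES (2, 'b');
-- ... up to 3000 inserts in the same batch ...
COMMIT;  -- one top-level transaction id consumed for the whole batch
```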
Sriram Dandapani wrote:
> The target table has triggers that route data to appropriate tables. The
> tables to which data is routed has check constraints that do further
> inserts. (All of this happens in 1 jdbc transaction)
>
> I expect JDBC Batching to generate fewer transactions depending on
> batch size.

Have you turned autocommit off?

-O
Autocommit is set to off once a connection is obtained from the JBoss pool. It is turned back on when the connection is closed (so that any idle transactions are committed).

I had the following 2 lines in my postgres-ds.xml, which I have commented out:

<!-- <new-connection-sql>select 1</new-connection-sql> -->
<!-- <check-valid-connection-sql>select 1</check-valid-connection-sql> -->

Even after this, I am seeing twice the number of transactions.
FYI, I use the PostgreSQL 8.1-404 JDBC driver.
Sriram Dandapani wrote:
> Autocommit is set to off once a connection is obtained from the jboss
> pool. It is turned back on when it is closed (so that any idle
> transactions are committed).

Don't know what to suggest then -- if autocommit is off then the JDBC driver should be sending BEGIN before the first query & COMMIT or ROLLBACK when you explicitly delimit a transaction. There's nothing special about batches.

You could set loglevel=2 as a JDBC URL parameter to see exactly what the JDBC driver is sending, perhaps. Or turn on logging on the server side, though I'm not sure at exactly which server version the logging of the extended query protocol became useful.

-O
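The loglevel parameter mentioned above goes on the connection URL, roughly like this (host, port, and database name are placeholders):

```
jdbc:postgresql://localhost:5432/mydb?loglevel=2
```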
On Thu, Sep 28, 2006 at 02:51:24PM -0700, Sriram Dandapani wrote:
> I have a strange problem with the number of transactions generated
> within my application. I use jdbc batching to submit insert statements
> from a jboss app server to postgres 8.1.2.
>
> A batch can have from 100 to 3000 inserts.
>
> I noticed the following:
>
> When I run the following query
>
> select current_timestamp,datname,age(datfrozenxid) from pg_database;
>
> The age columns shows say 1,500,000,000
>
> When I run the same query after say 10 minutes, it shows 1,500,600,000
>
> I have issued about 40,000 inserts via jdbc batching in 10 minutes.
>
> It appears that the jdbc driver is generating twice the number of
> transactions as inserts. This behaviour appears consistent with the
> number of inserts that I generate

Uh... you're inserting 40k rows and getting 600k transactions... how are you concluding that the jdbc driver is generating 80k transactions?

> The target table has triggers that route data to appropriate tables. The
> tables to which data is routed has check constraints that do further
> inserts. (All of this happens in 1 jdbc transaction)

Actually, no matter what JDBC is doing, all of that will happen within a single transaction on the database (unless you're using something like dblink from within the triggers). So even if you were issuing insert statements with autocommit on, you'd see at most one transaction per insert.

> I expect JDBC Batching to generate fewer transactions depending on
> batch size.
>
> The mystery is why am I seeing double the number of transactions being
> generated.
>
> This is causing transaction id wraparound limits to be approached
> quicker than I would like.

As for your autocommit bit-flipping, why don't you just issue either a rollback or a commit when you release the connection?
--
Jim Nasby                               jim@nasby.net
EnterpriseDB    http://enterprisedb.com    512.569.9461 (cell)
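The arithmetic behind this observation, using the numbers from the original post:

```
1,500,600,000 - 1,500,000,000 = 600,000 xids consumed in 10 minutes
    600,000 xids / 40,000 inserts = 15 xids per insert
```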
Jim C. Nasby wrote:
> On Thu, Sep 28, 2006 at 02:51:24PM -0700, Sriram Dandapani wrote:
>> The target table has triggers that route data to appropriate tables. The
>> tables to which data is routed has check constraints that do further
>> inserts. (All of this happens in 1 jdbc transaction)
>
> Actually, no matter what JDBC is doing, all of that will happen within a
> single transaction on the database (unless you're using something like
> dblink from within the triggers). So even if you were issuing insert
> statements with autocommit on, you'd see at most one transaction per
> insert.

The triggers might use subtransactions. You get implicit subtransactions if you have an EXCEPTION clause in a plpgsql function. I'm not sure if there are other things that do that as well.

--
Heikki Linnakangas
EnterpriseDB    http://www.enterprisedb.com
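A sketch of how a routing trigger can end up in this situation, assuming a plpgsql trigger function with an EXCEPTION clause (all names here are hypothetical): each invocation runs its body inside an implicit savepoint, and a subtransaction that performs writes is assigned its own xid.

```sql
-- Hypothetical routing trigger; names are illustrative only.
CREATE FUNCTION route_row() RETURNS trigger AS $$
BEGIN
    INSERT INTO child_table VALUES (NEW.id, NEW.val);
    RETURN NULL;   -- row handled by the child; skip insert into parent
EXCEPTION WHEN unique_violation THEN
    -- The EXCEPTION clause makes each call run inside an implicit
    -- subtransaction; on error that subtransaction is rolled back.
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER route_before BEFORE INSERT ON parent_table
    FOR EACH ROW EXECUTE PROCEDURE route_row();
```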
On Sep 29, 2006, at 4:56 AM, Heikki Linnakangas wrote:
> Jim C. Nasby wrote:
>> On Thu, Sep 28, 2006 at 02:51:24PM -0700, Sriram Dandapani wrote:
>>> The target table has triggers that route data to appropriate tables. The
>>> tables to which data is routed has check constraints that do further
>>> inserts. (All of this happens in 1 jdbc transaction)
>>
>> Actually, no matter what JDBC is doing, all of that will happen within a
>> single transaction on the database (unless you're using something like
>> dblink from within the triggers). So even if you were issuing insert
>> statements with autocommit on, you'd see at most one transaction per
>> insert.
>
> The triggers might use subtransactions. You get implicit
> subtransactions if have an EXCEPTION clause in a plpgsql function.
> I'm not sure if there's other things that do that as well.

Ahh, I thought that was handled within cmin/cmax.
--
Jim Nasby                               jim@nasby.net
EnterpriseDB    http://enterprisedb.com    512.569.9461 (cell)