Thread: Is there any method to limit resource usage in PG?
Hello:
Sorry to disturb you.
I am encountering a serious problem: there is not enough memory.
My customer reported that when they ran a program, they found the total memory and disk I/O usage both reached the threshold value (80%).
The program is written in Java.
It uses JDBC to pull data out of the database; the query joins several tables together and returns about 3,000,000 records.
Then the program uses JDBC again to write the records row by row, inserting them into another table in the same database.
My first question is:
Currently, my customer cannot update their application, and they also have no budget to buy new hardware.
In fact, my customer wants the program's JDBC-related part to be handled by PG in this way:
PG would keep all of its processes running under a usage threshold.
If resource usage approaches the threshold, PG would slow its processes down to stay under it.
I know that in Oracle there are resource plans etc. to accomplish this.
But in PG I haven't found anything like that.
And I also want to know:
When so many records need to be sent to the client,
will PG use as much memory as it can?
If the total amount of data is larger than shared_buffers, will PG hang, or will it just do its work and communicate with the client over several round trips until all the data is transferred?
Best Regards
On 8/25/2013 11:08 PM, 高健 wrote:
> The program is written in Java.
> It uses JDBC to pull data out of the database; the query joins several
> tables together and returns about 3,000,000 records.
> Then the program uses JDBC again to write the records row by row,
> inserting them into another table in the same database.

Why not have SQL do the insert of the data you're querying? Is Java doing any actual processing on this data that's not practical to do within SQL?

Otherwise, if you need to process the data in the client before returning it to the database, you probably should use a cursor so you can batch the data (see the sketch after this message).

--
john r pierce                                      37N 122W
somewhere on the middle of the left coast
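A minimal JDBC sketch of that server-side approach; the connection string, credentials, table, and column names below are hypothetical stand-ins, not taken from the original program:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ServerSideInsert {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection string and credentials.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydb", "user", "password");
                 Statement st = conn.createStatement()) {
                // The join and the insert both run entirely inside the server;
                // none of the ~3,000,000 rows crosses the wire to the Java client.
                int n = st.executeUpdate(
                    "INSERT INTO newtable (field1, field2) "
                    + "SELECT a.field1, b.field2 FROM t1 a JOIN t2 b ON a.id = b.t1_id");
                System.out.println("inserted " + n + " rows");
            }
        }
    }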
From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-owner@postgresql.org] On Behalf Of 高健
Sent: Monday, August 26, 2013 2:08 AM
To: pgsql-general
Subject: [GENERAL] Is there any method to limit resource usage in PG?

> I know that in Oracle there are resource plans etc. to accomplish this.
> But in PG I haven't found anything like that.

No, PG does not have a feature similar to Oracle's Resource Manager.

Regards,
Igor Neyman
On Sun, Aug 25, 2013 at 11:08 PM, 高健 <luckyjackgao@gmail.com> wrote:
> My customer reported that when they ran a program, they found the total
> memory and disk I/O usage both reached the threshold value (80%).
>
> The program is written in Java.
> It uses JDBC to pull data out of the database; the query joins several
> tables together and returns about 3,000,000 records.
> Then the program uses JDBC again to write the records row by row,
> inserting them into another table in the same database.

What is using the memory, the postgres backend or the client program?

Cheers,
Jeff
Hi:
Now the situation is this:
In the testing environment,
even when my customer changed shared_buffers from 1024MB to 712MB or 512MB,
the total memory consumption was still almost the same.
I think that PG always uses as much resource as it can.
For a query-and-insert action:
first, the data is pulled into the private memory of the backend process that is serving the client.
Then the backend process pushes the data into shared memory, that is, into shared_buffers.
If shared_buffers is not big enough to hold all the result data, then part of the data will be in shared_buffers,
and the rest will remain in the backend process's memory.
Is my understanding right?
Best Regards
2013/8/27 Jeff Janes <jeff.janes@gmail.com>:
> What is using the memory, the postgres backend or the client program?
>
> Cheers,
> Jeff
On 8/27/2013 6:49 PM, 高健 wrote:
> For a query-and-insert action:
> first, the data is pulled into the private memory of the backend
> process that is serving the client.

If you're returning a single result set of 3 million records, yes, you're going to need memory to store that entire result set before you can do anything with it.

Again, if you're just writing this data into another table, why not do it all in SQL?

    INSERT INTO newtable (field1, field2, field3, ...)
        SELECT <your complex 3 million row query here>;

That will do the whole thing without having to move any data into client space. This will be faster and more memory efficient.

Now, if your Java client HAS to process the data it's selecting and do complex stuff with it that you just don't think SQL can/should do, then you'll need to use a CURSOR:

    DECLARE cursorname CURSOR FOR SELECT <your messy query here>;

Then loop on

    FETCH 100 FROM cursorname;

to return the next 100 rows of the cursor. Once you're done with it,

    CLOSE cursorname;

will delete the cursor. I suggest you read the specifics of these cursor commands to fully understand them. (A JDBC sketch of this loop follows below.)

--
john r pierce                                      37N 122W
somewhere on the middle of the left coast
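For reference, a minimal sketch of driving that DECLARE/FETCH/CLOSE loop from JDBC; the connection details are hypothetical and "bigtable" is a stand-in for the real join query:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ExplicitCursorLoop {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
                conn.setAutoCommit(false);  // a cursor only lives inside a transaction
                try (Statement st = conn.createStatement()) {
                    // "bigtable" is a hypothetical stand-in for the real join query.
                    st.execute("DECLARE cursorname CURSOR FOR SELECT * FROM bigtable");
                    while (true) {
                        int rows = 0;
                        try (ResultSet rs = st.executeQuery("FETCH 100 FROM cursorname")) {
                            while (rs.next()) {
                                rows++;
                                // process one row here, e.g. insert it elsewhere
                            }
                        }
                        if (rows == 0) {
                            break;  // cursor exhausted
                        }
                    }
                    st.execute("CLOSE cursorname");
                }
                conn.commit();
            }
        }
    }

With a batch size of 100, only about 100 rows are held in client memory at any moment, so the 3,000,000-row result never has to fit in the Java heap.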
John R Pierce <pierce@hogranch.com> wrote:
> On 8/27/2013 6:49 PM, 高健 wrote:
>> For a query-and-insert action:
>> first, the data is pulled into the private memory of the backend
>> process that is serving the client.
>
> If you're just writing this data into another table, why not do
> it all in SQL?
>
>     INSERT INTO newtable (field1, field2, field3, ...)
>         SELECT <your complex 3 million row query here>;
>
> That will do the whole thing without having to move any data into
> client space. This will be faster and more memory efficient.

Agreed.

> Now, if your Java client HAS to process the data it's selecting and
> do complex stuff with it that you just don't think SQL can/should
> do, then you'll need to use a CURSOR.

Since this is Java, if the data is going to be dragged back to the client and pushed back across to the server, this is probably the right way to go about that:

http://jdbc.postgresql.org/documentation/head/query.html#query-with-cursor

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
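A minimal sketch of the cursor-based fetching that documentation describes; the connection details and query are hypothetical placeholders. The PostgreSQL JDBC driver only fetches through a cursor when autocommit is off, the ResultSet is forward-only (the default), and the fetch size is non-zero:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DriverCursorFetch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
                conn.setAutoCommit(false);  // driver only uses a cursor outside autocommit
                try (Statement st = conn.createStatement()) {
                    st.setFetchSize(100);   // fetch 100 rows per round trip, not all 3,000,000
                    // "bigtable" is a hypothetical stand-in for the real join query.
                    try (ResultSet rs = st.executeQuery("SELECT * FROM bigtable")) {
                        while (rs.next()) {
                            // process one row; only ~100 rows sit in client memory at once
                        }
                    }
                }
                conn.commit();
            }
        }
    }

Compared with the explicit DECLARE/FETCH/CLOSE loop, this keeps the batching inside the driver, so the application code reads like an ordinary query.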