Thread: Out of memory
I’ve been getting database out of memory failures with some queries which deal with a reasonable amount of data.
I was wondering what I should be looking at to stop this from happening.
The typical messages I've been getting look like this: http://pastebin.com/Jxfu3nYm
The OS is:
Linux TSTLHAPP01 2.6.32-29-server #58-Ubuntu SMP Fri Feb 11 21:06:51 UTC 2011 x86_64 GNU/Linux.
It's running on VMware and has 2 CPUs and 8GB of RAM. This VM is dedicated to PostgreSQL. The main OS parameters I have tuned are:
vm.swappiness=0
vm.overcommit_memory=2
kernel.shmmax = 4196769792
kernel.shmall = 1024602
And the PostgreSQL version is:
PostgreSQL 9.0.3 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 64-bit.
The main postgresql.conf parameters I've tuned are:
shared_buffers = 2048MB
maintenance_work_mem = 512MB
work_mem = 200MB
wal_buffers = 16MB
effective_cache_size = 4094MB
I have also tried lowering shared_buffers to 1GB, but it still ran out of memory.
Cheers,
Jeremy

From: Scott Marlowe <scott.marlowe@gmail.com>

On Thu, Mar 24, 2011 at 9:23 PM, Jeremy Palmer <JPalmer@linz.govt.nz> wrote:
> work_mem = 200MB

That's a really big work_mem. I have mainline db servers with 128G of
RAM that have work_mem set to 16M, and that is still considered a
little high in my book. If you drop work_mem down to 1MB, does the out
of memory error go away? work_mem is how much memory EACH sort can use
on its own; if you have a plpgsql procedure that keeps running query
after query, it could use a LOT of memory really fast.
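
The multiplier effect shows up even inside a single plan: each Sort or
Hash node gets its own work_mem budget, and with vm.overcommit_memory=2
the kernel refuses allocations once the commit limit is reached. A
minimal sketch of such a query (table and column names are hypothetical):

SELECT c.region, sum(o.total) AS region_total
FROM customers c
JOIN orders o ON o.customer_id = c.id   -- Hash: up to one work_mem
GROUP BY c.region                       -- HashAggregate: up to another
ORDER BY region_total DESC;             -- Sort: up to another

At work_mem = 200MB this one query could try to claim roughly 600MB on
top of the 2GB of shared_buffers, and a plpgsql function running several
such queries compounds the problem.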

From: Jeremy Palmer <JPalmer@linz.govt.nz>

Hi Scott,

It was the work_mem that was set too high. I reduced it to 32MB and the
function executed.

Just so I understand this: every time a sort is performed within a
function, the sort memory is allocated and then not released until the
function completes, rather than being deallocated after each sort
operation has finished?

Thanks,
Jeremy
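
One way to keep both behaviours, sketched below with a hypothetical
function name, is to leave the server-wide work_mem small and raise it
only inside the transaction that runs the heavy function. SET LOCAL
reverts automatically at COMMIT or ROLLBACK:

BEGIN;
SET LOCAL work_mem = '200MB';      -- applies to this transaction only
SELECT build_warehouse_tables();   -- hypothetical heavy function
COMMIT;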

From: pasman

Hi. Your idea is cool; I think this feature ought to be added to the
TODO list: sorted rows should be materialized when memory is exhausted,
and the memory reused.

2011/3/26, Jeremy Palmer <JPalmer@linz.govt.nz>:
> Just so I understand this: every time a sort is performed within a
> function, the sort memory is allocated and then not released until
> the function completes, rather than being deallocated after each
> sort operation has finished?

--
------------
pasman
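
For what it's worth, a single sort that outgrows work_mem already
spills to disk rather than failing, which is easy to demonstrate with a
deliberately tiny setting (self-contained sketch; the reported numbers
are illustrative):

SET work_mem = '64kB';
EXPLAIN ANALYZE SELECT g FROM generate_series(1, 100000) AS g ORDER BY g;
-- Sort Method:  external merge  Disk: 1392kB

The failures in this thread came from many allocations of up to
work_mem each, not from any single sort exceeding it.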

From: Jeremy Palmer <JPalmer@linz.govt.nz>

I'm wondering if there is a way to estimate the total amount of work
memory that will be used by a single query (or, more specifically, by a
plpgsql function that runs a series of queries). The database I'm
setting up is a data warehouse, which typically has only one query
running at any given time.

Thanks,
Jeremy
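
EXPLAIN ANALYZE gives a per-query view: each Sort node reports how much
memory it actually used (or how much disk it spilled to), so running the
function's individual queries through it and summing those figures gives
a rough estimate. A sketch against a hypothetical table:

EXPLAIN ANALYZE
SELECT customer_id, sum(total) AS customer_total
FROM orders
GROUP BY customer_id
ORDER BY customer_total DESC;

-- Output lines to look for (values illustrative):
--   Sort Method:  quicksort  Memory: 1824kB        -- stayed within work_mem
--   Sort Method:  external merge  Disk: 110720kB   -- spilled to disk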