Re: repeated out of shared memory error - not related to max_locks_per_transaction - Mailing list pgsql-admin
From | Fabio Pardi
---|---
Subject | Re: repeated out of shared memory error - not related to max_locks_per_transaction
Date |
Msg-id | 0468ca31-f6cc-dc4a-9ce6-42b4da927b06@portavita.eu
In response to | Re: repeated out of shared memory error - not related to max_locks_per_transaction (MichaelDBA <MichaelDBA@sqlexec.com>)
Responses | R: repeated out of shared memory error - not related to max_locks_per_transaction
List | pgsql-admin
Michael,
I think we are talking about 2 different scenarios.
1) A single operation needs more than work_mem -> it gets spilled to disk, like a big sort (a small sketch follows below). That's what I mentioned.
2) There are many concurrent operations, and one or more of them wants to allocate work_mem, but the memory on the server is exhausted at that point -> in that case you will get 'out of memory'. That's what you are referring to.
Given the description of the problem (RAM and Postgres settings) and the fact that Alfonso says that "there is a lot of free memory", I think it is unlikely that we are in the second situation described above.
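For illustration, a minimal way to see scenario 1 in action; big_table and some_column are hypothetical names and the work_mem value is arbitrary:

```sql
-- Illustrative only: any table large enough to overflow work_mem will do.
SET work_mem = '4MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_table ORDER BY some_column;
-- Look for "Sort Method: external merge  Disk: ...kB" in the plan output:
-- the sort spilled to a temporary file on disk instead of raising an error.
RESET work_mem;
```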
regards,
fabio pardi
Wrong again, Fabio. PostgreSQL is not coded to manage memory usage in the way you think it does with work_mem. Here is a quote from Citus about the dangers of setting work_mem too high:
When you consume more memory than is available on your machine you can start to see out of memory errors within your Postgres logs, or in worse cases the OOM killer can start to randomly kill running processes to free up memory. An out of memory error in Postgres simply errors on the query you're running, whereas the OOM killer in Linux begins killing running processes, which in some cases might even include Postgres itself.
When you see an out of memory error you either want to increase the overall RAM on the machine itself by upgrading to a larger instance, OR you want to decrease the amount of memory that work_mem uses. Yes, you read that right: with out-of-memory errors it's better to decrease work_mem instead of increasing it, since that is the amount of memory that can be consumed by each process, and too many operations are leveraging up to that much memory.
https://www.citusdata.com/blog/2018/06/12/configuring-work-mem-on-postgres/
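For illustration, lowering work_mem can be done cluster-wide or only for the session that really needs a big sort; the values below are made up, not recommendations for this system:

```sql
-- Cluster-wide (written to postgresql.auto.conf, takes effect after reload); value is illustrative.
ALTER SYSTEM SET work_mem = '16MB';
SELECT pg_reload_conf();

-- Or raise it only for the session/query that genuinely needs it:
SET work_mem = '256MB';
-- ... run the memory-hungry query here ...
RESET work_mem;
```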
Regards,
Michael Vitale

Friday, July 20, 2018 9:19 AM
Nope Michael,
If 'stuff' gets spilled to disk, it does not end up in an error. It will silently write a file to disk for the time being and then delete it when your operation is finished.
period.
Depending on your log settings, it might appear in the logs under 'temporary file created..'.
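A sketch of how to make those spills visible; the threshold below is arbitrary:

```sql
-- Log every temporary file larger than 10MB (0 would log all of them).
ALTER SYSTEM SET log_temp_files = '10MB';
SELECT pg_reload_conf();
-- Afterwards the server log will contain lines along the lines of:
--   LOG:  temporary file: path "base/pgsql_tmp/...", size ...
```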
regards,
fabio pardi
On 20/07/18 15:00, MichaelDBA wrote:

Friday, July 20, 2018 9:00 AM
I do not think that is true. Stuff just gets spilled to disk when the work_mem buffers would exceed the work_mem constraint. They are not constrained by what real memory is available, hence the memory error! They will try to get memory even if it is not available, as long as the work_mem threshold is not reached.
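To illustrate that point, work_mem is a per-operation limit rather than a global budget; the numbers below are invented purely for the arithmetic:

```sql
-- Hypothetical settings, for a back-of-the-envelope check:
SHOW max_connections;  -- say 200
SHOW work_mem;         -- say 64MB
-- Each backend may allocate up to work_mem for every sort or hash node in its plan,
-- so 200 backends each running a plan with ~3 such nodes could ask for roughly
-- 200 * 3 * 64MB ≈ 37GB in total, far more than the RAM on many servers,
-- even though no single operation exceeds its own work_mem limit.
```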
Regards,
Michael Vitale
Friday, July 20, 2018 8:47 AM
work_mem cannot be the cause of it, for the simple reason that if the memory needed by your query overflows work_mem, it will spill to disk.
regards,
fabio pardi
On 20/07/18 14:35, MichaelDBA wrote:

Friday, July 20, 2018 8:35 AM
Perhaps your "work_mem" setting is causing the memory problems. Try reducing it to see if that alleviates the problem.
Regards,
Michael Vitale
Friday, July 20, 2018 8:32 AM
I would also look up the definitions of shared buffers and effective cache. If I remember correctly, you can think of shared buffers as how much memory PostgreSQL has to work with in total. Effective cache is an estimate of how much memory is available overall: shared buffers, plus how much memory the OS has available to cache files. So effective cache should be equal to or larger than shared buffers. Effective cache is only used to help with SQL planning.
Double check the documentation.
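As a rough sketch of the usual rules of thumb (not from this thread; verify against the documentation as suggested above), shared_buffers is often set to around 25% of RAM and effective_cache_size, being only a planner hint, to 50-75% of RAM. The figures below assume a hypothetical 16GB machine:

```sql
-- Illustrative values for a 16GB server; adjust for your workload.
ALTER SYSTEM SET shared_buffers = '4GB';        -- memory PostgreSQL manages itself (requires restart)
ALTER SYSTEM SET effective_cache_size = '12GB'; -- planner estimate of shared_buffers + OS file cache
```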
Lance
Sent from my iPad