Thread: Postgres kills machine
Periodically we have queries (can't log in to see which ones) that kill our machine by consuming huge amounts of resource. I thought that I had fixed this by writing a cron job that vacuums all the databases every night, but today our machine froze again.

We run PG on Linux; the machine in question is running Mandrake 10.0. The errors in the syslog are all to do with VM, so I assume that we are running out of RAM. There is 1.5GB of RAM in the box, which should be more than enough for the databases that we run.

I know that I can set process limits in the limits.conf file, but can anyone suggest sensible values to put in there to limit the max memory to 200MB per pg process? I have read that limits.conf only works if PG is linked using PAM, and someone in a thread said that PG doesn't use PAM. Can someone please advise me on this?

Thanks,
Brad
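For reference, a nightly "vacuum every database" job of the kind described is usually a single crontab entry around vacuumdb. The entry below is a sketch only; the schedule, paths, and file name are assumptions, not details from the thread:

    # /etc/cron.d/pg-vacuum -- vacuum and analyze every database each night, as the postgres user
    30 2 * * *  postgres  /usr/bin/vacuumdb --all --analyze --quiet

vacuumdb --all connects to each database in the cluster in turn, so one entry covers them all; as the rest of the thread shows, though, it does not limit how much memory any single query can consume.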
Bradley Kieser <brad@kieser.net> writes:
> Periodically we have queries (can't log in to see which ones) that kill
> our machine by consuming huge amounts of resource.

The first thing you need to do is find out what they are so you can fix
them. Turning on logging of all statements would be a worthwhile
investment.

> The errors in the syslog are all to do with VM, so I assume that we are
> running out of RAM. There is 1.5GB of RAM in the box which should be
> more than enough for the databases that we run.

What have you got sort_mem and max_connections set to?

> I know that I can set process limits in the limits.conf file, but can
> anyone suggest sensible values to put in there to limit the max memory
> to 200MB per pg process?

Personally I'd put "ulimit -d 200000000" into the postmaster startup script.

> I have read that limits.conf only works if PG is linked using PAM and
> someone in a thread said that PG doesn't use PAM. Can someone please
> advise me on this?

PAM has nothing to do with per-process limits.

			regards, tom lane
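Concretely, the logging and the two memory-related settings Tom asks about all live in postgresql.conf. The values below are illustrative only, and the parameter names are those of the 7.4-era releases contemporary with Mandrake 10.0 (sort_mem was later renamed work_mem, and log_statement became an enum taking 'all'):

    # postgresql.conf -- find out which statements are running
    log_statement = true                # log every statement (8.0 and later: log_statement = 'all')
    log_min_duration_statement = 0      # also log each statement's duration, in milliseconds

    # postgresql.conf -- settings that drive worst-case memory use
    sort_mem = 8192                     # kB allowed per sort/hash step, per backend
    max_connections = 100

The point of the question is the arithmetic: sort_mem is granted per sort or hash step, a complex query can run several such steps at once, and every backend gets its own allowance, so something like sort_mem = 65536 (64MB) combined with 100 connections can demand far more than the 1.5GB in the box. Current values can be checked from psql with SHOW sort_mem; and SHOW max_connections;.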
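A sketch of the ulimit approach, with two caveats: the limit has to be set in the shell that actually launches the postmaster, so that the postmaster and every backend it forks inherit it, and bash's ulimit -d counts 1024-byte blocks, so roughly 204800 corresponds to a 200MB cap. The paths and script name below are assumptions:

    # excerpt from the postmaster init script (e.g. /etc/init.d/postgresql)
    # cap each server process's data segment at about 200MB before starting the server
    su -l postgres -c "ulimit -d 204800 && /usr/bin/pg_ctl start -D /var/lib/pgsql/data"

A backend that then tries to allocate past the cap gets an "out of memory" error for that query instead of pushing the whole machine into swap.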