Re: Re: Too many open files (was Re: spinlock problems reported earlier) - Mailing list pgsql-hackers

From Tatsuo Ishii
Subject Re: Re: Too many open files (was Re: spinlock problems reported earlier)
Date
Msg-id 20001224114245Z.t-ishii@sra.co.jp
In response to Re: Re: Too many open files (was Re: spinlock problems reported earlier)  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Re: Too many open files (was Re: spinlock problems reported earlier)
List pgsql-hackers
> Department of Things that Fell Through the Cracks:
> 
> Back in August we had concluded that it is a bad idea to trust
> "sysconf(_SC_OPEN_MAX)" as an indicator of how many files each backend
> can safely open.  FreeBSD was reported to return 4136, and I have
> since noticed that LinuxPPC returns 1024.  Both of those are
> unreasonably large fractions of the actual kernel file table size.
> A few dozen backends opening hundreds of files apiece will fill the
> kernel file table on most Unix platforms.
> 
> I'm not sure why this didn't get dealt with, but I think it's a "must
> fix" kind of problem for 7.1.  The dbadmin has *got* to be able to
> limit Postgres' appetite for open file descriptors.
> 
> I propose we add a new configuration parameter, MAX_FILES_PER_PROCESS,
> with a default value of about 100.  A new backend would set its
> max-files setting to the smaller of this parameter or
> sysconf(_SC_OPEN_MAX).

Seems like a nice idea. We have heard lots of problem reports caused by
running out of the file table.

However, it would be even nicer if it were configurable at runtime
(at postmaster startup), like the -N option. Maybe
MAX_FILES_PER_PROCESS could serve as the hard limit?
--
Tatsuo Ishii

