Re: Connections dropping while using Postgres backend DB with Ejabberd - Mailing list pgsql-general

From Dipanjan Ganguly
Subject Re: Connections dropping while using Postgres backend DB with Ejabberd
Date
Msg-id CAFpiWLSOejUr=dyE5GWbdADwjrCqvm_YSmeTdcr3-G_F7nzC9Q@mail.gmail.com
In response to Re: Connections dropping while using Postgres backend DB with Ejabberd  (Michael Lewis <mlewis@entrata.com>)
List pgsql-general
Thanks Michael for the recommendation and clarification.

Will try with 32 MB on my next run.

BR,
Dipanjan

On Tue, Feb 25, 2020 at 10:51 PM Michael Lewis <mlewis@entrata.com> wrote:
work_mem can be used many times per connection, since it applies per sort, hash, or other operation, and as mentioned it can be multiplied if the query is handled by parallel workers. I am guessing the server has 16GB memory total given shared_buffers and effective_cache_size, and a more reasonable work_mem setting might be on the order of 32-64MB.
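As a sketch of applying that suggestion (the 32MB value is the low end of the range above; assumes you have superuser access and want the change cluster-wide):

```sql
-- Persist the suggested value and reload the config (PostgreSQL 9.4+).
ALTER SYSTEM SET work_mem = '32MB';
SELECT pg_reload_conf();

-- Or try it per session first, to test the effect on a single query:
SET work_mem = '64MB';
```
Alternatively, edit `work_mem = 32MB` directly in postgresql.conf and reload.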

Depending on the type of work being done and how quickly the application releases the db connection once it is done, I would expect max_connections on the order of 4-20x the number of cores. If more simultaneous users need to be serviced, a connection pooler like pgbouncer or pgpool will allow those connections to be re-used quickly.
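A hypothetical pgbouncer.ini fragment along those lines (the host, database name, and sizes are illustrative, not from this thread; transaction pooling lets many client connections share a small server-side pool):

```ini
; Illustrative pgbouncer config -- adjust names and sizes to your setup.
[databases]
ejabberd = host=127.0.0.1 port=5432 dbname=ejabberd

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; Connections are handed back to the pool at transaction end.
pool_mode = transaction
; Roughly the 4-20x cores rule of thumb above, e.g. 20 on a 4-core box.
default_pool_size = 20
max_client_conn = 500
```
The application (ejabberd) would then point at port 6432 instead of 5432.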

These numbers are generalizations based on my experience. Others with more experience may have different configurations to recommend.
