Re: Postgres configuration for 64 CPUs, 128 GB RAM... - Mailing list pgsql-performance
From:           Gavin M. Roy
Subject:        Re: Postgres configuration for 64 CPUs, 128 GB RAM...
Date:
Msg-id:         af1bce590707201718r7ae94471hc42ccdefa8654ba8@mail.gmail.com
In response to: Postgres configuration for 64 CPUs, 128 GB RAM... ("Marc Mamin" <M.Mamin@intershop.de>)
List:           pgsql-performance
Postgres configuration for 64 CPUs, 128 GB RAM...
Hello,
We have the opportunity to benchmark our application on a large server. I have to prepare the Postgres configuration, and I'd appreciate some comments on it, as I am not experienced with servers of this scale. Moreover, the configuration should be fail-proof, as I won't be able to attend the tests.
Our application (Java + Perl) and Postgres will run on the same server; the application's activity is low while Postgres is processing large transactions.
There is a large gap between our current production server (Linux, 4 GB RAM, 4 CPUs) and the benchmark server; one of the targets of this benchmark is to verify the scalability of our application.
And you have no reason to be envious, as the server doesn't belong to us :-)
Thanks for your comments,
Marc Mamin
Postgres version: 8.2.1
Server Specifications:
----------------------
Sun SPARC Enterprise M8000 Server:
http://www.sun.com/servers/highend/m8000/specs.xml
File system:
http://en.wikipedia.org/wiki/ZFS
Planned configuration:
--------------------------------

# we don't expect more than 150 parallel connections,
# but I suspect a leak in our application that leaves some idle connections open
max_connections = 2000
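To check for the suspected leak, a query along these lines (pg_stat_activity column names as of 8.2, assuming stats_command_string is on, which is the 8.2 default) should show the sessions sitting idle:

-- list idle backends, oldest first
SELECT procpid, usename, backend_start, query_start
FROM pg_stat_activity
WHERE current_query = '<IDLE>'
ORDER BY backend_start;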
ssl = off
# maximum allowed
shared_buffers = 262143
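(For reference: 262143 buffers × 8 kB = roughly 2 GB of shared buffers, so the ceiling here is presumably the kernel shared memory limit rather than Postgres itself; worth double-checking on a 128 GB machine.)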
# on our current best production server with 4 GB RAM (not dedicated to Postgres), work_mem is set to 600 MB
# this limitation is probably the bottleneck for our application, as the files in pgsql_tmp grow up to 15 GB
# during large aggregations (we have a locking mechanism to avoid parallel processing of such transactions)
work_mem = 31457280 # (30 GB)
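Since the big aggregations are serialized by our locking mechanism anyway, an alternative sketch would be to keep a moderate global work_mem and raise it only inside those transactions (work_mem applies per sort/hash operation, so a 30 GB global setting could be risky if several ever run at once; the 2 GB value below is just an illustration):

-- hypothetical: raise work_mem only for the serialized aggregation
BEGIN;
SET LOCAL work_mem = '2GB';  -- reverts automatically at COMMIT/ROLLBACK
-- ... large aggregation here ...
COMMIT;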
# index creation time is also an issue for us; the process locks other large processes too.
# our largest table so far is 13 GB + 11 GB indexes
maintenance_work_mem = 31457280 # (30 GB)
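If the locking during index builds is the main problem, 8.2's new CREATE INDEX CONCURRENTLY might be worth testing; it avoids blocking writers at the cost of a slower build (table and column names here are made up):

-- hypothetical example; cannot run inside a transaction block
CREATE INDEX CONCURRENTLY some_large_table_idx ON some_large_table (some_column);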
# more than the max number of tables + indexes expected during the benchmark
max_fsm_relations = 100000
max_fsm_pages = 1800000
# don't know if I should modify this.
# seems to be sufficient on our production servers
max_stack_depth = 2MB

# vacuum will be done by hand between each test session
autovacuum = off
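A database-wide run between sessions could be as simple as:

VACUUM VERBOSE ANALYZE;

In 8.2 the last lines of the VERBOSE output also report how many FSM page slots are in use and how many are needed, which is a handy check that max_fsm_pages above is large enough.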
# required to analyse the benchmark
log_min_duration_statement = 1000

max_prepared_transactions = 100
# seems to be required to drop schemas/roles containing a large number of objects
max_locks_per_transaction = 128
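(For reference: the shared lock table is sized for max_locks_per_transaction * (max_connections + max_prepared_transactions) objects, i.e. 128 × (2000 + 100) = 268,800 here; that total is the budget a big DROP SCHEMA ... CASCADE draws from.)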
# I use the defaults for the bgwriter as I couldn't find recommendations on those
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round
#bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round
#bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round
#bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round

#WAL
fsync = on
# use default
#wal_sync_method

# we are using 32 on our production system
wal_buffers = 64

# we haven't done any testing with this parameter so far, but it shouldn't be a relevant
# point as our performance focus is on large transactions
commit_delay = 0

#CHECKPOINT
# xlog will be on a separate disk
checkpoint_segments = 256
checkpoint_timeout = 5min
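(With 256 segments, pg_xlog should normally stay below 2 × 256 + 1 = 513 segment files × 16 MB ≈ 8 GB; worth keeping in mind when sizing the separate xlog disk.)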