Thread: Processor speed relative to postgres transactions per second
We have two camps that disagree on whether the speed of the cpu affects the number of transactions per second that postgres can perform.
I am of the opinion that if we throw faster processors at the database machine, we will get better performance.
Just as with faster drives and controllers, a faster processor must give some improvement.
Is there anything to support this, a document or someone's personal experience?
Chris Barnes
On Mar 29, 2010, at 9:42 AM, Chris Barnes wrote:

> We have two camps that disagree on whether the speed of the cpu affects the number of transactions per second that postgres can perform.
>
> I am of the opinion that if we throw faster processors at the database machine, we will get better performance.
>
> Just as with faster drives and controllers, a faster processor must give some improvement.
>
> Is there anything to support this, a document or someone's personal experience?

There will always be a bottleneck. If your query speed is limited by the time it takes for the drives to seek, then you can throw as much CPU at the problem as you like and nothing will change. If your query speed is limited by the time it takes to read data from memory, a faster CPU will only help if it has a faster memory bus. If you're limited by complex or slow functions in the database then a faster CPU is what you need.

For larger databases, IO speed is the bottleneck more often than not. In those cases throwing memory, better disk controllers and faster / more drives at them will improve things. More CPU will not.

Also, the price/speed curve for CPUs is not pretty at the higher end. You can get a lot of RAM or disk for the price difference between the fastest and next fastest CPU for any given system.

Cheers,
  Steve
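(Editor's note: the taxonomy above can be checked directly on a running box. A minimal sketch, assuming a Linux host — vmstat ships with procps, while iostat comes from the optional sysstat package, and column names differ on other platforms:)

```shell
# Sample system-wide stats once per second, five times.
vmstat 1 5
# High "us"+"sy" with low "wa" -> CPU-bound: faster cores may help.
# High "wa" (I/O wait)         -> disk-bound: faster/more drives help.

# Per-device utilization, if sysstat is installed;
# %util near 100 means the drive itself is saturated.
command -v iostat >/dev/null && iostat -dx 1 5 || true
```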
On Mon, Mar 29, 2010 at 11:00 AM, Steve Atkins <steve@blighty.com> wrote:

> For larger databases, IO speed is the bottleneck more often than not. In those cases throwing memory, better disk controllers and faster / more drives at them will improve things. More CPU will not.

We're in the situation where we are CPU bound on a dual 4-core 2.1GHz Opteron, and IO wait is never more than one CPU's worth (12%). That's on the slony source server. The destination servers are even more CPU bound, with little or no IO wait.

The RAID array is a RAID-10 with 12 drives, and a RAID-1 with two for pg_xlog. The RAID-1 pair is running at about 30 megabytes per second written to it continuously. It can handle sequential throughput of about 60 megabytes per second. Of course, if we put more CPU horsepower on that machine (mobo replacement considered) then I'm sure we'd start getting IO bound, and so forth.

> Also, the price/speed curve for CPUs is not pretty at the higher end. You can get a lot of RAM or disk for the price difference between the fastest and next fastest CPU for any given system.

Agreed. The curve really starts to get ugly when you need more than 2 sockets. Dual-socket 6 and 8 core CPUs are now out, and not that expensive. CPUs that can handle being in a 4 to 8 socket machine are two to three times as much for the same basic speed. At that point it's a good idea to consider partitioning your data out in some logical manner across multiple machines.
Recently I ran a set of tests on two systems: a 4-core server with 5 disks (OS + WAL + 3 for DB) on a battery-backed disk controller, and a newer Hyper-threaded design with 4 physical cores turning into 8 virtual ones--but only a single disk and no RAID controller, so I had to turn off its write cache to get reliable database operation. (See http://www.postgresql.org/docs/current/interactive/wal-reliability.html )

When running pgbench with its simple built-in SELECT-only test, on a tiny data set that fits in RAM, I went from a peak of 28336 TPS on the 4-core system to a peak of 58164 TPS on the 8-core one.

On the default write-heavy test, the 4-core server peaked at 4047 TPS. The 8-core one peaked at 94 TPS because that's as fast as its single disk could commit data.

The moral is that a faster processor or more cores only buys you additional speed if enough of your data fits in RAM that the processor speed is the bottleneck. If you're waiting on disks, a faster processor will just spin without any work to do. You can't answer "will I get more transactions per second?" without specifying what your transaction is, and knowing what the current limiter is.

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com  www.2ndQuadrant.us
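(Editor's note: the two pgbench runs described above can be reproduced along these lines. This is a sketch using pgbench's documented flags; the scale factor, client count, and duration here are illustrative guesses, not the values Greg used, and a running PostgreSQL server is assumed:)

```shell
# Build a small test database that fits comfortably in RAM.
createdb pgbench
pgbench -i -s 10 pgbench

# SELECT-only test (-S): exercises CPU and memory, barely touches disk.
pgbench -S -c 8 -T 60 pgbench

# Default TPC-B-like test: every transaction commits, so WAL fsync
# speed (i.e. the disks) sets the ceiling, as in the 94 TPS result.
pgbench -c 8 -T 60 pgbench
```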
On Mon, Mar 29, 2010 at 12:42 PM, Chris Barnes <compuguruchrisbarnes@hotmail.com> wrote:

> We have two camps that disagree on whether the speed of the cpu affects the number of transactions per second that postgres can perform.
>
> I am of the opinion that if we throw faster processors at the database machine, we will get better performance.

which tastes better, a round fruit or an oval fruit? :-). postgres can become i/o bound or cpu bound depending on the application, or the specific things you are doing. if your application is highly latency sensitive, then more cpu power is always nice.

cpu and i/o have completely different cost/performance scaling metrics: cpu is very cheap to scale up to a point (when you hit the limits of x86 at current levels), then becomes extremely expensive. cpu-bound problems tend to degrade relatively gracefully when your limit is hit. i/o is expensive to scale but has a relatively linear relationship between cost and performance. an i/o bottleneck can bring your server to a crawl, and sometimes comes out of nowhere when you nudge the work the db has to do just a hair past your system's ability to cope.

merlin