Re: SSL over Unix-domain sockets - Mailing list pgsql-hackers
From: Tom Lane
Subject: Re: SSL over Unix-domain sockets
Msg-id: 7221.1200349254@sss.pgh.pa.us
In response to: Re: SSL over Unix-domain sockets (Peter Eisentraut <peter_e@gmx.net>)
Responses: Re: SSL over Unix-domain sockets; Re: SSL over Unix-domain sockets
List: pgsql-hackers
Peter Eisentraut <peter_e@gmx.net> writes:
> It has been reported that the data transmission overhead is much less
> than the connection establishing overhead, which is measured here.
> But this is certainly not an encouraging measurement, if we want to
> put this close to the default path of use.

I did some more experiments to confirm Peter's results.  My test case
for measuring connection overhead is

	pgbench -c 1 -t 1000 -S -n -C bench

(ie, single client, SELECT-only transaction, connecting again for each
transaction).  This is marginally more realistic than Peter's test since
the client executes a SQL command per connection.  I get

$ PGSSLMODE=prefer time pgbench -c 1 -t 1000 -S -n -C bench
transaction type: SELECT only
scaling factor: 10
number of clients: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
tps = 33.078772 (including connections establishing)
tps = 33.078772 (excluding connections establishing)
10.45user 0.68system 0:30.26elapsed 36%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+48465minor)pagefaults 0swaps

$ PGSSLMODE=disable time pgbench -c 1 -t 1000 -S -n -C bench
transaction type: SELECT only
scaling factor: 10
number of clients: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
tps = 156.237184 (including connections establishing)
tps = 156.237208 (excluding connections establishing)
0.20user 0.18system 0:06.41elapsed 6%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+2500minor)pagefaults 0swaps

$ PGSSLMODE=prefer time pgbench -c 1 -t 1000 -S -n -C -h localhost bench
transaction type: SELECT only
scaling factor: 10
number of clients: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
tps = 32.320773 (including connections establishing)
tps = 32.320774 (excluding connections establishing)
10.54user 1.01system 0:30.97elapsed 37%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs
(0major+49807minor)pagefaults 0swaps

$ PGSSLMODE=disable time pgbench -c 1 -t 1000 -S -n -C -h localhost bench
transaction type: SELECT only
scaling factor: 10
number of clients: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
tps = 144.859620 (including connections establishing)
tps = 144.859641 (excluding connections establishing)
0.32user 0.62system 0:06.91elapsed 13%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+4512minor)pagefaults 0swaps

I also did some tests to measure the encryption overhead for bulk data,
in the form of pg_dumping a medium-size table (which is in fact just the
data from the regression test's tenk1 table, repeated 128 times):

[tgl@rh2 ~]$ PGSSLMODE=prefer time pg_dump -t foo regression | wc
2.71user 0.36system 0:25.09elapsed 12%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+1093minor)pagefaults 0swaps
1280054 20480136 85863449

[tgl@rh2 ~]$ PGSSLMODE=disable time pg_dump -t foo regression | wc
0.64user 0.30system 0:09.63elapsed 9%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+793minor)pagefaults 0swaps
1280054 20480136 85863449

[tgl@rh2 ~]$ PGSSLMODE=prefer time pg_dump -t foo -h localhost regression | wc
3.06user 0.45system 0:25.82elapsed 13%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+1105minor)pagefaults 0swaps
1280054 20480136 85863449

[tgl@rh2 ~]$ PGSSLMODE=disable time pg_dump -t foo -h localhost regression | wc
0.66user 0.42system 0:09.91elapsed 10%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+806minor)pagefaults 0swaps
1280054 20480136 85863449

Note that these times are for SSL enabled, but without any root.crt
files, so no actual authentication is happening --- I'm not sure how
much more connection-time overhead that would incur.  Presumably the
bulk transfer rate wouldn't change, though.  All these numbers are
stable to within a percent or three over repeated trials.
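As a back-of-envelope check on the figures above (assuming the quoted tps, elapsed-time, and byte counts; this arithmetic is mine, not part of the original tests), the per-connection and bulk-transfer costs implied by the Unix-socket runs can be computed like so:

```shell
# Back-of-envelope arithmetic from the reported figures:
# 1/tps gives seconds per connect+query cycle; the pg_dump runs moved
# 85863449 bytes in 25.09 s (SSL) vs 9.63 s (no SSL) over the socket.
awk 'BEGIN {
  ssl = 1/33.078772; plain = 1/156.237184;   # Unix-socket pgbench runs
  printf "connect: %.1f ms SSL vs %.1f ms plain (%.1f ms SSL overhead)\n",
         ssl*1000, plain*1000, (ssl-plain)*1000
  bytes = 85863449                           # third wc column above
  printf "bulk: %.1f MB/s SSL vs %.1f MB/s plain (%.1fx slowdown)\n",
         bytes/25.09/1e6, bytes/9.63/1e6, 25.09/9.63
}'
# -> connect: 30.2 ms SSL vs 6.4 ms plain (23.8 ms SSL overhead)
# -> bulk: 3.4 MB/s SSL vs 8.9 MB/s plain (2.6x slowdown)
```

So almost all of the SSL cost in the pgbench runs is per-connection setup, which matches Peter's observation that transmission overhead is much smaller than connection-establishment overhead.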
Conclusions:

* SSL, even without real authentication, is *way* too expensive to
  enable by default.

* The extra cost of going across a local TCP connection is measurable,
  but it's insignificant compared to the cost of turning on SSL.
  (This is on a Fedora 8 kernel, BTW ... that result might vary on
  other platforms.)

So you could make a pretty good case that the answer for DBAs who want
to prevent spoofing is to disable socket connections in pg_hba.conf and
force even local connections to come through "hostssl" connections.

If we do want to apply Peter's patch, I think it needs to be extended
so that the default behavior on sockets is the same as before, ie, no
SSL.  This could be done by giving libpq an additional connection
parameter, say "socketsslmode", having the same alternatives as sslmode
but defaulting to "allow" instead of "prefer".

			regards, tom lane
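The anti-spoofing setup suggested above might look something like this in pg_hba.conf (a sketch only; the auth method and address forms are illustrative, and "hostssl" lines require ssl = on in postgresql.conf):

```
# Sketch: omit all "local" (Unix-socket) lines so socket connections
# are refused; local clients must instead connect via TCP with SSL.
# TYPE    DATABASE  USER  ADDRESS       METHOD
hostssl   all       all   127.0.0.1/32  md5
hostssl   all       all   ::1/128       md5
```

With no "local" entries, a client connecting over the socket is rejected at authentication time, and the "hostssl" entries ensure loopback TCP connections are encrypted, at the connection-setup cost measured above.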