Re: Question regarding indices - Mailing list pgsql-sql
From | Steve |
---|---|
Subject | Re: Question regarding indices |
Date | |
Msg-id | 20100911172332.240150@gmx.net |
In response to | Re: Question regarding indices (Tom Lane <tgl@sss.pgh.pa.us>) |
Responses | Re: Question regarding indices |
List | pgsql-sql |
-------- Original Message --------
> Date: Sat, 11 Sep 2010 11:04:16 -0400
> From: Tom Lane <tgl@sss.pgh.pa.us>
> To: "Steve" <steeeeeveee@gmx.net>
> CC: pgsql-sql@postgresql.org
> Subject: Re: [SQL] Question regarding indices
>
> "Steve" <steeeeeveee@gmx.net> writes:
> > I have a small question about the order of values in a query. Assume I
> > have a table with the following fields:
> >   uid INT,
> >   data BIGINT,
> >   hits INT
> >
> > And a unique index on (uid, data). I use the libpq C API to query data
> > from the table. The query is something like this:
> >   SELECT uid,data,hits FROM mytable WHERE uid=2 AND data IN (2033,2499,590,19,201,659)
> >
> > Would the speed of the query be influenced if I sorted the data? I can
> > imagine that just querying a bunch of bigints would not make a big
> > difference, but what about several thousand values? Would sorting them
> > and sending the SQL query with ordered data influence the speed of the query?
>
> It's unlikely to make enough difference to be worth the trouble.

Making a quick sort is ultra easy in C. Anyway... is there a difference in the speed of the query with pre-sorted values or not? If there is one, then I will go and sort the values. Right now I have a quick sort implemented, but I will probably do a lazy quick sort and then a final insertion sort, because insertion sort is faster than quick sort on an almost-ordered dataset.

Probably I am splitting hairs here about the speed, but I really want to minimize the time PostgreSQL needs to return the data. I personally am happy with the speed when using PostgreSQL, but the application I use has a MySQL driver too, and I got some users claiming that MySQL is faster than PostgreSQL, transfers less data over the wire, etc., and I want to optimize the PostgreSQL part to be on the same level as the MySQL part. So everything that helps to squeeze the last nanosecond out of the PostgreSQL part is welcome. I already switched to binary transmission in order to minimize the data sent over the wire when using PostgreSQL, and I have added a function that does roughly what the MySQL-proprietary "INSERT ... ON DUPLICATE KEY UPDATE" does.

I hate it when users compare apples with oranges, but what can I do? You cannot explain to them that PostgreSQL is different and more standards-compliant, and that the chance of losing data is lower with PostgreSQL than with everything MySQL does (MyISAM tables, etc.). It's pointless to explain it to them. It's like trying to explain to a mole how the sun shines.

So all I want is to explore the available capabilities of PostgreSQL and get the best out of libpq that I possibly can. If you have any recommendations on what I should look at in order to get better speed, then let me know.

> regards, tom lane

// Steve
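
For context on the sorting idea discussed in the message, here is a minimal sketch of sorting the bigint values with the C library's qsort() before building the IN list. The table, columns, and values are the ones quoted in the thread; the buffer size and the main() harness are illustrative assumptions, not the poster's actual code.

```c
/* Minimal sketch: sort the bigint values before building the IN list. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static int cmp_int64(const void *a, const void *b)
{
    int64_t x = *(const int64_t *)a;
    int64_t y = *(const int64_t *)b;
    return (x > y) - (x < y);   /* avoids overflow of x - y */
}

int main(void)
{
    int64_t data[] = {2033, 2499, 590, 19, 201, 659};
    size_t n = sizeof data / sizeof data[0];

    qsort(data, n, sizeof data[0], cmp_int64);

    /* Build the query string from the sorted values. */
    char query[256];
    int off = snprintf(query, sizeof query,
        "SELECT uid,data,hits FROM mytable WHERE uid=2 AND data IN (");
    for (size_t i = 0; i < n; i++)
        off += snprintf(query + off, sizeof query - off,
                        "%lld%s", (long long) data[i], i + 1 < n ? "," : ")");
    puts(query);
    return 0;
}
```

The "lazy quick sort plus final insertion sort" mentioned in the mail would simply replace the qsort() call; the rest of the query construction stays the same.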
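
On the binary-transmission point, a rough sketch of requesting binary results through libpq's PQexecParams() could look like the following. The query and column types (uid and hits as int4, data as bigint) follow the schema quoted in the thread; the connection string and error handling are assumptions for illustration only.

```c
/* Minimal sketch: asking libpq for binary result format via PQexecParams(). */
#include <arpa/inet.h>      /* ntohl() */
#include <stdint.h>
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=mydb");      /* hypothetical DSN */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Last argument = 1 requests binary output for all result columns. */
    PGresult *res = PQexecParams(conn,
        "SELECT uid,data,hits FROM mytable WHERE uid=2 AND data IN (2033,2499,590,19,201,659)",
        0, NULL, NULL, NULL, NULL, 1);

    if (PQresultStatus(res) == PGRES_TUPLES_OK) {
        for (int i = 0; i < PQntuples(res); i++) {
            /* int4 columns arrive as 4 bytes in network byte order. */
            uint32_t uid  = ntohl(*(uint32_t *) PQgetvalue(res, i, 0));
            uint32_t hits = ntohl(*(uint32_t *) PQgetvalue(res, i, 2));

            /* int8 (bigint) arrives as 8 bytes in network byte order. */
            const unsigned char *p = (const unsigned char *) PQgetvalue(res, i, 1);
            uint64_t data = 0;
            for (int b = 0; b < 8; b++)
                data = (data << 8) | p[b];

            printf("uid=%u data=%llu hits=%u\n",
                   uid, (unsigned long long) data, hits);
        }
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

Binary format avoids the integer-to-text-and-back conversion and keeps the wire representation fixed-width, which is the saving the mail alludes to.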
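
For the "INSERT ... ON DUPLICATE KEY UPDATE" workalike, one common client-side shape (not necessarily what the poster implemented) is update-then-insert. The helper below is a sketch only: the column semantics are assumed, and the two-step sequence is not safe against concurrent writers without a retry loop or a server-side function that traps unique_violation.

```c
/* Sketch of an "insert or update" helper over libpq. */
#include <string.h>
#include <libpq-fe.h>

static int upsert_hit(PGconn *conn, const char *uid, const char *data)
{
    const char *params[2] = { uid, data };

    /* Try the update first. */
    PGresult *res = PQexecParams(conn,
        "UPDATE mytable SET hits = hits + 1 WHERE uid = $1::int AND data = $2::bigint",
        2, NULL, params, NULL, NULL, 0);
    if (PQresultStatus(res) != PGRES_COMMAND_OK) {
        PQclear(res);
        return -1;
    }
    int updated = strcmp(PQcmdTuples(res), "0") != 0;  /* rows affected */
    PQclear(res);
    if (updated)
        return 0;

    /* No row matched: insert a fresh one. */
    res = PQexecParams(conn,
        "INSERT INTO mytable (uid, data, hits) VALUES ($1::int, $2::bigint, 1)",
        2, NULL, params, NULL, NULL, 0);
    int ok = (PQresultStatus(res) == PGRES_COMMAND_OK) ? 0 : -1;
    PQclear(res);
    return ok;
}
```

When several clients write the same keys concurrently, the more robust variant is a server-side function that loops, retrying the insert or update while trapping the unique-constraint error.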