Thread: Performance Question - Table Row Size
Hi,

I am designing my database and I was wondering whether my table row size affects the performance of querying it. Please note that the table is designed to hold a high volume of records, and I do not plan to do (select *) to retrieve them. That is, I plan to query only a few of those fields at a given time, but each row contains significantly more data that is not being queried at the time.

Thanks,
Mike
On 7/9/07, Mike <akiany@gmail.com> wrote:
> I am designing my database and I was wondering whether my table row
> size affects the performance of querying it. Please note that the
> table is designed to hold a high volume of records, and I do not plan
> to do (select *) to retrieve them. That is, I plan to query only a few
> of those fields at a given time, but each row contains significantly
> more data that is not being queried at the time.

Obvious observation: since PostgreSQL's unit of data storage is the page, a select -- even on a single attribute -- results in entire pages being loaded into memory and then read. Since the cache (PostgreSQL's shared buffers plus the OS file system cache) holds pages, not individual attributes, more data per tuple (row) means fewer tuples fit in the cache. The same goes for the CPU cache: reading just a few attributes from each tuple (maybe even just a few from each page) forces a lot of data into the cache that is never used.

In general, you might be better off normalizing your table, if possible, or partitioning it into subtables -- see the sketch below. But these are the broad strokes: how many columns are we talking about exactly, and of what data types?

Alexander.
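[For illustration, a minimal sketch of the split Alexander suggests; the table and column names here are hypothetical, not from the thread. The frequently queried attributes live in a narrow table, and the bulky, rarely read payload moves to a companion table joined by key:]

    -- Narrow "hot" table: small rows, so many tuples fit per 8KB page.
    CREATE TABLE measurements (
        id        bigserial PRIMARY KEY,
        recorded  timestamptz NOT NULL,
        sensor_id integer NOT NULL,
        reading   double precision
    );

    -- Companion "cold" table: the wide payload, read only on demand.
    CREATE TABLE measurement_details (
        id      bigint PRIMARY KEY REFERENCES measurements(id),
        payload bytea
    );

    -- Frequent queries scan only the narrow table:
    SELECT recorded, reading FROM measurements WHERE sensor_id = 42;

    -- The payload is joined in only when it is actually needed:
    SELECT m.recorded, d.payload
      FROM measurements m
      JOIN measurement_details d ON d.id = m.id
     WHERE m.id = 12345;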
"Mike" <akiany@gmail.com> writes: > I am designing my database and I was wondering whether my table row > size effects the performance of querying my table. yes If your table is large and you're reading all the rows then you'll be limited by the i/o rate. If your rows are twice as big it will take twice as much i/o to read and it will take twice as long. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com
I see. Thank you for the elaborate response. I have a clearer idea of what is going on now.

In designing my application I was thinking of storing pieces of my data as serialized Python data structures in a binary field (no more than 15KB each), while a friend was arguing I should store the data in other tables and relate the tables together. He was arguing that storing binary data in a table slows down other queries, even when that column is not being queried.

Thanks again,
Mike
Mike <akiany@gmail.com> writes:
> I see. Thank you for the elaborate response. I have a clearer idea of
> what is going on now.
>
> In designing my application I was thinking of storing pieces of my
> data as serialized Python data structures in a binary field (no more
> than 15KB each), while a friend was arguing I should store the data in
> other tables and relate the tables together. He was arguing that
> storing binary data in a table slows down other queries, even when
> that column is not being queried.

A 15KB column value is going to be stored out-of-line in the TOAST table anyway, so your table tuple will just contain a pointer to it, which isn't very big. If you don't use that column in a given query, its effect will be minimal.

-Doug
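[Doug's point can be checked directly. A sketch with a hypothetical table; SET STORAGE EXTERNAL forces the value out of line uncompressed so the example is deterministic, since by default TOAST tries compression first and a highly compressible value may stay inline:]

    CREATE TABLE blobs (id bigserial PRIMARY KEY, data bytea);
    ALTER TABLE blobs ALTER COLUMN data SET STORAGE EXTERNAL;

    -- A 15KB value: 15000 bytes of 0x61, built via decode():
    INSERT INTO blobs (data) VALUES (decode(repeat('61', 15000), 'hex'));

    -- The heap row holds only a small TOAST pointer; the payload lives
    -- in the TOAST relation (pg_total_relation_size also counts indexes):
    SELECT pg_relation_size('blobs')       AS heap_bytes,
           pg_total_relation_size('blobs') AS total_bytes;

    -- A query that skips "data" never touches the TOAST table:
    SELECT id FROM blobs;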