Thread: The database is very slow!
I currently have PostgreSQL 7.1 installed on a server with about 700 MB of RAM, and I am having serious speed problems with a database I created. For example, it took almost 12 seconds to run the query "select * from table" directly in PostgreSQL, on a table with 4000 records and 60 fields. The whole application built on this database is very slow as well (some pages take almost 20 seconds to load!).

I verified the indexes and I think they are OK, and I tried to make my queries as narrow as possible (no "select *", only "select field1, field2, ..."). But I still suspect there is a speed problem with the database itself, because it cannot be normal to need 12 seconds for a query on a table with only 4000 records.

Does anybody have an idea?

Thanks,
Krystoffff
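(As a quick sanity check on a query like that, you can ask the server for its plan and time the query from the shell; the table and database names below are only placeholders, and EXPLAIN ANALYZE, if your release has it, also reports actual run times:

    -- in psql: show the plan the optimizer chooses for the slow query
    EXPLAIN SELECT * FROM mytable;

    # from the shell: rough wall-clock timing of the same query
    time psql -d mydb -c "SELECT * FROM mytable;" > /dev/null

If the plan is a plain sequential scan over 4000 rows and it still takes 12 seconds, the query itself is probably not the problem; dead tuples, configuration, or the machine are the likelier suspects.)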
proghome@silesky.com (krystoffff) writes:
> I have many problems of speed with a database I created. For example,
> it took almost 12 sec to run the query "select * from table" directly
> from PostgreSQL, on a table with 4000 records and 60 fields ...

When did you last VACUUM this table?

			regards, tom lane
Thanks for your answers.

I vacuumed the database just before a test, and it didn't change anything. Sorry not to have mentioned it.

Any other suggestions?
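(For what it's worth, a plain VACUUM only reclaims dead rows; on these old releases it is usually worth running it with ANALYZE as well so the planner statistics get refreshed, and VERBOSE shows how much dead space the table was carrying. The table name is a placeholder:

    VACUUM VERBOSE ANALYZE mytable;
)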
Hi Krystoffff,

I haven't read the other suggestions, but I'm running a database where some tables and views have over a million records, without any issues, on similar hardware.

Out of curiosity: have you actually performance-tuned your OS and the postmaster (through postgresql.conf)? If you haven't, I wouldn't be surprised if the database is slow; I think the default install of most Linux distros leaves you with 32 MB of shared memory, which makes your 700 MB of RAM useless. A sketch of the kind of settings I mean follows below.

As far as the 60 fields are concerned, I doubt that would be a problem, although I've never used a table with more than 20. Does anybody out there know whether the number of fields on a table can create performance issues?

Lastly, how complicated are your indexes? If you have indexed every field on that table, that could be an obvious issue. How many fields in the table have foreign key constraints?

Let me know how you go,
Jason

On Wed, 13 Aug 2003 11:22 pm, krystoffff wrote:
> Thanks for your answers.
>
> I vacuumed the database just before a test, and it didn't change
> anything. Sorry not to have mentioned it.
>
> Any other suggestions?
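(A rough illustration of that kind of tuning; the numbers are only placeholders, assume the default 8 kB block size, and the right values depend on what else runs on the box:

    # postgresql.conf -- illustrative starting point, not a recommendation
    shared_buffers = 8192      # number of 8 kB buffers, i.e. about 64 MB of shared memory
    sort_mem = 8192            # per-sort working memory, in kB

    # the kernel must allow a shared memory segment at least that big;
    # on Linux, as root (value in bytes, here about 128 MB):
    echo 134217728 > /proc/sys/kernel/shmmax

After changing shared_buffers the postmaster has to be restarted before the new segment size takes effect.)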
> postmaster (through postgresql.conf)? If you haven't, I wouldn't be
> surprised if the database is slow; I think the default install of most
> Linux distros leaves you with 32 MB of shared memory, which makes your
> 700 MB of RAM useless. As far as the 60 fields are concerned, I doubt
> that would be a problem, although I've never used a table with more
> than 20. Does anybody out there know whether the number of fields on a
> table can create performance issues?

I'm having performance issues too: high load averages on all postmaster processes. I performed two tests:

1) Lowered the number of fields/columns from 273 to 60. No effect.
2) Changed from adding a new record/row once per second to once every other second. The load average dropped 30-40%.

I plan on trying to raise the shared memory from 32 MB to 256 MB today.

--Chris
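(One way to do that on Linux, as a sketch; the size is just an example, and the change also needs to be made permanent through the distro's sysctl or init mechanism:

    # as root: allow shared memory segments up to 256 MB (value in bytes)
    sysctl -w kernel.shmmax=268435456

    # check what shared memory segments the postmaster is actually using
    ipcs -m

Raising the kernel limit alone does nothing by itself; PostgreSQL only uses more memory once shared_buffers in postgresql.conf is raised to match and the postmaster is restarted.)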