Thread: too slow
I recently migrated from MySQL. The database size in MySQL was 1.4GB (it is a static database). It generated a dump file (.sql) of size 8GB, and it took 2 days to import the whole thing into Postgres. After all that, the response from Postgres is a disaster. It took 40 seconds to run

    select count(logrecno) from sf10001;

which returned a value of 197569, and it took forever to display the table. How do I optimize the database so that I can expect faster access to the data?

Each table has 70 columns x 197569 rows (static data), and I have 40 such tables. Everything is static.

System configuration: P4 2.8GHz, 512MB RAM, OS: XP, Postgres version: 8.0

Thanks a million in advance,
shashi.

--
Shashi Kiran Reddy. Gireddy,
Graduate Assistant, CBER, University of Alabama.
http://www.cs.ua.edu/shashi
Home: 205-752-5137  Cell: 205-657-1438
On Tue, 8 Feb 2005 11:51:22 -0600, Shashi Gireddy <gireddy@gmail.com> wrote:
> I recently migrated from MySQL. The database size in MySQL was 1.4GB
> (it is a static database). It generated a dump file (.sql) of size
> 8GB, and it took 2 days to import the whole thing into Postgres. After all
> that, the response from Postgres is a disaster. It took 40 seconds to run a
> select count(logrecno) from sf10001; which returned a value of 197569,
> and it took forever to display the table. How do I optimize the
> database so that I can expect faster access to the data?
>
> Each table has 70 columns x 197569 rows (static data), and I have 40
> such tables. Everything is static.
>
> System configuration: P4 2.8GHz, 512MB RAM, OS: XP, Postgres version: 8.0

A couple of things...

1. Did you run VACUUM ANALYZE on the database after importing?

2. Did you use COPY or INSERT to get the data into the database? There's
an almost 100x difference.

3. What does the schema look like? count(foo) is an expensive operation on
PostgreSQL (and Oracle) because of the multi-version system, so it's
something you should avoid if you can (see the sketch below).

Other than that, I've found PostgreSQL to be much faster than MySQL when
under load and when multiple people are involved, especially if there's a
writer anywhere.

Chris
--
| Christopher Petrilli
| petrilli@gmail.com
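[To make points 1-3 concrete, here is a rough sketch of a bulk load and the
post-load maintenance on 8.0. The file path is a placeholder, not something
from the original post; the table name sf10001 is taken from the thread.]

    -- Bulk load with COPY instead of row-by-row INSERTs
    -- (the path is hypothetical; point it at your exported data file)
    COPY sf10001 FROM 'C:/dumps/sf10001.csv' WITH CSV;

    -- Refresh planner statistics and reclaim dead space after the load
    VACUUM ANALYZE sf10001;

    -- If an approximate row count is enough, the planner's stored estimate
    -- avoids the full table scan that count(*) needs under MVCC
    SELECT reltuples FROM pg_class WHERE relname = 'sf10001';

COPY loads the table in a single command and skips the per-statement
overhead of millions of INSERTs, which is likely where most of the
two-day import time went.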
Shashi Gireddy wrote:
> I recently migrated from MySQL. The database size in MySQL was 1.4GB
> (it is a static database). It generated a dump file (.sql) of size
> 8GB, and it took 2 days to import the whole thing into Postgres.

Well, that sounds like you don't have your postgresql.conf configured to
properly utilize your system (example settings are sketched below).

> After all
> that, the response from Postgres is a disaster. It took 40 seconds to run a
> select count(logrecno) from sf10001;

Are you running a RAID?

> which returned a value of 197569,
> and it took forever to display the table. How do I optimize the
> database so that I can expect faster access to the data?

Have you analyzed?

> Each table has 70 columns x 197569 rows (static data), and I have 40
> such tables. Everything is static.

Sincerely,

Joshua D. Drake
Command Prompt, Inc.
503-667-4564

> System configuration: P4 2.8GHz, 512MB RAM, OS: XP, Postgres version: 8.0
>
> Thanks a million in advance,
> shashi.

--
Command Prompt, Inc., your source for PostgreSQL replication, professional
support, programming, managed services, shared and dedicated hosting. Home
of the Open Source Projects plPHP, plPerlNG, pgManage, and pgPHPtoolkit.
Contact us now at: +1-503-667-4564 - http://www.commandprompt.com
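[As a rough illustration of the postgresql.conf tuning being suggested,
here is a sketch for a machine with about 512MB of RAM running 8.0. The
numbers are assumptions chosen for illustration, not settings taken from
the poster's system; on 8.0, shared_buffers and effective_cache_size are
counted in 8KB pages, while work_mem and maintenance_work_mem are in KB.]

    # postgresql.conf sketch for ~512MB RAM on 8.0 (values are illustrative)
    shared_buffers = 10000          # 8KB pages, roughly 80MB
    work_mem = 8192                 # KB per sort/hash; keep modest with little RAM
    maintenance_work_mem = 65536    # KB; speeds up VACUUM and index builds
    effective_cache_size = 32768    # 8KB pages, roughly 256MB of OS file cache
    checkpoint_segments = 10        # fewer checkpoint pauses during bulk loads

Changing shared_buffers requires a server restart; the other settings take
effect on reload. Treat these as a starting point to experiment from, not
a recommendation tuned to this workload.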