Re: MySQL or Postgres ? - Mailing list pgsql-general
From:    antti@exadata.fi (Antti Halonen)
Subject: Re: MySQL or Postgres ?
Msg-id:  74d8e7dd.0208010839.3ef505d9@posting.google.com
List:    pgsql-general
Fabiàn R. Breschi <fabian.breschi@acm.org> wrote in message news:<3D48185C.B6663ED7@acm.org>...
> Hi there,
>
> If you'd have to suggest which type of database one might use, how would
> you reflect the differences between them to show up pros and cons ?

Some thoughts I posted last night to another group under basically the
same subject:

MySQL has pretty robust replication and multiple ways to handle backups.
You now even get hot backup for MySQL if you run on InnoDB tables
(www.innodb.com), which you probably should, as InnoDB offers
transactions, row-level locking and a rough foreign key implementation.
Hot backup means that you can back up your live database on the fly.

AFAIK, Postgres doesn't have replication. The Great Bridge gang was
working on something, but I don't think it ever happened. For basic
backups both of them should be fine. Note that with Postgres, if you get
lots of updates you have to run VACUUM to clean up the dead rows left
behind, and this basically halts the db.

In some situations MySQL is slower, like on some multi-join queries, and
particularly when running on MyISAM tables in situations where selects
and updates are coming in on the same table in a steady flow. This is
because MyISAM handles locking at the _table_ level, which is not that
nice and makes MySQL not very scalable in those situations. Then again,
if you use InnoDB tables instead you have no such problem, because of the
row-level locking.

Personally, I have not used Postgres in really high-traffic situations,
but in some situations it simply is better because of its feature set,
data integrity specifically being one of them.

In many cases the most important thing is INTEGRITY at the server level;
performance comes next. And no matter what some MySQL idealists say, in
some cases you simply cannot implement integrity checks at the client
level (like making your client do, say, foreign key checks). What if you
have many different types of client applications modifying the same data?
What if you have to use some GUI client where nothing can be implemented?
Whatever. Besides, that's inherently wrong thinking anyway.

It all boils down to what is important to _you_. Most database setups are
unique, so making general comparisons is pointless. Do you need foreign
keys? Do you need sub-selects? No? Stored procedures? Do you need speed
on bulk inserts? Speed on certain selects? If you are running a website
and clients are doing the same queries most of the time, then maybe the
query cache might help?

Bottom line:

MySQL
- Fast & reliable; has proven this in real-world situations
- Capable of handling big tables efficiently, no matter what some whiners
  say. I've seen a 200 GB _single_ table, and many warehousers run with
  tables of tens of gigabytes without any problems
- Lacking many major features: stored procedures, foreign keys,
  sub-selects...
- Some neat features like fulltext indexing, replication, query cache...
- A library version of the server! You can actually link it inside your
  app (no admin worries, no client-server overhead)

Postgres
- Tons of features
- Even more features
- And then some...
- Seems to be stable and to scale nicely
- More of that Oracle-like industry db feeling :)
- If you want to extend / modify the server code, it's very well
  commented = easy to read :)
- No replication, no fulltext index, no query cache

Hopefully this is of some help!

Best,

Antti Halonen

# Exadata # Database & Network Consulting
# http://www.exadata.fi
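
P.S. To make the server-level integrity point a bit more concrete, here
is a rough, untested sketch of what a declared foreign key looks like in
Postgres (the table and column names are just made up for illustration).
Once the constraint is declared, the server refuses bad rows no matter
which client application sends them:

    CREATE TABLE customers (
        id    integer PRIMARY KEY,
        name  text
    );

    CREATE TABLE orders (
        id          integer PRIMARY KEY,
        customer_id integer NOT NULL REFERENCES customers (id)
    );

    -- Any client, GUI or otherwise, gets an error here,
    -- because customer 999 does not exist:
    INSERT INTO orders (id, customer_id) VALUES (1, 999);

The check lives in the server, so it holds even for those GUI clients
where you can't implement anything yourself.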
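
Getting something similar in MySQL means using InnoDB tables, which is
just a matter of the table type. Again a rough sketch with made-up names;
it assumes a matching InnoDB customers table already exists, and keep in
mind the foreign key support there is still rough, as I said above:

    CREATE TABLE orders (
        id          INT NOT NULL PRIMARY KEY,
        customer_id INT NOT NULL,
        INDEX (customer_id),    -- InnoDB wants an index on the FK column
        FOREIGN KEY (customer_id) REFERENCES customers (id)
    ) TYPE = InnoDB;

With the table type set like this you also get the row-level locking, so
selects and updates on the same table don't queue up behind one big table
lock like they do with MyISAM.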
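
And the Postgres vacuum I mentioned is just a command you run every now
and then, something like:

    -- Cleans out the dead row versions left behind by UPDATEs and
    -- DELETEs, and refreshes the planner statistics:
    VACUUM ANALYZE;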