Re: What popular, large commercial websites run - Mailing list pgsql-general
From: Shaun Thomas
Subject: Re: What popular, large commercial websites run
Date:
Msg-id: Pine.LNX.4.44.0205011417570.16874-100000@hamster.lee.net
In response to: Re: What popular, large commercial websites run (pgsql-gen Newsgroup (@Basebeans.com) <pgsql-gen@basebeans.com>)
Responses:
  Re: What popular, large commercial websites run
  Re: What popular, large commercial websites run
List: pgsql-general
On Mon, 29 Apr 2002, pgsql-gen Newsgroup wrote:

> The way I see it, some managers will buy Oracle. They will have low
> profit margins. Some programmers will use PostgreSQL. They will have
> high margins.

That's all well and good provided Postgres and Oracle were 100% feature compatible. They're not. Want inter-database queries? Too bad. Replication? Nope. Parallel queries? Scratch that. Packages? Big goose-egg. Ada-style error catching? Zero. IN/OUT variables? Only in your dreams, buddy. Views that take parameters? Zilch. Want to actually drop your foreign keys, or change their status or triggering order for loading purposes? Not here.

Are these issues being addressed? Sure they are. But I think I've pretty much proven that Oracle has things Postgres doesn't, and can do things Postgres can't. If you're a large corporation that needs replication, parallel database queries, or rollback segments instead of MVCC (to avoid tainting the datafiles with invalid/old data), you don't have a choice. It's either Oracle or DB2, really.

The fact is, we've used Postgres for about 5 years now. I'm recommending migrating off of it at this very moment. Why? MVCC. When I finally got sick of doing a full database dump and restore every month, and a full vacuum every two hours to avoid rampant datafile growth, I made the official decision to ditch Postgres.

Why are our databases bloating, even after hourly full vacuums? Because we have a database with a 50-100% data turnover rate at about 100,000 rows, and Postgres just can't handle it. I've watched our 100 MB database grow to 500 MB, then to 2 GB. Full dump and restore? 70 MB again. Oh, and the spiking load and table locks that occur during full vacuums? Just take the hit, web surfers be damned.

For us, Oracle keeps live statistics on the data in real time. No analyze. It also uses rollback segments to serve old versions of data when locks are present, instead of MVCC.
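To make the turnover arithmetic above concrete, here is a minimal toy model in Python. The constants (75% hourly turnover, a two-hour vacuum interval) and the function name `tuples_on_disk` are my own illustrative assumptions, not measurements from the system described; the model only captures the one mechanism at issue, namely that every UPDATE under MVCC leaves a dead row version in the table file until a vacuum reclaims it.

```python
# Toy model of MVCC table bloat under heavy update turnover.
# Assumed figures (hypothetical): 100,000 live rows, 75% of rows
# updated per hour, vacuum every 2 hours.

LIVE_ROWS = 100_000      # steady-state row count from the post
TURNOVER = 0.75          # post says 50-100% turnover; assume 75%

def tuples_on_disk(hours, vacuum_interval=None):
    """Total row versions (live + dead) stored after `hours` of updates."""
    dead = 0
    for h in range(1, hours + 1):
        # Each update writes a new row version and strands the old one.
        dead += int(LIVE_ROWS * TURNOVER)
        if vacuum_interval and h % vacuum_interval == 0:
            dead = 0                     # vacuum reclaims dead versions
    return LIVE_ROWS + dead

# A day of updates with no vacuum stores ~19x the live data:
print(tuples_on_disk(24))        # -> 1900000
# Vacuuming every 2 hours caps the growth, at the cost of the
# load spikes and table locks the post complains about:
print(tuples_on_disk(24, 2))     # -> 100000
```

The sketch shows why the post's numbers are plausible: with near-total turnover, on-disk size is dominated by dead versions unless vacuums run constantly, which is the trade-off being objected to.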
MVCC leaves the old version of the row *in* the table data, right next to unchanged rows, making vacuum necessary to clean up and to point to the newest row versions without a sequential scan. Rollback segments just put the old versions in the segment; if things change, they reapply the data, and no harm done. No datafile growth. No old versions. No table scans to find valid rows, no vacuums.

Does it cost more? Sure. But until Postgres can solve these important problems, we have no other choice, regardless of how much we want to go the cheaper route. It's not always about money.

--
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+
| Shaun M. Thomas                INN Database Administrator            |
| Phone: (309) 743-0812          Fax : (309) 743-0830                  |
| Email: sthomas@townnews.com    AIM : trifthen                        |
| Web : www.townnews.com                                               |
|                                                                      |
| "Most of our lives are about proving something, either to            |
|  ourselves or to someone else."                                      |
|                                       -- Anonymous                   |
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+