Re: Storing sensor data - Mailing list pgsql-performance

From: Nikolas Everett
Subject: Re: Storing sensor data
Date:
Msg-id: d4e11e980905280638j41f25d0fq594e181e36cd0d62@mail.gmail.com
In response to: Re: Storing sensor data (Heikki Linnakangas <heikki.linnakangas@enterprisedb.com>)
Responses: Re: Storing sensor data
List: pgsql-performance
Option 1 is somewhere between two and three times more work for the database than option 2.

Do you need every sensor update to hit the database?  In a situation like this I'd be tempted to keep the current values in the application itself and then sweep them all into the database periodically.  If some of the sensor updates need to hit the database sooner, you could push those in as you get them rather than wait for your sweeper.  This setup has the advantage that you can scale up the number of sensors and the frequency at which the sensors report without having to scale up the disks.  You can also do the sweeping all in one transaction, or even in one batch update.
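To make that concrete, here is a rough sketch of the buffer-and-sweep idea in Python with psycopg2. The sensor_log table, its columns, and the 60-second interval are all hypothetical, not something from this thread; adjust them to whatever schema you actually use.

# Buffer the latest reading per sensor in memory and flush periodically.
import time
import psycopg2
from psycopg2.extras import execute_values

latest = {}  # sensor_id -> (timestamp, value), kept in the application

def record(sensor_id, ts, value):
    # Called on every sensor update; touches memory only, not the database.
    latest[sensor_id] = (ts, value)

def sweep(conn):
    # Flush everything buffered so far as one batch, inside one transaction.
    rows = [(sid, ts, val) for sid, (ts, val) in latest.items()]
    if not rows:
        return
    with conn.cursor() as cur:
        execute_values(cur,
            "INSERT INTO sensor_log (sensor_id, ts, value) VALUES %s", rows)
    conn.commit()
    latest.clear()

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=sensors")
    while True:
        time.sleep(60)  # sweep interval; tune to your latency requirements
        sweep(conn)

The sketch is single-threaded for simplicity; a real collector would put a lock or a queue between record() and sweep(), and any sensor that must hit the database immediately can simply bypass the buffer and insert directly.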


On Thu, May 28, 2009 at 9:31 AM, Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> wrote:
Ivan Voras wrote:
The volume of sensor data is potentially huge, on the order of 500,000
updates per hour. Sensor data is a few numeric(15,5) numbers.

Whichever design you choose, you should also consider partitioning the data.


Amen.  Do that.
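For reference, a sketch of what time-based partitioning could look like for this kind of table. It uses declarative range partitioning as found in recent PostgreSQL releases (at the time of this thread the same layout was typically built with table inheritance and CHECK constraints); the table and column names are hypothetical.

# Create a range-partitioned table with one partition per month.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS sensor_log (
    sensor_id integer        NOT NULL,
    ts        timestamptz    NOT NULL,
    value     numeric(15,5)  NOT NULL
) PARTITION BY RANGE (ts);

-- One partition per month; create new ones ahead of time (e.g. from cron).
CREATE TABLE IF NOT EXISTS sensor_log_2009_05
    PARTITION OF sensor_log
    FOR VALUES FROM ('2009-05-01') TO ('2009-06-01');
"""

with psycopg2.connect("dbname=sensors") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)

Partitioning by time also makes retention trivial: dropping an old partition is a metadata operation instead of a huge DELETE.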
