Re: Temporal Databases - Mailing list pgsql-general
| From | Rodrigo Sakai | 
|---|---|
| Subject | Re: Temporal Databases | 
| Date | |
| Msg-id | 003001c642c8$e2fd47a0$1401a8c0@netsis01xp | 
| In response to | Temporal Databases ("Rodrigo Sakai" <rodrigo.sakai@poli.usp.br>) | 
| Responses | Re: Temporal Databases | 
| List | pgsql-general | 
Ok, but actually I'm not concerned about logging old values. I'm concerned
about checking temporal constraints: entity integrity (PK) and referential
integrity (FK).
For example, take the salary table:
Salary (employee_id, salary, start_date, end_date)
where [start_date, end_date] is an interval, meaning that the salary is (was)
valid during that period of time.
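In SQL the table would be something like this (the column types here are just
my assumption):

CREATE TABLE salary (
    employee_id  text    NOT NULL,
    salary       numeric NOT NULL,
    start_date   date    NOT NULL,
    end_date     date    NOT NULL
);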
I have to avoid this occurrence:
      employee_id  salary  start_date  end_date
      001          1000    2005-20-01  2005-20-12
      001          2000    2005-20-06  2006-20-04
So, it is necessary to compare intervals, not only atomic values. If you want
to know what the salary was on 2005-25-07, it is impossible to tell. The data
is inconsistent!
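Just to illustrate why intervals have to be compared: finding this kind of
inconsistency is an interval-overlap test against the table above, something
like this sketch (rows sharing the same start date are ignored to keep it
short):

-- pairs of rows for the same employee whose validity periods overlap
SELECT a.employee_id,
       a.salary AS salary_1, a.start_date AS start_1, a.end_date AS end_1,
       b.salary AS salary_2, b.start_date AS start_2, b.end_date AS end_2
FROM   salary a
JOIN   salary b
  ON   a.employee_id = b.employee_id
 AND   a.start_date  < b.start_date      -- report each overlapping pair once
WHERE  b.start_date <= a.end_date;       -- the later row starts before the earlier one ends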
Of course I can develop some functions and triggers to accomplish this work.
But the idea is to keep it simple for the developers, as simple as declaring a
primary key!
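Just to show what I mean by functions and triggers, a rough sketch (INSERT
only, no UPDATE handling, no protection against concurrent sessions):

CREATE FUNCTION salary_no_overlap() RETURNS trigger AS $$
BEGIN
    -- reject a new period that overlaps an existing one for the same employee
    IF EXISTS (
        SELECT 1
        FROM   salary s
        WHERE  s.employee_id = NEW.employee_id
        AND    s.start_date <= NEW.end_date
        AND    NEW.start_date <= s.end_date
    ) THEN
        RAISE EXCEPTION 'overlapping salary period for employee %',
            NEW.employee_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER salary_temporal_pk
    BEFORE INSERT ON salary
    FOR EACH ROW EXECUTE PROCEDURE salary_no_overlap();

It works, but it has to be written again for every temporal table, and that is
exactly the kind of boilerplate I would like to avoid.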
Thanks for your attention!!
----- Original Message -----
From: "Brad Nicholson" <bnichols@ca.afilias.info>
To: "Simon Riggs" <simon@2ndquadrant.com>
Cc: "Rodrigo Sakai" <rodrigo.sakai@poli.usp.br>; "Michael Glaesemann"
<grzm@myrealbox.com>; <pgsql-general@postgresql.org>
Sent: Friday, February 24, 2006 1:56 PM
Subject: Re: [GENERAL] Temporal Databases
> Simon Riggs wrote:
>
>>A much easier way is to start a serialized transaction every 10 minutes
>>and leave the transaction idle-in-transaction. If you decide you really
>>need to you can start requesting data through that transaction, since it
>>can "see back in time" and you already know what the snapshot time is
>>(if you record it). As time moves on you abort and start new
>>transactions... but be careful that this can affect performance in other
>>ways.
>>
>>
>
> We're currently prototyping a system (still very much in its infancy)
> that uses the Slony-I log shipping mechanism to build an offline temporal
> system for point-in-time reporting purposes.  The idea is that the log
> shipping files will contain only the committed inserts, updates and
> deletes.  Those log files are then applied to an offline system which has
> a trigger defined on each table that re-writes the statements, based on
> the type of statement, into a temporally sensitive format.
>
> If you want to get an exact point-in-time snapshot with this approach, you
> are going to have to have timestamps on all tables in your source database
> that contain the exact time of the statement.  Otherwise, a best guess
> (based on the time the Slony sync was generated) is the closest that you
> will be able to come.
>
> --
> Brad Nicholson  416-673-4106   Database Administrator, Afilias Canada
> Corp.
>
>
>
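(For the record, the serialized-transaction trick Simon describes would be
roughly along these lines -- only a sketch, and recording the snapshot time is
my own addition:)

BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT now() AS snapshot_time;   -- record what the snapshot time is
-- leave the session idle-in-transaction; it keeps seeing the data as of the
-- snapshot, so it can be queried "back in time" later, for example:
SELECT salary FROM salary WHERE employee_id = '001';
-- periodically (e.g. every 10 minutes) abort it and start a new one:
ROLLBACK;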
		