Re: Adding IEEE 754:2008 decimal floating point and hardware support for it - Mailing list pgsql-hackers
From | Tom Lane
---|---
Subject | Re: Adding IEEE 754:2008 decimal floating point and hardware support for it
Date |
Msg-id | 23128.1370997322@sss.pgh.pa.us
In response to | Adding IEEE 754:2008 decimal floating point and hardware support for it (Craig Ringer <craig@2ndquadrant.com>)
Responses | Re: Adding IEEE 754:2008 decimal floating point and hardware support for it; Re: Adding IEEE 754:2008 decimal floating point and hardware support for it
List | pgsql-hackers
Craig Ringer <craig@2ndquadrant.com> writes:
> Currently DECIMAL is an alias for NUMERIC, Pg's built-in arbitrary
> precision and scale decimal type. I'd like to explore the possibility of
> using hardware decimal floating point support in newer processors,
> compilers and C libraries to enhance DECIMAL / NUMERIC performance.

As near as I can tell, there is no such hardware support. The Intel paper
you reference describes a pure-software library, and states "A software
implementation was deemed sufficient for the foreseeable future". The
source code for that library is apparently available under a liberal
license. It might be more useful to eyeball what they did and see if we
can learn anything towards speeding up the existing variable-precision
NUMERIC type.

> The main thing I'm wondering is how/if to handle backward compatibility
> with the existing NUMERIC and its DECIMAL alias, or whether adding new
> DECIMAL32, DECIMAL64, and DECIMAL128 types would be more appropriate.
> I'd love to just use the SQL standard type name DECIMAL if possible,
> and the standard would allow for it (see below), but backward compat
> would be a challenge, as would coming up with a sensible transparent
> promotion scheme from 32->64->128->numeric and ways to stop undesired
> promotion.

Indeed. I think you're basically between a rock and a hard place there.
It would be very, very difficult to shoehorn such types into the existing
numeric hierarchy if you wanted any sort of transparency of behavior, I
fear. On the other hand, I doubt that it's going to work to make the
existing numeric type switch to the "hardware" representation for
suitably-constrained columns, because what are you going to do when, say,
the result of an addition overflows the hardware width? You can't just
throw an error immediately, because you won't know whether the output is
supposed to be getting shoved back into a limited-width column or not.

And on top of that, you have the very strong likelihood that the
"hardware" implementation(s) won't behave exactly like our existing
NUMERIC routines --- for instance, I'd bet a nickel that Intel took more
care with last-place roundoff than our code does. So now we would have
not just backwards-compatibility worries, but platform-dependent results
for a data type that didn't use to have any such issue. I think people
who expect NUMERIC to be exact would really get bent out of shape about
that idea.

On the whole, I think the effort would be a lot more usefully spent on
trying to make the existing NUMERIC support go faster.

			regards, tom lane
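As a concrete illustration of the overflow and last-place-rounding behavior discussed above, here is a minimal C sketch. It assumes GCC's `_Decimal64` extension on x86-64, where IEEE 754-2008 decimal64 arithmetic is provided in software by libgcc's BID routines rather than by hardware; it is not PostgreSQL code, just a demonstration of what a fixed-width decimal format does where the arbitrary-precision NUMERIC stays exact.

```c
/*
 * Sketch only: shows two ways a fixed-width IEEE 754-2008 decimal64
 * differs from PostgreSQL's NUMERIC.  Assumes GCC with decimal float
 * support enabled (the default on x86-64 Linux); glibc's printf cannot
 * format _Decimal64 directly, so values are cast to double for output.
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* decimal64 carries 16 significant decimal digits */
    _Decimal64 big = 1E16DD;          /* 10000000000000000 */
    _Decimal64 sum = big + 1.0DD;     /* exact result needs 17 digits */

    /*
     * The +1 is rounded away, so sum compares equal to big; NUMERIC
     * would return 10000000000000001 exactly.
     */
    printf("1e16 + 1 == 1e16 ?      %s\n", (sum == big) ? "yes" : "no");

    /*
     * Arithmetic can also run off the end of the format entirely:
     * decimal64's largest finite value is about 9.999999999999999e384,
     * so this product overflows to +Infinity instead of widening.
     */
    _Decimal64 huge = 9E384DD;
    _Decimal64 over = huge * 10.0DD;
    printf("9e384 * 10 is infinite? %s\n",
           isinf((double) over) ? "yes" : "no");

    return 0;
}
```

Built with a reasonably recent gcc and no special flags, both questions print "yes"; NUMERIC, by contrast, would yield 10000000000000001 and 9e385 exactly, which is the backwards-compatibility and overflow-handling problem described in the mail.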