Heater wrote: ↑Sun Jun 24, 2018 12:10 pm
    Then there is the whole floating point fiasco. To quote the common example in JS:

    Code: Select all
    > 0.3 + 0.3 + 0.3 == 0.9
    false

    Which is not a JS problem as such, it's common to all languages using IEEE floats.

Well, the trouble here is that we are using binary digital computers. An analogue computer might fare better, or a digital computer that works in decimal.
0.3 cannot be represented exactly in binary; the nearest double is 0x1.3333333333333p-2.
You can see the fraction is recurring and would require infinite precision to represent all of it. That cannot happen, so the sums don't add up.
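You can see this for yourself with printf's "%a" conversion, which prints the exact binary (hex-float) value a double actually holds. A minimal sketch:

Code: Select all
#include <stdio.h>

int main(void)
{
    /* %a shows the exact bits: 0.3 prints as 0x1.3333333333333p-2 */
    printf( "0.3 is stored as %a\n", 0.3 );
    printf( "0.3+0.3+0.3 is stored as %a\n", 0.3 + 0.3 + 0.3 );
    printf( "0.9 is stored as %a\n", 0.9 );
    /* The accumulated rounding error makes this comparison false (0) */
    printf( "0.3+0.3+0.3 == 0.9 is %d\n", 0.3 + 0.3 + 0.3 == 0.9 );
    return 0;
}

The sum and 0.9 round to doubles that differ in the last bit, which is why the comparison fails even though every term "looks" exact in decimal.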
We now have decimal floating point in C, and YES, it works: 0.3 + 0.3 + 0.3 == 0.9!
Code: Select all
#include <stdio.h>

int
main( int argc, const char *argv[] )
{
    /* _Decimal64: IEEE 754-2008 decimal64 type, supported by GCC on most targets */
    _Decimal64 a, b, c, result;
    a = 0.3DD;
    b = 0.3DD;
    c = 0.3DD;
    result = 0.9DD;
    /* Prints 1: 0.3 is exact in decimal, so the sum compares equal */
    printf( "a+b+c==result is %d\n", a + b + c == result );
    return 0;
}
This might be good for "currency" arithmetic, which is always a problem in binary floating point.
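Until decimal floats are available everywhere, the usual workaround for currency is to do the arithmetic in integers of the smallest unit (cents). A minimal sketch comparing the two:

Code: Select all
#include <stdio.h>

int main(void)
{
    /* Add ten cents a hundred times, in binary doubles and in integer cents */
    double d = 0.0;
    long cents = 0;
    for (int i = 0; i < 100; i++) {
        d += 0.10;      /* 0.10 is not exact in binary; error accumulates */
        cents += 10;    /* exact: integer addition never rounds */
    }
    printf( "double total: %.17g (== 10.0 is %d)\n", d, d == 10.0 );
    printf( "cents total:  %ld.%02ld\n", cents / 100, cents % 100 );
    return 0;
}

The double total drifts away from 10.0 after a hundred additions, while the integer total is exactly 1000 cents. Decimal floats give you the same exactness without having to track the scale factor by hand.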
Decimal floating point is part of IEEE 754-2008 and ISO/IEC/IEEE 60559; the C bindings (_Decimal64 and friends) were specified in ISO/IEC TS 18661-2.
In C++ it is both, as far as I can tell: GCC's libstdc++ provides "std::decimal::decimal64" (from TR 24733) in the <decimal/decimal> header, built on the same underlying "__decfloat64" support.