Calc: wrong calculation? Would like a recheck


Increasing the urgency: I have reproduced the issue cleanly within the ‘15–16 decimal digit accuracy range’ claimed by Calc, with older versions as well, and on another system (Linux with an older LO 7.1 alpha).

This rules out almost all local influences, unless Intel Xeon CPUs or Lenovo hardware have a very exotic bug.

Since Calc calculates more precisely for the status line - which calculates/rounds differently, according to @erAck - I think this is an ‘error in LO’.

And: the error is not ‘only rounded for display’ - it does affect downstream calculations.

I’m actually quite sure about the bug; on the other hand, it’s hard to believe that such an inaccuracy could survive undiscovered in Calc for so long. To be sure it is not some exotic failure here, or e.g. a special option that other users don’t set the way I do, it needs external tests: please type in the example from the short screenshot and comment whether you calculate ‘0’ or a somehow reasonable value.

Thank you very much - you can save me hours of experimenting with that.


Miscalculations in calc? Request for a cross-check

I have had the impression for some time that Calc sometimes miscalculates. When I complained about it, it was mostly explained away with ‘fp calculations’, ‘rounding’, ‘calculations better than the rounded display’ and the like.

  • The problems are ‘not so easy’ to catch and nail down because they are often hidden behind the rounded display.

Today I could narrow down two such strange things, and I think they are bugs rather than ‘fp shortcomings’. Before I annoy developers unnecessarily, I am asking for a counter-check: error? Or ‘the child is not squinting, it is supposed to look like this’? Or are the values calculated better on other systems?

See the attached sample file for your own experimenting.

It would be nice if a sample document worked in any locale - i.e. not using a simple VALUE() on a string with a decimal separator, but NUMBERVALUE() instead, which allows specifying the separator(s) used.

Anyway, you hit an effect of the approxAdd() function, which is normally used to eliminate accuracy limitations when adding/subtracting values of opposite sign and similar magnitude: by scaling to 2^-48 and thus rounding off the least significant 4 bits it yields a result of 0.0, i.e. it lets 0.3-0.2-0.1 == 0.0.

For example, C13+B13+A13 is calculated as ((9007199254740986+6) ± 9007199254740986), which is ((9007199254740992) ± 9007199254740986), of which 9007199254740992 is not an unambiguously representable integer (it is > 2^53-1). This could perhaps be enhanced to check whether the difference of the absolute values is larger than a certain threshold and then not fall into the scaling.

However, note that the results of adding C+B+A in this example from row 13 on will never be as mathematically expected, because all values C+B exceed the unambiguous 2^53-1 integer range and have a precision of 2 (which is the “granularity ‘2’” you noted in yellow) for all values between 2^53 and 2^54-1, then a precision of 4 for the next magnitude, then 8, and so on.
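The snap-to-zero behaviour described above can be sketched in a few lines. This is a rough Python model of the idea only, not Calc’s actual rtl::math implementation; the names `approx_equal` and `approx_add` and the exact comparison used are assumptions, with only the 2^-48 relative threshold taken from the explanation above:

```python
# Rough sketch of the approxAdd() idea: a 2^-48 relative threshold
# corresponds to ignoring the least significant 4 bits of the
# 52-bit mantissa, as described above. NOT Calc's real implementation.
EPS = 2.0 ** -48

def approx_equal(a: float, b: float) -> bool:
    """True if a and b differ by less than 2^-48 of their magnitude."""
    if a == b:
        return True
    return abs(a - b) < max(abs(a), abs(b)) * EPS

def approx_add(a: float, b: float) -> float:
    """Snap sums of opposite-signed, nearly equal values to exactly 0.0."""
    if (a < 0.0 < b or b < 0.0 < a) and approx_equal(a, -b):
        return 0.0
    return a + b

print(0.3 - 0.2 - 0.1)                          # plain IEEE 754: -2.7755575615628914e-17
print(approx_add(approx_add(0.3, -0.2), -0.1))  # snapped: 0.0
```

This shows why 0.3-0.2-0.1 == 0.0 in Calc while a plain double subtraction leaves a tiny residue.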

FWIW, the calculation in the first image is a bug with the same cause: scaling too early. Note, however, that the result could be either 1.936875 or 1.9375 or 1.875, depending on the order of calculation, as values between 2^49 and 2^50 have a precision of 0.125 and values between 2^48 and 2^49 a precision of 0.0625.
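The magnitude-dependent granularity can be verified directly in any IEEE 754 double environment; Python is used here only because its floats are the same doubles, with `math.ulp` reporting the spacing of adjacent values at a given magnitude:

```python
import math

# ULP (unit in the last place) grows with magnitude: doubles between
# 2^53 and 2^54 can only represent even integers - the "granularity 2"
# noted in the sheet - and smaller magnitudes have finer spacing.
print(math.ulp(2.0 ** 53))         # 2.0
print(math.ulp(2.0 ** 49))         # 0.125
print(math.ulp(2.0 ** 48))         # 0.0625
print(2.0 ** 53 + 1 == 2.0 ** 53)  # True: the +1 is absorbed
```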

I wouldn’t be upset if someone corrected me, but imho the idea of concealing deviations of one (or two?) ULP by cutting off the last 4 bits - and thus up to 31 ULP - is nonsense.
If the idea is to fix integer calculations that way, it should be limited to integers - which it is not, see: 0.856090000000007 - 0.856090000000004 = 0.
And if the extended purpose is to fix irritating fp addition errors, then the mechanism should be limited to finding better - shorter - decimal strings within ±1 (2?) ULP, because that is the magnitude of the errors that occur when dyadic approximations stand in for decimals.
The (best known?) error ‘0.1 + 0.2 = 0.30000000000000004’ is in the last bit: the mantissa ending in ~0011 represents 0.3 better than the ~0100 which results from the addition (weitz).
I could voice the suspicion that such ‘intended inaccuracies’ help in some cases but cause other calculation errors or formula problems … ???
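The last-bit claim above is easy to check: `float.hex` exposes the stored mantissa, and the sum 0.1 + 0.2 indeed lands exactly one ULP above the nearest double to 0.3:

```python
import math

# float.hex shows the stored mantissa: 0.3 rounds to a mantissa ending
# in hex digit 3 (...0011), while 0.1 + 0.2 comes out one ULP higher,
# ending in hex digit 4 (...0100) - the last-bit difference noted above.
print(float.hex(0.3))        # 0x1.3333333333333p-2
print(float.hex(0.1 + 0.2))  # 0x1.3333333333334p-2
print((0.1 + 0.2) - 0.3 == math.ulp(0.3))  # True: off by exactly one ULP
```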
continued in next comment:

When it comes to fp subtraction errors for numbers of similar size (or addition with different signs), the situation becomes much more difficult, and the ‘truncation approach’ is helpless imho, as the last bits are zeroed anyway by the bit shifting that follows the ‘cancellation’ of the upper bits.
The error that 0.2001 - 0.2 = 9.99999999999890E-05 (Calc) or 9.999999999998899E-5 (weitz) - correct to 17 decimal places, yet already ‘off’ after 12 significant decimal digits - will not be corrected by a 4-bit tie to (dyadic) zero.
A better ‘decimally correct’ result would need more bits rather than fewer; one could search within the magnitude difference between arguments and result - 2^11 numbers in the above example? - goodbye performance. And as long as the original (decimal) arguments aren’t checked, a ‘better’ result may still be ‘better wrong’.
I admit that fp calculations are not trivial, but I am not satisfied with the above results. @erAck: would you support a ‘bug’ for this?