Calc’s ROUND is implemented using RoundNumber, which in turn uses rtl_math_round. The latter first multiplies the passed number by the relevant power of 10; in this case that is 10^16, which has 37 significant bits in its mantissa. The value being multiplied (~0.30000000000000004) has 52 significant bits in its mantissa, and naturally, the multiplication increases the number of significant positions (since neither operand is a whole power of 2). And of course, the operation introduces its own inaccuracy (there’s no way to store more than 52 significant bits of the result in the mantissa). The end result is exactly 3000000000000000.5.
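This first step can be observed directly with any IEEE 754 doubles; here is a minimal Python sketch (Python floats are binary64 doubles):

```python
# 0.1 + 0.2 yields the double just above 0.3:
# exactly 0.3000000000000000444089209850062616169452667236328125
x = 0.1 + 0.2

# 10^16 is exactly representable as a double, but the product is not:
# doubles near 3e15 are spaced 0.5 apart, so ~3000000000000000.444...
# rounds to exactly 3000000000000000.5.
product = x * 10**16
print(product)  # 3000000000000000.5
```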
Then, since the result (~3e15) is smaller than 2^52, the value is incremented by 0.5 and rounded using rtl::math::approxFloor. The latter uses rtl_math_approxValue, which doesn’t change anything in this case, so the value increased by 0.5 (i.e., exactly 3000000000000001) simply gets floored. The result (still 3000000000000001) is divided by 10^16, which gives ~0.30000000000000009992.
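The whole sequence of steps can be replayed with plain doubles (a sketch of the arithmetic only; the real rtl_math_round also handles other rounding modes and magnitudes):

```python
import math

x = 0.1 + 0.2            # ~0.30000000000000004
scaled = x * 10**16      # exactly 3000000000000000.5 after double rounding
# scaled is below 2^52, so the half-away-from-zero branch applies:
# add 0.5 and floor (the approxFloor correction changes nothing here).
floored = math.floor(scaled + 0.5)   # exactly 3000000000000001
result = floored / 10**16
print(result)            # 0.3000000000000001
print(result == x)       # False: rounding moved the value away from the input
```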
(By the way: I prefer Base Convert: IEEE 754 Floating Point as a handier tool for looking into the binary representation of a double.)
I don’t know if there’s a more accurate method to round to a specified number of decimals. I hope there is; and I can imagine that Excel could use one. However, I see a common logical fallacy in the question:
“this only works in Excel, which ‘fixes’ the value even if you round to 25 decimal places (the error should actually be preserved)”
…
“it would be nice if this would work better … and it is possible, see Excel.”
The text assumes that Excel is indeed able to do that properly, but nothing in the text proves it. The assumption is based on a single data point. A random result that happens to be closer in one application than in another tells you nothing about the accuracy of the method used in either of them. You can only tell by performing a systematic test on a uniformly distributed set of numbers without a common pattern (e.g., numbers with very different counts of significant decimals must be tested). For instance, Excel might simply cap the decimal count at 15, or (more flexibly) at a value that does not exceed 15 significant digits of the passed value.
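Such a systematic test is easy to sketch. Here is one possible harness (my own sketch, testing only the naive scale-and-floor scheme, not Calc’s or Excel’s actual code): it compares that scheme against a reference that rounds the exact decimal expansion of each double, which Python’s decimal module can compute exactly.

```python
import math
import random
from decimal import Decimal, ROUND_HALF_UP

def naive_round(x, decimals):
    """Round half away from zero by scaling with a power of 10, then flooring."""
    scaled = x * 10**decimals
    return math.copysign(math.floor(abs(scaled) + 0.5), x) / 10**decimals

def reference_round(x, decimals):
    """Round the *exact* decimal value of the double, then take the nearest double."""
    q = Decimal(1).scaleb(-decimals)   # 10^-decimals
    return float(Decimal(x).quantize(q, rounding=ROUND_HALF_UP))

random.seed(0)
trials = 1000
mismatches = 0
for _ in range(trials):
    x = random.uniform(-1000, 1000)   # vary magnitude...
    d = random.randrange(0, 16)       # ...and decimal count
    if naive_round(x, d) != reference_round(x, d):
        mismatches += 1
print(mismatches, "mismatches in", trials, "trials")
```

For example, naive_round(0.1 + 0.2, 16) gives 0.3000000000000001, while the reference gives 0.3, demonstrating the discrepancy discussed above.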
But yes, it would be nice if someone suggested such an algorithm; even simply showing that this is possible using doubles (by properly testing Excel) would already be a great result.
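For what it’s worth, one conceivable candidate (my suggestion only, not something Calc or Excel is known to use) is to go through a fixed-point decimal string instead of scaling by a power of 10: printf-style formatting rounds the exact decimal expansion of the double correctly, and parsing back gives the nearest double, which reproduces both behaviors the question describes:

```python
def round_via_string(x, decimals):
    # Format the double's exact value to the requested number of decimals
    # (this formatting is correctly rounded), then parse back to a double.
    return float(f"{x:.{decimals}f}")

x = 0.1 + 0.2                        # ~0.30000000000000004
print(round_via_string(x, 16))       # 0.3 -- the representation error is "fixed"
print(round_via_string(x, 25) == x)  # True -- past the precision, the error survives
```

One caveat: this formatting rounds exact ties to even (f"{0.5:.0f}" is "0"), whereas spreadsheet ROUND is half-away-from-zero, so exactly representable tie cases would need separate handling.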