calc: rounding: general precision problem?

since they are not guaranteed to be present on all its supported platforms.
understood, thought of that, but that blocks progress. what about using them and, if they are not available, displaying a warning ‘your hardware is not sufficient for best precision’?
different results … that would be neither interoperable nor consistent.
yeahh, that’s exactly what i’m saying about displaying different values than the ones used for calculating.
Which would introduce a systematic error towards negative infinity.
yeah, and … just ‘brainstorming’ … that would be ok for values where the ‘value used for representation’, e.g. 0.299999999999999988898, is below the ‘less decimals’ ‘value to be represented’ - 0.3000000000000000,
and it would be needed the other way around for values where the exact value of the ‘representer’ is above the represented ‘less decimals’ value, as for e.g. 0.1000000000000000 <-> 0.100000000000000005551 or 0.2000000000000000 <-> 0.200000000000000011102 …
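(for illustration, a minimal Python sketch - the standard decimal module converts a double exactly, so it shows which side each stored value falls on:)

```python
from decimal import Decimal

# Decimal(<float>) converts the double exactly, without further rounding,
# revealing which side of the decimal value the stored double is on.
print(Decimal(0.3))  # 0.29999999999999998889776975374843... -> below 0.3
print(Decimal(0.1))  # 0.10000000000000000555111512312578... -> above 0.1
print(Decimal(0.2))  # 0.20000000000000001110223024625156... -> above 0.2
```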
but how to detect?

but that blocks progress

No, it doesn’t. Using that would only shift the problem to higher bits, not eliminate the cause. Additionally, that’s not an actual problem that prevents efficient work with the program; it only surprises people on first encounter. Billions of documents are created and used in spreadsheet software each day, and they work as intended. The perceived problem only feels that big to a tiny fraction of users. E.g., you have already spent several years trying to “improve” that without any positive result, IIUC.

but how to detect?

Exactly, that’s absolutely impossible. When you have a value, you don’t have any way to know whether it is exact, or rounded from a smaller decimal value, or from a larger one. The binary number is just a binary number, nothing more.
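(to illustrate: in a minimal Python sketch, a computed sum and a directly typed literal produce the bit-identical double, so nothing in the value records its origin:)

```python
a = 0.1 + 0.2            # a computed result
b = 0.30000000000000004  # the same value typed in directly
print(a == b)            # True: bit-for-bit the same double
print(a.hex(), b.hex())  # both print 0x1.3333333333334p-2
```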

mh … don’t like ‘impossible’ … much effort - yes, stretching my capabilities - yes, impossible - no,
mostly when someone has a result he also knows about the calculation it evolved from, e.g. 0,1 + 0,2 shouldn’t have more than one decimal digit,
if i have a value in a range where at most 16 decimal digits are accurate (0,3) and the value calculates to ~0,300000000000000044409, rounding should be ok; the problem is to get it working correctly,
i found a formula for gnumeric which is a little less cryptic:

=value(text(A6,if(2^(-52+int(log(abs(A6),2)))<10^(-15+int(log(abs(A6),10))),"0.000000000000000E+000","0.00000000000000E+000")))  

that does better rounding than ‘normal’ ROUND as long as the deviation in the result / value doesn’t exceed 0,5 ‘decimal-ULPs’ for that range; unfortunately it does not work in calc, because digits beyond the 15th are not displayed, and what you can work out with RAWSUBTRACT is falsified by the additional deviation of the representation of 0.3 … :frowning:
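(my reading of the formula’s idea, transcribed as a Python sketch - not a verified equivalent: use 16 significant digits where the binary ULP is finer than the decimal one, otherwise 15, then round-trip through text; snap_to_decimal is just an illustrative name:)

```python
import math

def snap_to_decimal(x: float) -> float:
    """Round x to 15 or 16 significant decimal digits - 16 only where the
    binary ULP at x is finer than the 16-digit decimal ULP - then convert
    the text back to the nearest double."""
    if x == 0.0:
        return 0.0
    binary_ulp = 2.0 ** (math.floor(math.log2(abs(x))) - 52)
    decimal_ulp = 10.0 ** (math.floor(math.log10(abs(x))) - 15)
    digits = 16 if binary_ulp < decimal_ulp else 15
    return float(f"{x:.{digits - 1}e}")  # N sig. digits = N-1 after the point

print(0.1 + 0.2)                   # 0.30000000000000004
print(snap_to_decimal(0.1 + 0.2))  # 0.3
```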

when someone has a result he also knows about the calculation it evolved from

The “someone” is Calc, and it does not know which calculation a random number comes from (and even if it tried, all it could know would be “I need to ROUND a number resulting from the summation of 0.200000000000000011102230246251565404236316680908203125 and 0.1000000000000000055511151231257827021181583404541015625”, which are the only numbers it has). A number may come from a complex formula, or from a UDF, or from user input. It is not possible to know how precise the result must be. There is such a word as “impossible”; and you ignore that, still repeating the “idontbelieveyouyouallareliersiamtheonlyonewhoknowsthetruth!!!111111” mantra. But - well, not surprising.

@mikekaganski:
imho progress evolves from trying again, even if it looks difficult, even if it is! difficult, do not focus on the failures but on the goal,
if calc did the mentioned rounding to 16 decimal places correctly, things would look better for me - and for calc?
remember, the question was why calc does it wrong, and whether it can be done better, not whether you have advanced knowledge about ‘impossible’,
why do i insist so persistently on this detail? i’m afraid that other rounding calculations (also to fewer digits?) might fail in the same manner, and that accordingly ‘round your results’ is insufficient advice for users who have problems with the insufficient precision of fp-math (and calc),
your comments sound as if such considerations are not important to you … :frowning:

Progress evolves from thinking. Only those who try again knowing what they do actually move progress; people who keep hitting their heads against the wall are not what is called “progress”.

I don’t think the problems you raise are important. But - you keep missing the point, again and again.

Look: I didn’t say it’s impossible to have a better rounding; I wrote in my answer:

I don’t know if there’s a more accurate method to round to a specified number of decimals. I hope there is …

if someone suggests the algorithm, it would be nice; even if someone simply shows that this is possible using doubles (by doing proper testing of Excel), it would already be a great result.

But you ignored that, and instead you chose to try to make my knowledge (about a specific thing) look like it is “stopping the progress”, while in fact your actions only show how little you care about knowing and understanding. Yes, I know that some things are impossible; and that’s where you keep hitting the wall, instead of thinking.

i’m afraid that other rounding calculations (also to fewer digits?) might fail in the same manner, and that accordingly ‘round your results’ is insufficient advice for users who have problems with the insufficient precision of fp-math (and calc),
your comments sound as if such considerations are not important to you … :frowning:

Yes, I don’t think that “such considerations” (specifically, “i’m afraid” not based on understanding, and without a goal to understand) are important. Because they are wrong. I have even made a step-by-step description of the algorithm, from which you could realize that the expected error is really high when talking about the 16th significant decimal, but decreases exponentially for smaller rounding precision. And that comes from understanding and knowledge, as opposed to “considerations” based on “i’m afraid”.

@mikekaganski:

“Yes, I don’t think that “such considerations” (specifically, “i’m afraid” not based on understanding, and without a goal to understand) are important.”
yes, i expected such an answer from you (stupid insinuations from someone who is always very touchy himself?),
did some digging (as mentioned before: ‘once you know where to search it’s easier to find …’),
imho neither '=ROUND(RAWSUBTRACT(0,3;-48*10^-17);15)' should result in ‘0,300000000000001’, nor should '=ROUND(RAWSUBTRACT(0,3;-499999950*10^-17);8)' produce ‘0,30000001’ (note that i used RAWSUBTRACT to avoid double rounding; ‘+’ failed as well).
the above happens even though the mathematical calculation should give the result 0,300000000000000, the binary representation of 0,3 is ‘short’, and the representation of the value before rounding has a ‘4’ in the critical ‘next digit’.
rounding to fewer decimals has ‘less’ but ‘bigger’! errors, thus they ‘matter’ … imho … not for you? ok :frowning:

rounding to fewer decimals has ‘less’ but ‘bigger’! errors

Oh, lol. So you don’t understand what “error” means.

The error in both these cases is not bigger, but smaller. The error is not 1e-15 or 1e-8; it is by how much the calculation was mistaken when deciding whether the value was closer to 0 or to 1 in the last significant digit.

And in the first case (0.3000000000000004884981308350688777863979339599609375), it was mistaken by ~0.0115e-15; in the second case (0.300000004999999514421205049075069837272167205810546875), the error was ~0.0000000486e-8, so the relative error decreased exponentially each time.

And no, it doesn’t matter much. Both cases are close to both …00 and …01, and the difference is each time practically insignificant (and the error is each time smaller). Try to put a dot with a pencil in the middle of a line, and then try to see which side of the line it’s closer to. The rounding should answer you that; and its error is how much it is off when deciding. No, it’s not much off. Its relative error is about 1 ULP in binary representation. That is not a bad result; but nevertheless, I suppose it might be possible to have better results - simply because it’s possible to print the binary value in decimal notation exactly, so an algorithm that printed it that way, analyzed the trailing digits, and converted the resulting string back to a number would be able to get exactly the closest representation of the correct result - but this would be too slow.
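(for illustration, a minimal Python sketch of that exact-decimal approach, using the standard decimal module instead of manual string analysis; round_exact is just an illustrative name, and I assume 0.3 + 48e-17 here yields the same double as RAWSUBTRACT(0,3;-48*10^-17) in Calc:)

```python
from decimal import Decimal, ROUND_HALF_UP

def round_exact(x: float, decimals: int) -> float:
    """Round the exact decimal expansion of the double x to the given
    number of decimals, then return the nearest double to that result."""
    exact = Decimal(x)                   # exact conversion, no rounding
    step = Decimal(1).scaleb(-decimals)  # 10**-decimals, exactly
    return float(exact.quantize(step, rounding=ROUND_HALF_UP))

# Assumed equivalent of RAWSUBTRACT(0,3;-48*10^-17): the 16th decimal
# digit of the exact expansion is 4, so the result must go down.
print(round_exact(0.3 + 48e-17, 15))  # 0.3, not 0.300000000000001
```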

Or a better definition of the rounding error would be how much farther the found value is from the given value than the correct answer is. This would make the absolute value of the evaluated error exactly twice as big, but the exponential decrease of the relative error would not change.

So for the initial number 0.3000000000000000444089209850062616169452667236328125 and precision 1e-16: the calculated value is 0.3000000000000001 (which is then represented with an error itself, but that’s unrelated), whose difference from the initial value is 0.555910790149937383830547332763671875e-16; and the correct answer 0.3 is 0.444089209850062616169452667236328125e-16 away from the initial value. The absolute error is then 0.111821580299874767661094665527343750e-16, and the relative (to precision 1e-16) error is ~0.112.

Likewise, for 0.3000000000000004884981308350688777863979339599609375 and precision 1e-15, the difference from the calculated 0.300000000000001 is 0.5115018691649311222136020660400390625e-15, and from 0.3 is 0.4884981308350688777863979339599609375e-15; the absolute error is 0.0230037383298622444272041320800781250e-15; the relative (to 1e-15) error is ~0.023.

For 0.300000004999999514421205049075069837272167205810546875 and precision 1e-8, the difference from calculated 0.30000001 is 0.5000000485578794950924930162727832794189453125e-8; from proper 0.3 is 0.4999999514421205049075069837272167205810546875e-8; absolute error is 0.0000000971157589901849860325455665588378906250e-8; relative (to 1e-8) error is ~0.000000097.
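(these figures can be checked mechanically - a small Python sketch with the decimal module, reproducing the 1e-15 case; the other two work the same way, and rounding_error is just an illustrative name:)

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # exact double expansions run to ~50 digits

def rounding_error(initial, rounded, correct, precision):
    """Error as defined above: how much farther the rounded result is
    from the initial value than the correct answer is."""
    absolute = abs(rounded - initial) - abs(correct - initial)
    return absolute, absolute / precision

# Exact expansion of the double, taken from the text above.
initial = Decimal("0.3000000000000004884981308350688777863979339599609375")
abs_err, rel_err = rounding_error(initial, Decimal("0.300000000000001"),
                                  Decimal("0.3"), Decimal("1e-15"))
print(abs_err)  # ~2.3004e-17, i.e. the 0.0230e-15 stated above
print(rel_err)  # ~0.023
```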

@mikekaganski:
the discussions with you are always nice, and always similar,
you try to minimize something with many words and numbers, which is a big mistake.
A rounding is a decision, and has only one quality of error, namely right or wrong, so 100%. That’s where small errors ‘explode’.
In the not too distant future, self-driving cars will autonomously decide whether to pass a tree on the right or on the left; imagine you survive only on one side … and the decision contains a rounding error …

OMG. And then you tell me I “insinuate”. Facepalm.

Your last comment is exactly the same reasoning as in

– What are the chances to meet a live dinosaur on street?

– Exactly 50%: either you meet one, or not.

@mikekaganski:
omg, rofl, YOU MADE MY DAY!!! thank you!
that’s a new style for you, and i LIKE it, humor instead of insults … GREAT.
Mercedes engineers have calculated similarly: the self-driving software takes people, baby carriages, old people with walking aids etc. into account, but ‘containers lying across the road’ were just as little ‘relevant objects in the sense of the brake assistant’ as e.g. bridge piers. i mean, since a container was! in the way, they have improved.
your dinosaur question hangs on the definition of dinosaur (bats are not far away) and of life: the models of Jurassic Park cannot yet reproduce, but are they reasonably intelligent?

All that blablabla about lives is absolutely irrelevant, and only shows how little you know about software development, and how much your idea of the world you live in is based on illusions. But - well, that’s not rare.

One more attempt.

You are making fundamental errors. You try to use rounding to fix imprecise FP calculations, trying to restore the perfect double that is closest to the exact correct decimal. This is wrong; the only way to do it right is when the user rounds a result of calculations to a known precision, not to the maximal precision. A human can do it consciously, for a specific case: e.g., calculating with money, you may safely round to 2 decimals. This would always work correctly when you work with reasonable sums, because your original numbers (with cents) will be off by a very small amount (much less than a cent); their sums and multiples will also differ by values much less than a cent. In that case, ROUND will never have to decide about a value sitting almost exactly at half a cent.
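(a minimal Python illustration of the money case: the accumulated error of a thousand additions stays far below half a cent, so rounding to 2 decimals recovers the exact sum:)

```python
total = sum([0.10] * 1000)  # exact decimal answer: 100.00
print(total)                # 99.9999999999986 - tiny accumulated binary error
print(round(total, 2))      # 100.0 - nowhere near the 0.005 decision boundary
```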

@mikekaganski:
i’m really thankful for your comments, even if they take an opponent’s position, i’ll think about it …
rounding to 2 decimals: '=round(0,30499999999999948;2)' should go down, and does in Excel, but not in Calc; thus it’s a compatibility issue too.
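(that data point can be checked outside a spreadsheet - a minimal Python sketch; Decimal shows the entered double is strictly below the 0,305 midpoint, so a correctly rounded ROUND has to go down:)

```python
from decimal import Decimal

x = 0.30499999999999948
print(Decimal(x))   # 0.304999999999999... - strictly below 0.305
print(round(x, 2))  # 0.3 - Python's correctly rounded round() goes down too
```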
a citation from W. Kahan, citing one of his teachers?: ‘In the 1970s D.H. Lehmer, a renowned Number Theorist at Berkeley, used to warn me …“Acquiescence to rounding errors places you in a state of sin.”’, insofar: ‘vade retro, satana!’

No, it’s not a compatibility issue. As said, no single data point is useful: Excel may truncate the entered value to 15 significant digits before processing, and that doesn’t make it “correct”. The error here is again in the range of 1 ULP, and so is OK. Citations of random thoughts without context are just off-topic, and a logical fallacy of appeal to authority.

@mikekaganski: again, let me clarify my thoughts: ‘Compatibility’ between different programs usable and used for the same task is the quality of processing the same input to the same result. If the results are different, compatibility is broken. common sense?
If compatibility is not given and one considers which program should be changed … then the one that deviates more from school mathematics. my sense, would you share it?
If such changes inject new deviations in other areas, that can also lead to accepting being ‘better’ at the other task. If this is so in this case, you are of course right, but this would have to be shown and justified.
And even then I might still consider whether there is a path to a mathematically correct solution that does not induce other problems.
Unless you know and show! that this is not possible, or that it is not possible with a tolerable effort.
The ‘sense’ of that citation is context-independent - IMHO, and its ‘point’ is rounding, thus on topic - IMHO