calc: are decimal-correct calculations possible?

hello guys, let me once again say thanks for your help!
and please stay with me a little further …

  • ‘rounding’ - i didn’t want to round 0.5 any further; i want to check whether i have such operands, and then be sure that rounding the result does no harm,
  • ‘the computing of A2’ is ‘not so bad’ (one might discuss the rounding for x5 in A1); bad is the result of the calculation in A4. ‘weitz’ can do better, ‘rawsubtract’ can do better (see A5), and calc can compute 16-digit resolution outside the range -/+22 in A1. precision is not! lost in ‘IEEE’ or ‘doubles’ but in calc’s implementation of subtraction,
  • ‘It is a matter of primality’ - i won’t say it’s trivial, but i know about it: about 80% of 1-decimal-digit figures lack enough 5s in the numerator to cancel out the 5s in the denominator, and already 99.84% of 4-decimal-digit figures; they stay ‘odd fractions’ for binary representation and become ‘endless’ …
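a small python sketch verifying those percentages (only the standard fractions module, nothing calc-specific):

```python
from fractions import Fraction

def non_terminating_share(digits):
    """Share of d-digit decimal fractions n/10^d whose reduced
    denominator keeps a factor 5, i.e. which have no finite
    binary representation."""
    denom = 10 ** digits
    endless = sum(1 for n in range(denom)
                  if Fraction(n, denom).denominator % 5 == 0)
    return 100.0 * endless / denom

print(non_terminating_share(1))   # 80.0  (only .0 and .5 terminate)
print(non_terminating_share(4))   # 99.84
```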
    … continued in next comment …

continued:

  • another point: does calc actually use my / the FPU? @ajlittoz wrote about ‘80-bit’ and i’d read it in plenty of other places …
    the multiplications 0.07 times 10^17 (7000000000000001.00) and 0.07 times 10^16 (700000000000000.1250) give ‘strange’ results,
    this evolves from the rounded-up representation of 0.07, with ~111101100 at the end of the mantissa (~111101011 | 10000 is cut off at the ‘|’ and rounded up because of the following 1). so far understandable, but with a FPU calculating with 80 bits the error should occur - if at all - much further ‘behind’, and the result should be clean in the 52nd bit … according to what weitz calculates in 128-bit mode, ‘round up by 1’ events do occur there, but only in the last of the 112? bits … calc produces results as if such rounding affects bit 52???
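for comparison: plain 64-bit doubles, with no 80-bit intermediates, give exactly these results - a python check (python floats are IEEE 754 doubles):

```python
from decimal import Decimal

x = 0.07
print(Decimal(x))          # exact value of the nearest double, rounded UP
                           # in the last bit: 0.0700000000000000066613...
print(Decimal(x * 1e17))   # 7000000000000001     (doubles spaced 1.0 here)
print(Decimal(x * 1e16))   # 700000000000000.125  (doubles spaced 0.125 here)
```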

@mikekaganski: you wrote ‘Your idea is not new; also it is not correct.’ … do you remember if there are any discussions about it i can find on the web?
and … for me it looks promising how many of the ‘classic fails’ could be solved that way; of course it would be good to ‘not harm’ meaningful other values, and i do! have an idea,
even if it was tested in the past and didn’t turn out useful … some things have changed: calc’s rounding is far from good but better than in the past, you found a better math library for ‘string to double’, hardware performance has exploded … i’d like to give it a new try …
perhaps - that’s a proposal - some of the ‘tricks’ implemented in the past (to cover weaknesses?), such as 4-bit truncation, shortened display for fractional values and so on, are now obsolete and could / should be undone? see e.g. calc: wrong calculation? would like a recheck,

What makes me feel that your effort is wasted is that you seem to ignore that what you try to do is a thoroughly developed mathematical science, with proofs of the theorems etc. It is explained in the well-known Wikipedia article, and you must realize that before you come up with a brilliant idea in such a practical field, with a vast amount of intelligence put into it under a strictly scientific approach, you must master the science behind it. So what you do again and again is plain wrong: without mastering numerical analysis, you waste time. You resemble the inventors of perpetual motion machines, who keep sending “inventions” claiming to break thermodynamics, while the scientists who (unlike me) value their time just have a templated response to such claims, sent without reading.

hello @mikekaganski, and excuse me if I’m stubborn (that was often necessary in history to make progress),
I read the wikipedia article, not for the first time, and it does not tell me that there is ‘proof’ that something cannot be done; it tells me that one has to make an effort to get good results with the given - limited - possibilities,
the discrepancy that computers make mistakes with simple subtractions, while computers flew men to the moon 50 years ago, is too blatant for me to accept as the ‘norm’,
I am missing a reference to possible earlier discussions about ‘targeted rounding’,
… continued …

… continued …
I lack an understanding of why calc uses ‘untargeted rounding’ in several places (‘by scaling to 2^-48 and thus rounding off the least significant 4 bits’ - @erAck in https://ask.libreoffice.org/en/question/274247/calc-wrong-calculation-would-like-a-recheck/, and ‘By rounding to 12 significant digits before calling the round-down or -up function, most of these unexpected roundings are eliminated’ - @Winfried1 Donkers (retired) in https://git.libreoffice.org/core/+/edcbe8c4e02a67c74ec6f85f28899431dbfa0765^!) while you persistently argue against targeted rounding,
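a rough illustration of that ‘least significant 4 bits’ mechanism (python; a Veltkamp-style split as a stand-in, not! the actual calc code):

```python
def drop4(x: float) -> float:
    """Round away the lowest 4 mantissa bits of a double via a
    Veltkamp-style split; a rough stand-in for the quoted 2^48
    scaling, not the actual calc implementation."""
    c = x * 17.0            # 17 = 2**4 + 1
    return c - (c - x)      # x rounded to its top ~49 bits

print(0.1 + 0.2 == 0.3)                 # False with raw doubles
print(drop4(0.1 + 0.2) == drop4(0.3))   # True: the artefact is masked ...
print(repr(drop4(0.3)))                 # ... but 0.3's own low bits are
                                        # damaged too (slightly below 0.3)
```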
I can show that for [some|most] originally decimal operands better (decimal-correct) results are possible (convert to decimal and round based on the decimal places of the operands; this only costs excessive effort, which I would like to reduce),
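a sketch of the method (hypothetical helper, not calc code):

```python
def dec_subtract(a_str: str, b_str: str) -> float:
    """Hypothetical helper, not calc code: subtract in doubles, then
    round the result to the larger decimal-place count of the two
    operands as the user entered them."""
    places = max(len(a_str.partition('.')[2]),
                 len(b_str.partition('.')[2]))
    return round(float(a_str) - float(b_str), places)

print(repr(0.3 - 0.2))                   # 0.09999999999999998
print(repr(dec_subtract('0.3', '0.2')))  # 0.1 -- decimal-correct
```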
… continued …

… continued …
I think you can not! prove that there is no way to achieve such better results with less effort … if you can prove it I will gladly follow, but the wikipedia article is not enough,
I have practically proved the above claim and correctly calculated all fp-precision complaints I could trace back to cancellation, as well as a lot of random numbers. you have so far mentioned one! special case: a special treatment of irrational values like pi(). this may be a useful thing, although it exceeds the definition frame of ‘15 decimal digits’, but it could be possible? to ‘spare’ these values and to round away the occurring fp artefacts for other values - which occur much more often - if one can distinguish the values,
… continued …

… continued …
spontaneously I would suggest / like to try: 16th significant decimal 0 → round; 16th significant decimal != 0 → don’t round (better: shrink the range so that xy91 to x(y+1)09 will be rounded while xy10 to xy90 won’t, or similar; a sketch follows below),
more general: decide whether to round a result according to its deviation from what the rounded value would be. this would eliminate (most?) fp artefacts, spare most (90 percent) of the irrational values, and those it would harm would be affected only in the 17th significant decimal place - and what can be extracted from the 17th place of doubles is questionable anyway,
besides the understanding that the 17th digit of pi() has 4 times better quality than the 17th digit in the range 8.0 to 9.99~,
i don’t mind that what i’m presenting up to now is ‘less or similar’ to former clues; i just need to get it working correctly in calc to have better tools to prove my ‘better idea’ …
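a hypothetical sketch of that heuristic in python, just to make the proposal concrete (not calc code):

```python
from decimal import Decimal

def targeted_round(x: float, sig: int = 15) -> float:
    """Hypothetical sketch of the proposed heuristic: look at the
    16th/17th significant decimal digits of the double's exact value;
    if they fall within ...91 to ...09 of a 'clean' 15-digit value,
    treat the tail as an fp artefact and round to 15 significant
    digits, otherwise leave the value untouched."""
    if x == 0.0:
        return x
    d = Decimal(x)
    q = d.scaleb(-d.adjusted())               # normalise to d.ddd... form
    digits = ''.join(ch for ch in str(q) if ch.isdigit())
    tail = int(digits[sig:sig + 2].ljust(2, '0'))  # 16th/17th sig. digits
    if tail >= 91 or tail <= 9:
        return float(f'{x:.{sig - 1}e}')      # round to 15 significant digits
    return x

print(repr(0.1 + 0.2))                         # 0.30000000000000004
print(repr(targeted_round(0.1 + 0.2)))         # 0.3
print(repr(targeted_round(3.141592653589793))) # pi survives untouched
```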

You just ignore everything.

One more time. I didn’t tell you that that article claims “something is impossible” (although it is impossible to “improve” the accuracy of a single floating-point operation; it is often possible to improve a chain of calculations). What I wrote above is that to work on that, and to understand what you are doing, you first need to learn. You do not learn. Try to read the proofs of the Ryū algorithm. Until you manage to provide a proof of comparable quality, you will only be an illiterate amateur. Stubbornness is not a substitute for education.

@newbie-02: your suggestion for rounding xy91 to x(y+1)09 to x(y+1)00 assumes that all “digits” in the range 91-09 are the result of computation errors and that the result should be exactly the decimal number x(y+1)00. This is what I consider a flaw in your reasoning.

Setting apart the issue of converting between external (human-readable) and internal representation, the only way to estimate the magnitude of computation error is to conduct all your computations through “interval arithmetic” (upper and lower bounds) with specific IEEE-754 rounding on each bound. This means you have twice as many operations (which does not really matter) and a very careful elaboration of your formulae.
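A minimal sketch of the idea (Python; math.nextafter widens each bound outward by one ulp, standing in for switching the IEEE-754 rounding mode per bound):

```python
import math

def iv_sub(a, b):
    """Interval subtraction with outward widening: math.nextafter stands
    in for directed IEEE-754 rounding on each bound."""
    return (math.nextafter(a[0] - b[1], -math.inf),
            math.nextafter(a[1] - b[0], math.inf))

A = (0.3, 0.3)                     # degenerate interval: the double nearest 0.3
B = (0.1, 0.1)
C = (0.2, 0.2)
lo, hi = iv_sub(iv_sub(A, B), C)   # (0.3 - 0.1) - 0.2
print(lo, hi)                      # a tiny interval around 0: its width
                                   # bounds the accumulated rounding error
```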

These “precautions” are extensively covered by numerical analysis. This science was very important in the paper-and-pencil era. I was initially educated with slide rules (no pocket calculators then), and this makes you aware of many issues.

Computers are no different, just faster and with more digits.

@ajlittoz: :slight_smile:
no, the suggestion tries to balance between two evils: one is to leave ‘cancellation’ unprocessed, the other is to damage ‘finer values’ like irrational numbers by rounding,
according to my understanding pi() has no claim to be represented better than 15 digits, but if you want to do that, or my understanding is wrong, there might be a middle way where rounding is used to remove deviations from cancellation, but only so carefully that the damage to e.g. pi() is limited to the 17th decimal place and thus becomes marginal compared to the better subtraction results … old wisdom about wars, pandemics and other dangers: the greatest evil should be fought first, the greatest danger eliminated first, the greatest success sought first,
i don’t see that respected in calc’s handling of cancellation,
and don’t worry, i’m not wasting your and my time exchanging nonsense; i have a good idea in my quiver, and to check its validity i need calc to get a bit closer to correct results

let me try another approach to explain …
i can’t! prove to calculate 100 percent correctly, just as little as ‘you’ or calc, but i show in the sheet - on the basis of 1000 random! examples with up to 15 decimal digits, and of course you can do the same for other distributions of digits between integer and fractional part - that calc makes errors with the simple calculation ‘=A-(A-C)’ not resulting in C. and not only a few: about 715 of 1000 tries are completely wrong → result zero (imho resulting from the previously mentioned ‘untargeted roundings’), and the rest have smaller errors. not one! result is correct, while all! ‘my’ results are absolutely correct (columns K, L and M),
where is your motivation to accept or defend such blatant errors within the 15-digit definition range, just to avoid an error in a complex calculation that ‘overuses’ the 15-digit range?
that’s like not rescuing a child from a creek because his sweater could get damaged … ???
‘quality’ - compare columns F-G with K-L,
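the experiment can be approximated outside calc (python; plain doubles lack calc’s extra truncation, so the raw errors here are tiny deviations rather than the zero results in the sheet):

```python
import random

random.seed(42)
e = 1.100000001                  # the small operand (E25 in the sheet)
raw_fails = fixed_fails = 0
for _ in range(1000):
    # six-digit random integer plus a nine-digit random decimal fraction
    c = random.randrange(100000, 1000000) + random.randrange(10**9) / 10**9
    raw = c - (c - e)            # the sheet's test: should give back e
    raw_fails += (raw != e)
    fixed_fails += (round(raw, 9) != e)   # targeted rounding to 9 places
print(raw_fails, 'raw fails;', fixed_fails, 'fails after targeted rounding')
```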

i can’t! prove to calculate 100percent correctly

Ok. But you could try to provide some evidence for at least some of the numerous claims that you make.

on the basis of 1000 random! examples

Wrong. They all use the same first part.

calc makes errors with the simple calculation ‘=A-(A-C)’

Wrong. What you calculate is not “=A-(A-C)”, but the double closest to A, minus the double closest to the result of subtracting the double closest to C from the double closest to A. And here it is utterly important to realize what @ajlittoz told you: “once you computed A2, you can no longer compensate”. On the other hand, as I told you, it is often possible to improve a series of calculations; Kahan summation would improve the “SUM(A; C; -A)”.
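For illustration, a sketch of compensated summation (Neumaier’s refinement of Kahan’s idea; classic Kahan can still lose bits when a later term exceeds the running total; the sample values are assumed):

```python
def neumaier_sum(values):
    """Compensated summation (Neumaier's variant of Kahan's method):
    a second variable carries the low-order bits each addition loses."""
    total = 0.0
    comp = 0.0
    for v in values:
        t = total + v
        if abs(total) >= abs(v):
            comp += (total - t) + v    # low bits of v were lost
        else:
            comp += (v - t) + total    # low bits of total were lost
        total = t
    return total + comp

a, c = 285932.123456789, 1.100000001   # assumed sample values
print(repr(a + c - a))                 # naive: an fp artefact appears
print(repr(neumaier_sum([a, c, -a])))  # compensated: recovers c
```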

all! ‘my’ results are absolutely correct

Wrong. They are not correct, they just match your expectations. In fact, they are all wrong, because they give the result of a different operation than the subtraction of two cells.

old wisdom about wars, pandemics, other dangers: the greatest evil should be fought first …

So this is the claim that Calc should only target users of calculators? And any scientist, or any user who calculates financial stuff like mortgages using advanced arithmetic, is a second-class user?

One person in one of the bugs wanted Casio precision. You do the same. Hand calculators compute with somewhat greater precision, then round to 8 digits. As @ajlittoz told you, “your suggestion for rounding … assumes that all “digits” in range 91-09 are the result of computation errors”. You are not “solving some greater evils first”; you are making the tool absolutely unusable for anything other than adding/subtracting manually entered round decimal numbers, and so make it a tool for simple users only. There is no way to “solve” the other part (that you broke this way) later, other than to remove your change.

The problem with your suggested fix is that you decompose a high-level sequence of operations into elementary dyadic operations and want to apply “improved rounding” to each. All known algorithms tend to keep the ultimate goal in mind, so that the sequence can be reordered for minimal harm. Playing with the elementary operations increases both method and truncation errors. The algorithms focus on the method part first (and the longer the sequence of interest, the better, so that minimal information about it is lost). Acting on truncation only is much more difficult. From the FPU point of view, all numbers are equally likely. Your claim is based on a set of specific numbers YOU know to possess some property. The FPU does not know it and can’t take it into account. The only way is obviously to manually include this property into the sequence of operations, but this makes the computation unique to the set and can’t be generalized. In a way, you’re defining an algorithm over the set.
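A trivial illustration of reordering for minimal harm (Python; values chosen for demonstration):

```python
print(sum([1e16, 1.0, -1e16]))    # 0.0 -- the 1.0 is absorbed by the big
                                  # term before the cancellation happens
print(sum([1e16, -1e16, 1.0]))    # 1.0 -- the same terms, reordered so the
                                  # huge values cancel first
```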

hello @mikekaganski, and thanks that you still bear with me …
Wrong. They all use the same first part.
no. they calculate a six-digit random integer (A) plus a 9-digit random decimal fraction (B), with the result in (C), and work on with that somewhat-random value, showing that col. C minus col. E is odd in calc, C-(C-E)!=E, as well as C minus ‘dec-ULP’: C-(C-J)!=J,
double closest to A
yes, there is no other chance in calc, but - imho - calc should try to simulate the correct ‘real world’ calculation; it IS! done in other cases, see ‘by scaling to 2^-48 and thus …’,
once you computed A2, you can no longer ...
yes, and for that reason a correct result in each step! is necessary,
"SUM(A; C; -A)"
fails - in calc - to produce school-math output for school-math operands, just as ‘-’ does,
give result of a different operation
they give the result of the subtraction of the real values the cells should represent, according to the user’s input; that’s somehow meaningful, isn’t it?
… continued …

They all use the 1.100000001 (E25).

As I said, they use the same part. (I agree that using the word “first” was misleading; I meant that E25 was above the other numbers, though it indeed was “last” in the formula.)

And that part was carefully chosen.

I see that you consider pi an “abnormal” case. Fine then. Let me use 1/3. Is it that abnormal?

Put =1.1+1/3*10^-8 into E25 and tell me your observations. Then let’s continue discussing your claims about your method giving better results for the general case.
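For reference, here is what plain doubles do with that value, combined with the decimal-places rounding you sketched (Python; the large operand is assumed from your sheet):

```python
a = 285932.123456789
e = 1.1 + (1/3) * 1e-8      # not a 'short' decimal: genuine digits past place 15
raw = a - (a - e)
print(repr(e))              # ~1.1000000033333334
print(repr(round(raw, 9)))  # 1.100000003 -- rounding to 9 places now destroys
                            # genuine content instead of removing an artefact
```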

I wonder how substantiated your claim is that performing calculations with numbers having significant digits in the range of 10^5 to 10^-9 (as in your summation of 285932.xxx and 1.100000001) is typical of users who only care about decimal precision, as opposed to users who need the most out of the number range, where most numbers are not “regular” decimals.

biggest issue first
let’s think of engineering use,
astronomic: calculating two star positions with 17-digit accuracy and then having a 35% fail in their relative position through ‘not correcting cancellation’ is worse than having 15-digit accuracy for the positions and appropriate precision in the subtraction,
length calculation: subtracting two distances with a cancellation fail of 0.005% is a bigger fail than measuring with a roller whose circumference is miscalculated by a factor of 10^-17,
see: i don’t argue against being able to keep accuracy for special values, i’d like to try to combine both,
see: i don’t argue for things which haven’t already been done in calc in a much more brutal style (scale to 2^-48 …); i argue to pull that out, as it clutters precision, and replace it with something meaningful: targeted rounding,

trying 1/3 - yes, you are right, now you have chosen a value which is problematic for my formula, but! that value is intentionally outside the range of ‘15-digit decimals’, and that’s what is representable by IEEE 754 doubles; they are not! 16-digit safe. i had a case recently where they weren’t even 15-digit safe, but that might have been a fault of calc’s rounding … will recheck later,
general case - what about talking about the ‘common case’ first?
my claim and 'range' - imho it would be good if calc handled 15 digits correctly and then extended to more where possible; i’m not targeting a special use,
sparing values - spare / good … i would have tried for a long time to round one decimal place less, resp. to find a threshold between ‘probably an fp artefact, can be rounded away’ and ‘too big for an artefact, could be meaningful content’, which at the same time fulfills ‘if a finer - irrational - value is rounded, then only very little’; the ‘4-bit truncation’ blocks me …

1/3 - ‘fractional math’ … we don’t have that in decimals; decimals are a subset of fractions. we don’t have it in binaries either; they are another subset of fractions, sharing some (few!) values with decimals,
fractional math, or roots other than 2, would have been more powerful / correct, but was too expensive some time ago, and now we are stuck with a wide field of installed hard- and software limited to IEEE doubles …
my first claim was - would be - to use better datatypes, e.g. the IEEE 754-2008 decimal types; you said ‘no! - performance!’,
improving ‘754’s is only my ‘plan B’. i think i have shown that there is room for that and would like to continue; for meaningful experiments i would like to switch off the 4-bit truncation,
ryū - remember, the man is good, but it’s strange that it took so long until someone came around the corner with it …
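and a taste of that ‘plan A’: python’s decimal module as a stand-in for IEEE 754-2008 decimal128 (base-10 arithmetic, here set to its 34 digits):

```python
from decimal import Decimal, getcontext

getcontext().prec = 34            # decimal128 carries 34 significant digits
a = Decimal('285932.123456789')
c = Decimal('1.100000001')
print(a - (a - c))                # 1.100000001 -- exact, no binary artefacts
print(Decimal(1) / 3)             # 0.3333... -- still rounds, just in base 10
```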