calc: are decimal-correct calculations possible?

hello @mikekaganski, and thanks for still bearing with me …
Wrong. They all use the same first part.
no, they calculate a six-digit random integer (A) plus a 9-digit random decimal fraction (B), with the result in (C), and work on with that somewhat random value, showing that in calc both col. C minus col. E is off, C-(C-E)!=E, and C minus the ‘dec-ULP’ is off, C-(C-J)!=J (a sketch of the first pattern follows at the end of this post),
double closest to A
yes, there is no other option in calc, but - imho - calc should try to simulate the correct ‘real world’ calculation, it IS! already done in other cases, see ‘by scaling to 2^-48 and thus …’,
once you computed A2, you can no longer ...
yes, for that reason a correct result in each step! is necessary,
"SUM(A; C; -A)"
fails - in calc - to produce school-math output for school-math operands, just as ‘-’ does,
give result of a different operation
they give the result of subtracting the real values the cells should represent, according to the user’s input; that’s somewhat meaningful, isn’t it?
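a minimal sketch of the first pattern, C-(C-E)!=E, done in Python doubles (the same IEEE 754 binary64 calc uses); the A and B values here are only made-up examples, not the sheet’s random numbers:

```python
# hypothetical stand-ins for the sheet's random A and B values
a = 285932.0         # six-digit integer (like column A)
b = 0.123456789      # nine-digit decimal fraction (like column B)
c = a + b            # column C: the somewhat random working value
e = 1.100000001      # column E

diff = c - e         # what a cell holding =C-E stores
back = c - diff      # mathematically this should be E again
print(repr(back))    # typically close to, but not exactly, 1.100000001
print(back == e)     # typically False: cancellation lost E's low-order digits
```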
… continued …

They all use the 1,100000001 (E25).

As I said, they use the same part. (I agree that using the word “first” was misleading, since I meant that E25 was above the other numbers, but it was indeed “last” in the formula.)

And that part was carefully chosen.

I see that you consider pi an “abnormal” case. Fine then. Let me use 1/3. Is that so abnormal?

Put =1,1+1/3*10^-8 into E25 and tell me your observations. Then let’s continue discussing your claims about your method giving a better result for the general case.
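For reference, a rough Python sketch of what that value looks like as a double, and what snapping the result to 15 significant decimal digits (my reading of the proposed rounding) would do to it:

```python
e25 = 1.1 + (1/3) * 1e-8        # the suggested E25 value, as a double
print(repr(e25))                # roughly 1.100000003333333...
snapped = float(f"{e25:.15g}")  # round to 15 significant decimal digits
print(repr(snapped))            # 1.10000000333333
print(abs(snapped - e25))       # about 3e-15 -- much larger than the double's own ~2e-16 error
```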

I wonder how substantiated your claim is that performing calculations with numbers having significant digits in the range of 10^5 to 10^-9 (as in your summation of 285932.xxx and 1.100000001) is typical of users who only care about decimal precision, as opposed to users who need the most out of the number range, where most numbers are not “regular” decimals.

biggest issue first
let’s think of engineering use,
astronomic: calculating two star positions with 17-digit accuracy and then having a 35% error in their relative position from ‘not correcting cancellation’ is worse than having 15-digit accuracy for the positions and appr. precision in the subtraction,
length calculation: subtracting two distances with a cancellation error of 0.005% is a bigger failure than measuring them with a roller whose circumference is off by a factor of 10^-17 (a small sketch after this list illustrates the effect),
see: i don’t argue against being able to keep accuracy for special values, i’d like to try to combine both,
see: i don’t argue for anything that hasn’t already been done in calc in a much more brutal style (scaling to 2^-48 …), i argue to pull that out, as it clutters precision, and replace it with something meaningful: targeted rounding,
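a small sketch of that cancellation effect in Python doubles; the two ‘positions’ below are made-up example values, not real measurements:

```python
# hypothetical example: two 'positions' that agree in their first ten digits
p1 = 1234567890.123456
p2 = 1234567890.123123
d = p1 - p2                          # intended difference: 0.000333
print(repr(d))                       # roughly 0.00033307..., not 0.000333
print(abs(d - 0.000333) / 0.000333)  # relative error roughly 2e-4, vs ~1e-16 for each input
```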

trying 1/3 - yes, you are right, you have now chosen a value which is problematic for my formula, but! that value is intentionally outside the range of ‘15-digit decimals’, and that is what is representable by IEEE 754 doubles; they are not! 16-digit safe, i recently had a case where they weren’t even 15-digit safe, but that might have been a fault of calc’s rounding … will recheck later,
general case - what about talking about the ‘common case’ first,
my claim and 'range' - imho it would be good if calc would handle 15 digits correctly and then extend to more where possible, i’m not targeting a special use,
preserving special values - good … for a long time i would have tried to round one decimal place less, resp. tried to find a threshold between ‘probably an fp-artifact, can be rounded away’ and ‘too big for an artifact, could be meaningful content’ which at the same time fulfills ‘if a finer - irrational - value is rounded, then only by very little’; the ‘4-bit truncation’ blocks me …

1/3 - ‘fractional math’ … we don’t have that in decimals, decimals are a subset of fractionals, we don’t have that in binaries either, they are another subset of fractionals sharing some (few!) values with decimals,
fractional math or roots other than 2 would have been more powerful / correct, but they were too expensive some time ago, and now we are stuck with a wide field of installed hardware and software limited to IEEE doubles …
my first claim was - would be - use better datatypes, e.g. the IEEE 754-2008 decimal types, you said ‘no! - performance!’,
improving the ’754 doubles is only my ‘plan B’; i think i have shown that there is room for that, and would like to continue; for meaningful experiments i would have liked to switch off the 4-bit truncation,
ryū - remember, the man is good, but it’s strange that it took so long until someone came around the corner with it …

but! that value is intentionally outside the range of ‘15-digit decimals’

I don’t know what you want to tell by that. And I don’t know why you imagine there’s “15 digit decimal” threshold to use. We use 53 bit mantissa. I don’t care about decimal digits (that’s just a representation, and people often confuse numbers and their representations, as you constantly do). When I have 1/3, I want it to be as precise as possible. You see cases when IEEE doubles introduce problems into “decimal” numbers. Yes, base-2 numbers can only correctly represent combinations of powers of 2, while base-10 may correctly represent combinations of powers of 2 and 5. Two times more representable numbers in the whole rational number range, right? And you ignore that combinations of powers of 2 and 5 represent a tiny fraction of all possible rational numbers (there are powers of 3, 7, 11, …); and the rational numbers constitute a tiny fraction of real numbers.
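A small illustration of the representability point, as a Python sketch (Fraction shows the exact binary fraction a double actually stores):

```python
from fractions import Fraction

print(Fraction(0.5))   # 1/2 -- denominator a power of two, so 0.5 is stored exactly
print(Fraction(0.1))   # 3602879701896397/36028797018963968 -- the nearest binary fraction to 0.1
print(Fraction(1/3))   # 6004799503160661/18014398509481984 -- the nearest binary fraction to 1/3
```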

You try to improve something in “school math”, limiting “school math” to decimals. 1/7 is not school math, according to you; not something feasible in day-to-day math. You supply a sample calculation of 10^5 plus 10^-9, and then talk about the “common case” - something you have no idea about. Do you claim you know what the “common” case is? Please share the stats about cases.

Two times more representable numbers

Of course I’m wrong there; not two times. The actual ratio is of course much larger. Yet this doesn’t change things. The repertoire of numbers representable in base 10 is a tiny fraction of the rationals.

1/3 - ‘fractional math’ … we don’t have that in decimals, decimals are a subset of fractionals, we don’t have that in binaries either, they are another subset of fractionals sharing some (few!) values with decimals

… and what you try to do is to make the closest possible (imperfect) representation of those much worse. So the reasonable precision of dealing with those numbers that exists now would be destroyed. You seem not to understand that at all.

@mikekaganski: that’s the point, i understand humans and the decimals they feed into a computer as ‘the thing’, and the computer and software … as a tool to help them solve their - mostly decimal - tasks; you understand the computer and the IEEE doubles as their own world, and people have to learn to live with it; we won’t get this together, i have the feeling that plenty of users, some devs and most people who complain about errors in calc see it from my pov,
15 decimal digits is the claimed ‘range’ where 64-bit doubles survive a decimal → binary → decimal round trip, and thus the range where reliable math is possible (it varies depending on how the decimal ranges line up with the binary ones; 15 digits is the lower limit), a short sketch below illustrates the round trip,
school math with fractions is more powerful than decimal math, but most humans use decimals after school; if calc could calculate better with fractions I would use and recommend this immediately, will provide the sheet with user-selectable ranges for the operands shortly …
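the round-trip claim from above as a short Python sketch (Python floats are the same IEEE 754 doubles calc uses):

```python
x = 1.23456789012345        # 15 significant decimal digits
print(f"{x:.15g}")          # 1.23456789012345 -> a 15-digit decimal survives the round trip

a = 9007199254740992.0      # 2**53, a 16-digit decimal integer
b = 9007199254740993.0      # 2**53 + 1, a different 16-digit decimal integer
print(a == b)               # True -> at 16 digits, distinct decimals can land on the same double
```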

@ajlittoz: ‘In a way, you’re defining an algorithm over the set.’ - yes, maybe that describes my understanding and approach: IEEE doubles are a subset of values, and a math works inside them, except for a few edge cases at underflow or overflow, NaNs and so on, and at cancellation, of which cancellation causes most of the irritation for users,
if one now takes from this set the ‘subset’ which corresponds to the ‘closest representations’ of decimal numbers with 15 significant digits, one can define a mathematics over it which computes somewhat less exactly with irrational values, but within its definition range the artifacts from representation inaccuracies, operations and !cancellation can largely be eliminated by rounding (limited to an ULP of the result!?),
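as a rough sketch only - not calc code and not my patch - such ‘targeted rounding’ could be read as snapping every result to its nearest 15-significant-digit decimal:

```python
def snap15(x: float) -> float:
    """Round x to 15 significant decimal digits (rough sketch of 'targeted rounding')."""
    return float(f"{x:.15g}")

print(1.1 - 1.0)                 # 0.10000000000000009 -> raw double result, representation artifact
print(snap15(1.1 - 1.0))         # 0.1                 -> artifact rounded away
print(snap15(1.1 + (1/3)*1e-8))  # trade-off: values finer than 15 digits lose their tail here
```

(the last line shows the trade-off @mikekaganski points at: anything finer than 15 decimal digits gets clipped,)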

this i would see as the right step to make calc a tool for users to solve everyday tasks, instead of a binary adventure of guessing and researching why - from the user’s point of view - obviously wrong results are called correct,

@mikekaganski: would you consider re-testing your ‘Pythagoras-pi()’ sample with factors of 3*pi() in col. B, and then re-thinking whether this special case of a calculation with an irrational value that holds ‘by chance!’ is a justifying argument to block attempts to make all! calculations with qualified 16-digit decimal values correct?

@newbie-02: would you ever consider re-thinking whether it is reasonable to stop spreading lies? Your comment claims that I am blocking something. In fact, I asked you to send your changes to review in gerrit, multiple times. The net result is just blahblahblah, and silly questions like “I’m doing something in source files, I don’t show you but describe in words as if you could get something useful out of it, tell me what specifically should I change”, which I decided to ignore until you follow the advice to use gerrit to discuss stuff.

i’m in the process of reviewing and stabilizing my changes and sorting out garbage, and would like to present them in a properly working state to avoid such simple counterarguments as your ‘Pythagoras-pi()’ problem. in these steps i found it failing with my patches, but failing in standard calc too (with a factor of 3*pi()). thus if you insist that your sample must hold, it’s useless - at this state - to look at my work. if you reconsider and allow your sample to take small harm from something which improves other results, it makes sense to proceed.
‘spreading lies’? - i didn’t,
‘blocking something’? - yes, you blocked my work with the ‘in sheet rounding’ and declared it nonsense based on the Pythagoras-pi() sample, pretending calc holds up for scientific calculations with irrational numbers - which is not true ‘in general’, and ignoring that standard calc already uses massive result rounding in plenty of cases.

yes, you blocked my work with the ‘in sheet rounding’

You just did that again. You have asked for discussion, and I provided you with an argument. If you meant “please see how awesome I am, everyone”, and didn’t expect disagreements in answers, then it’s funny. Otherwise, any counter-argument is not a blocker, but just some data to think about, and possibly fix.

Posting unfinished stuff to gerrit is the only way to discuss code changes, e.g. asking how to best fix some code problem. It doesn’t need to be finished.

hello @mikekaganski,
if / when you answer about ‘the matter’ you are usually very good; when talking about the ‘how to do’ you are often too rough towards others and too thin-skinned about yourself, and we start endless debates.
to avoid that, asking only ‘about the matter’: could an improvement be acceptable if it fails on the ‘Pythagoras-pi()’ problem, given that standard calc does similar just with different factors, or is that a blocker?

I suppose that it would be up to @erAck to decide, not me.

I’m reluctant to get involved in this pointless discussion!

As an engineer, I’ve long since categorized mathematical principles as belonging to the ideological ghetto (ideology) that cannot even begin to explain physical, i.e., natural events. With endless effort, attempts are made nonetheless, which are bound to fail.

When approaching technology, one must employ the highest levels of mathematics: defining tolerance ranges, maintaining unknown variables, describing probabilities, investigating deviations, evaluating cluster formations as well as uniqueness. In contrast, mathematics is an attempt to simplify something that is naturally, technically and physically complex. For example, a circle is an extreme view of an ellipse at a single, infinitely precise angle. This cannot exist due to external influences and internal changes. The circle, the square, the cylinder, etc., are idealized, unnatural figures. Numbers, too, are merely an auxiliary construct to effect something that lies on a mentally delimited straight line (rules, idealisms, theories). In doing so, one leaves our real world and wanders through constructed worlds. It is a mirror labyrinth: the entrance clearly visible, the exits imagined in advance, and the paths obscured. Getting lost is tolerated. Accepting help - and seduction - is accepted.

The technician narrows down an expected result, defines the tolerated range in advance, and looks for outliers and plausible cloud formations in a host of numerous tests, measurements, and experiments. This is rarely learned, since this approach comprises everything from theoretical foundations to advanced experience and the communications of others. And it involves many mishaps. Strictly speaking, this approach exceeds the limited capabilities of any human brain, which is why any ideology easily takes root there. Like any mathematical construct/mistruct 🫣

hello @koyotak,
nice idea, however off track - from my POV. We have math, and it’s consistent in itself. We use math to compute / predict physics and the real world; it works quite well as long as we account for our imprecise knowledge and our imprecise measurements.
Math requires the “unlimited”, in particular unlimited fractions; in the step down to limited sizes of paper, limited denominators in decimal math, limited digits on a desk calculator … we lose the qualification for exact arithmetic, and need to learn to live with imprecision / rounding within the system of math itself. It - often - works while staying with the decimal system we have been trained on in school; it becomes difficult once we try to substitute decimal fractions with approximated binary fractions, a different rounding principle … here it overstretches the brains of quite some people, who then like to become arrogant with “read Goldberg” … whom they themselves don’t understand …
Here is the main issue for calc … and other spreadsheets … they opted for binary datatypes regarding “speed speed speed” (in a time when speed was relevant), and neglect A.) the aftermath and harm in discussions like this, B.) alternative datatypes which are more “human friendly”, and C.) that those alternative datatypes are quicker in the costly operations of input / output from / to decimal strings … This can happen to people who understand that 0.2 + 0.1 != 0.3 (a tiny sketch below shows the raw values), but don’t understand why, and why it is irritating to other people, and who nevertheless try to gloss over it and are only partially successful in doing so …
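a tiny Python sketch of those raw values (Decimal prints the exact value of the stored double):

```python
from decimal import Decimal

print(0.2 + 0.1 == 0.3)    # False
print(Decimal(0.2 + 0.1))  # 0.3000000000000000444089209850062616169452667236328125
print(Decimal(0.3))        # 0.299999999999999988897769753748434595763683319091796875
```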
Building a spreadsheet can work once we have a clear concept. ‘Mimic Excel’ is clearly expressed, but still unclear, as nobody knows the concepts of Excel. Introducing a new math, and partly papering it over towards decimal, doesn’t work out, as we have seen for decades; mimicking fractions could work to a good degree, but we don’t have unlimited space and most people are overtaxed by them; mimicking “decimal” could work out and match people’s expectations … however calc developers aren’t - yet - ready or skilled to do so …

Just use software aimed at doing math:

Calc uses what the typical processor’s (binary IEEE) representation of numbers supports.
But I don’t think it is useful to extend this long thread after 4 years, as nothing new has happened (and I promise: nothing will happen in the next 5 years).

“not extend” - agreed, though i think someone else restarted it,
“nothing will happen” - however I’m sure I’ll continue to be asked every other day to check whether the reported bugs have vanished on their own …

Don’t tell me, tell others - a warning banner at the start of Calc:
“There is nothing exact in floating-point calculations in Calc, …” - Mike Kaganski, comment #5 in “Calc Round Down is Rounding Up for some values” (bug 154792), documentfoundation.org, 2023. [Online]. Retrieved: 2025-05-13.
And tell them that those programs are suited for single calculations, not to serve as spreadsheets (I didn’t check all of them).