@Lupp: I was referring to your statement about the >2^53 range: ‘However, if the semantic dyadic exponent (order of magnitude) is >52, the represented number cannot be an exact integer if not additional/absent dyadic digits (-1st, -2nd, -3rd… to say behind the zeroth) are in some way assured to be 0.’ and comparing it with the cited statement about ‘single floats’,

@erAck: thanks for the explanation; I am afraid the chance that someone will change this in the foreseeable future is very small …

Quoting @newbie-02: “… i am afraid the chance that someone will change this in the foreseeable future is very small…”

Sorry, I still don’t understand what actually should be changed, and in what way.

You might provide a very **simple** sheet containing a **few** examples of numeric cells, **accompanied** by the formulas returning results that are “bad” as you see it, and by the results you expected or would like better.

The fact that there is no means to reasonably decide whether numbers >= 2^53 should be assumed to be “whole” (with some dyadic zeros behind the actually stored bits) is something you cannot change - except by providing additional information.

That numbers up to 2^53-1 are converted to decimal **as if** they were “whole numbers” also leaves it to the user to decide whether there actually was an integer result, or whether additional bits were shifted “out of sight” during a step of the calculations. (Operand underflow)
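Lupp's point about the 2^53 boundary can be checked directly; a minimal Python sketch (Python floats are IEEE 754 doubles; the constant name is my own):

```python
# The 2^53 boundary of IEEE 754 binary64, checked in Python.
LIMIT = 2 ** 53  # 9007199254740992

# Below the boundary, every integer has its own double:
assert float(LIMIT - 1) == LIMIT - 1

# At the boundary the spacing becomes 2, so LIMIT + 1 collapses
# onto LIMIT (round-half-to-even picks the even significand):
assert float(LIMIT + 1) == float(LIMIT)

# LIMIT + 2 is representable again, being an even number:
assert float(LIMIT + 2) == LIMIT + 2
```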

@Lupp: for the info please see the edited question; I couldn’t attach a sheet to a comment, sorry. It became somewhat more than ‘few’ cells, just concentrate on the red ones … thanks for your help and constructive criticism …

@erAck: ‘The display of IEEE 754 double integer values (up to ±(2^53-1)) is not rounded. Stop spreading this nonsense please.’ …

Since 2^53-1 is often given as ‘the greatest integer’, I used it to construct other integer values, e.g. the following, which Calc displays as:

```
2^53-2  - 9.007.199.254.740.989,00
2^53-7  - 9.007.199.254.740.986,00
2^53-11 - 9.007.199.254.740.980,00
2^53-16 - 9,007,199,254,740,977.00
```

which is wrong enough to suspect rounding,

the wrong values have a RAWSUBTRACT difference of ‘0’ between display and expected value, so the stored value shouldn’t be wrong; but the display is not correct, maybe not ‘rounded’ but ‘wrong’, it is not! exact …
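For what it's worth, the stored values can be cross-checked outside Calc; a quick Python sketch (also binary64), confirming that the subtractions themselves are exact and only the display can be off:

```python
# Each result lies below 2^53, where consecutive integers are
# exactly representable, so the subtraction introduces no error.
base = 2.0 ** 53  # 9007199254740992.0, exactly representable
for k in (2, 7, 11, 16):
    assert base - k == 2 ** 53 - k  # stored value is the true integer
```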

Sorry for the misunderstanding, but I stand by my assessment: such calculations are in need of improvement.

“but I stand by my assessment: such calculations are in need of improvement”

There is nothing to improve. This is a limitation of double-precision floating-point numbers.

The resolution of signed and unsigned integers is constant: it always equals 1. That means:

The integer-type number representation is capable of representing the smallest and largest integers (depending on the bit width of the actual integer type: 8 bit, 16 bit, 32 bit, 64 bit…), and all other integers between them.

But the integer type cannot represent decimal fractions.
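A small sketch of that contrast in Python (arbitrary-precision ints standing in for a wide integer type):

```python
# Integers keep unit resolution over their whole range ...
n = 2 ** 62 + 1  # would still fit a signed 64-bit integer
assert n - (n - 1) == 1

# ... while a double at the same magnitude cannot tell the two
# apart (the gap between adjacent doubles is 1024 there):
assert float(2 ** 62 + 1) == float(2 ** 62)
```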

In contrast, the floating-point number representation (single, double precision) has a dynamic resolution. It represents numbers of lower absolute value with better resolution, and those of higher absolute value with worse resolution.

For giant numbers, there may be a representation problem not only for decimal fractions, but even for the integer part of the number!
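That dynamic resolution can be made visible with Python's `math.ulp` (Python 3.9+), which returns the gap from a value to the next representable double:

```python
import math

assert math.ulp(1.0) == 2.0 ** -52   # ~2.2e-16: very fine near 1
assert math.ulp(2.0 ** 52) == 1.0    # only integers from here on
assert math.ulp(2.0 ** 53) == 2.0    # only even integers
assert math.ulp(2.0 ** 60) == 256.0  # gaps inside the integer part
```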

…continued:

This is the basic property of floating point number representation. This is an international standard, that LibreOffice developers cannot change.

```
2^53-2 - 9.007.199.254.740.989,00
2^53-7 - 9.007.199.254.740.986,00
2^53-11 - 9.007.199.254.740.980,00
2^53-16 - 9,007,199,254,740,977.00
```

If you do not append the .00 decimals, you’ll get

```
9,007,199,254,740,990
9,007,199,254,740,985
9,007,199,254,740,981
9,007,199,254,740,976
```

Not sure what can be done about those decimals cases.

Btw, I could reproduce the .00 cases only in some (not the latest) interim version of the to-be 7.0.3, not in master and not in 7.0.2.

@erAck: funny … other errors with ‘.0’, nearly no errors with ‘.000’, errors again with ‘.0000’ … I just wanted to see whether the integers are ‘clean’ and do not carry artefacts directly behind the ‘,’ …

sure: you can do more about it than I can,

repro: master, Win7 x64, build from 2020-10-15,

@Zizi64: please check your math and the more correct results @erAck provided; I doubt it’s an international standard to display values off by ‘one’ when a user selects two decimal digits …

@newbie-02: Disable your *Precision as shown* setting. Somehow that interferes there. Without it, the display with .00 does not change.

@erAck: … holy … big ball of mud. Thanks erAck, you are right … (and I myself wrote to others in the last days to be careful with that … dangerous thing … enabled it to check a proposal, forgot it, was trapped, lost time …)

The only good thing: we’ve learned that ‘precision as shown’ is buggy …

why is it always me to get caught by such exotic nonsense?

I wouldn’t even say that precision as shown is buggy. First, formatting that large numbers with decimals *makes no sense* as the double precision is integer 1 in this case; second, that number then is rounded to two decimals, which means things are shifted around (multiplied, rounded, divided) losing accuracy and then yielding a result within that magnitude’s precision. Of course we could inspect each number for magnitude if it even *can* be rounded without the result being less accurate than before, but… and then? Round less? Not round at all?
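The shifting-around erAck describes can be sketched hypothetically in Python (this is not Calc's actual code, just the scale-round-rescale idea applied to one of the reported numbers):

```python
# Rounding 2^53-7 to two decimals by scaling: the intermediate
# x * 100 lands far above 2^53, where the double grid is 128 wide,
# so it is already inexact before round() even runs.
x = 2.0 ** 53 - 7            # 9007199254740985.0, exact
scaled = x * 100.0           # true product 900719925474098500 is
                             # not on the grid and gets rounded
back = round(scaled) / 100.0
assert back == x + 1.0       # comes back off by one, as observed
```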

@erAck: ‘formatting that large numbers with decimals makes no sense’ - that’s correct, but it sounds a little like an accusation against the ‘unreasonable user’. It’s understandable from our situation, where we know which value we are playing with, but the user often doesn’t know beforehand what the result will be. He formats ‘numeric - two decimals’ and then wonders about the result. He can calm down when he checks with RAWSUBTRACT (thanks for that tool!) that the stored value is more correct, but actually he expects a correct display of the (best possible) correct result,

‘buggy’ - if randomly scattered results of the same calculation, and falsification of a (display of a) value by ‘better’! precision settings, are not buggy, then I don’t know what buggy is … mixing up the Moon and Saturn? Donald Trump? Buggy starts at the latest when RAWSUBTRACT sees no difference between 2^53-16 and 9.007.199.254.740.976,00 on either setting, but toggles between ‘1’ and ‘2’ against 9.007.199.254.740.978,00 …

Please read this article carefully, then you will understand the limitations of floating-point numbers:

IEEE-754

“Between 2⁵²=4,503,599,627,370,496 and 2⁵³=9,007,199,254,740,992 the representable numbers are exactly the integers. For the next range, from 2⁵³ to 2⁵⁴, everything is multiplied by 2, so the representable numbers are the even ones, etc. Conversely, for the previous range from 2⁵¹ to 2⁵², the spacing is 0.5, etc.”
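The quoted spacing rules can be verified directly with `math.nextafter` (Python 3.9+); a plain check, nothing LibreOffice-specific:

```python
import math

# spacing 1 between 2^52 and 2^53:
assert math.nextafter(2.0 ** 52, math.inf) - 2.0 ** 52 == 1.0
# spacing 2 between 2^53 and 2^54:
assert math.nextafter(2.0 ** 53, math.inf) - 2.0 ** 53 == 2.0
# spacing 0.5 between 2^51 and 2^52:
assert math.nextafter(2.0 ** 51, math.inf) - 2.0 ** 51 == 0.5
```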

The resolution issue of floating-point numbers is not a bug, but a limitation. It is independent of LibreOffice.

Of course it is possible to create a “double-double precision” number type, but if it is not standardized, it will be useless for others. And a “double-double precision” number has the same limitations too - just at even larger numbers.
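The “double-double” idea can be sketched with Knuth's two-sum trick, which captures the rounding error of an addition in a second double (a minimal illustration, not a full double-double arithmetic):

```python
def two_sum(a: float, b: float):
    """Return (s, e) with s = fl(a + b) and s + e == a + b exactly."""
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e

# 2^53 + 1 is not representable in one double ...
hi, lo = two_sum(2.0 ** 53, 1.0)
assert hi == 2.0 ** 53  # the 1.0 was rounded away in hi ...
assert lo == 1.0        # ... but survives exactly in the error term
```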

… in short: a setting, a functionality intended to ‘round away’ small artefacts beyond a distinct precision setting, may! inject a carry into the last position before it, and that may! in some cases propagate into higher digits (I doubt that’s the case here, as it changes with the amount of zeroes, and calculations composed of pure ‘small integers’ are affected too), but it should not arbitrarily confuse digits two orders of magnitude higher,

test cases: 2^53-7 fails when formatted as “#.0”; this one and 2^53-2, -11 and -16 fail when formatted as “#,00” on sheets with ‘precision as shown’ enabled,

too long already, and OT for ‘ask’; I think I’ll file a bug about it …

@Zizi64: yes … that’s all ‘old stuff’, but it’s no excuse for producing results ‘one off’ injected by formatting to a better precision, nor for producing ‘non-monotone’ value series at precision-range borders and the like; it’s no reason to throw away 6% (better: 6,26959247648903%) of the possible precision by ‘rounding to 15 decimal digits’, and so on,

I advocate delivering to the user the precision that is possible with IEEE 754, providing the better precision of IEEE 754-2008, or finding another way towards ‘decimal-safe’ calculations, and, for the time being, eliminating as far as possible the errors that - from a human point of view - currently occur in Calc,

I understand a spreadsheet as a tool that should - as far as possible - work correctly in itself, so that it doesn’t cause problems even for inexperienced users.

the IEEE floats are … not ‘exactly’ anything; they are placeholders for a range of values, also in the ‘integer range’, where a float x stands for [x-0,5 … x+0,4999…]
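That ‘placeholder for a range’ view can be checked with correctly rounded decimal parsing in Python: every decimal within half a unit of the integer maps onto the same double.

```python
x = float(2 ** 53 - 7)  # 9007199254740985.0
# 0.4 below and 0.4 above both parse to the very same double:
assert float("9007199254740984.6") == x
assert float("9007199254740985.4") == x
```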