# while attempting to understand the precision of calculations with floating point values, some questions occurred

hi @all,

there are frequent questions in 'ask' and 'bugs' about missing accuracy in calculations. The common answer is: 'it is due to the limitations and "granularity" of the limited-length floating point numbers (IEEE 754) used in calc and most other spreadsheets to represent values, and to the small errors that occur when converting from decimal to binary representation, and vice versa, since the two systems partly cover the same values, but often do not'.
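The conversion errors described above can be reproduced outside of Calc: any language exposing IEEE 754 binary64 ('double') values shows the same behaviour. A minimal sketch in Python, whose `float` is the same binary64 type:

```python
# Classic artifact of decimal -> binary conversion in IEEE 754 doubles:
# neither 0.1 nor 0.2 has an exact binary representation, so the stored
# approximations add up to something slightly off 0.3.
print(0.1 + 0.2)           # 0.30000000000000004
print(0.1 + 0.2 == 0.3)    # False
```

This is not a Calc bug; it is the shared behaviour of all IEEE 754 double arithmetic.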

in an attempt to understand and visualize which errors arise from that, i constructed a sheet to 're-calculate' an IEEE 754 number; it's attached here: C:\fakepath\questions_about_fp_value_calculations_ori.ods

the questions are in the sheet, in short:

am i right? calc is using [edit] '16 bit' was wrong, i meant '16 decimal digit' [/edit] accuracy for integers, while decimal fractions are shortened to 15-digit precision,

but are there more significant bits not evaluated by calc? questions B and C in the sheet,

2^53 is rounded down by calc, even though it has an exact IEEE 754 representation? D in the sheet,

and a display error for one specific value, question E, i already filed a bug for that: https://bugs.documentfoundation.org/s...
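For questions about integer accuracy and 2^53 (question D): the behaviour of raw IEEE 754 doubles, independent of Calc, can be checked in Python, whose `float` is the same binary64 type:

```python
# At magnitude 2^53 the gap between adjacent representable doubles is 2,
# so 2^53 itself is exact but 2^53 + 1 is not.
x = 2.0 ** 53               # 9007199254740992, exactly representable
print(x == x + 1)           # True: +1 is below the granularity (2) here
print(x == x + 2)           # False: 2^53 + 2 is the next representable integer
print((2.0 ** 53 - 1) < x)  # True: 2^53 - 1 is still exactly representable
```

So any rounding-down of 2^53 you observe in Calc is a display or conversion decision, not a limitation of the underlying double format.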

my 'calculator' holds for plenty samples of values in wikipedia, but not for:

1.0000000000000002, the smallest double > 1,

subnormals - not implemented,

(Min. normal positive double) - shouldn't that work, normally?

(Max. Double) - shouldn't that work, normally?

can calc do better, or are these the limitations of the sheet?

i'd appreciate it if somebody could recheck the sheet and shed some light: either tell me where i'm wrong, or confirm that there are issues regarding the accuracy produced by calc.

reg.

b.

P.S. add. sheet with irritating results: C:\fakepath\questions_about_fp_value_calculations_2_ori.ods

"": "am i right? calc is using 16 bit accuracy for integers, decimal fractions are shortened to 15 digit precision, "

What do you mean by "16 bit accuracy"? Is it a kind of typo and should be "16 decimal digits"?

Well, Calc doesn't use any specific number of decimal digits. In fact it uses some converting algorithms for both directions which I don't know in detail.

Generally there isn't an exact decimal representation of an IEEE 754 `Double` mantissa (52 significant bits and the implicit leading 1) unless 52 decimal digits are allowed. Rounding is indispensable. Since ROUND(LOG(2^52);2)=15.65, it suggests itself to use 16 decimal digits for the display. It's reasonable to act on the fact that there is no way to know if the `Double` representation was exact. So, why worry? Since 15.65<16, a general conversion to 16 ...(more)
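The quoted 15.65-digit figure can be recomputed directly; as a hedge, note that including the implicit leading bit (53 significant bits) gives about 15.95 digits, which is why 15 decimal digits always survive a round trip while 17 always suffice to reconstruct a double exactly. A quick Python check:

```python
import math

# Decimal digits carried by the 52 explicit mantissa bits (the quoted figure):
print(round(math.log10(2 ** 52), 2))   # 15.65
# Including the implicit leading 1 (53 significant bits):
print(round(math.log10(2 ** 53), 2))   # 15.95
```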

"": "can calc do better or are this the limitations of the sheet? "

Calc is using the floating-point capabilities of standard processors. How could there be a "limitation of the sheet", except one that is either unavoidable or one introduced by the code of "the sheet"?

Where you suspect that an introduced issue might be a bug, you will surely report it again.

In some cases you may see small discrepancies that are intentionally accepted, e.g. due to the need to give naive users the illusion that their comparisons for equality make sense.

In case you want to do additional experiments in Calc:

About 2 years ago I made a Calc document containing some user code which can show you exactly, bit by bit, the actual `Double` representation of any numeric cell value - and can also turn a bitwise defined `Double` into an actually usable value. You may use it (e.g.) to perform `Double` calculations by hand. It's surely fun! Since Basic cannot give me direct access to the bit pattern of a variable, I wrote a little program in Pascal and compiled it to a `.dll`, which is in turn called by a function written in Basic. You can get all that if you want.

(The sheet alone cannot work, of course. You also need the `.dll`.)

BTW: There are uppercase and lowercase letters in western scripts.
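For readers without the Pascal `.dll`: the same bit-level inspection can be done in other environments. A sketch using Python's standard `struct` module (this is an illustration of the technique, not the tool described above):

```python
import struct

def double_bits(x: float) -> str:
    """Return the 64-bit IEEE 754 pattern of x as 'sign exponent mantissa'."""
    (bits,) = struct.unpack('>Q', struct.pack('>d', x))
    s = f'{bits:064b}'
    return f'{s[0]} {s[1:12]} {s[12:]}'

def bits_to_double(sign: int, exponent: int, mantissa: int) -> float:
    """Assemble a double from its raw fields (the reverse direction)."""
    bits = (sign << 63) | (exponent << 52) | mantissa
    (x,) = struct.unpack('>d', struct.pack('>Q', bits))
    return x

print(double_bits(1.0))            # 0 01111111111 0000...0 (biased exponent 1023)
print(bits_to_double(0, 1023, 1))  # 1.0000000000000002, the smallest double > 1
```

The round trip through `struct.pack`/`struct.unpack` reinterprets the 8 bytes of the double as an unsigned 64-bit integer, which is exactly the "atomic microscope" view discussed here.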

It looks like such nonsense when someone tries to "reverse-engineer" LibreOffice, as if it were closed-source software. When trying to understand it on such a low level, the only way you should use to understand what is actually going on with your calculations is a debugger. Being open-source, LO allows it. If you don't, you will not easily find which of your hundreds of intermediate floating-point calculations (to get a single end value that you hope has an exact binary representation) introduced an unexpected error, or whether some end-result rounding or display rounding was in effect.

@Mike Kaganski: You surely understood:

When I designed what I talked of (sheet and dll) I didn't try to reverse-engineer Calc. I wanted to see the RAM representation of IEEE 754 'Double' numbers through an atomic microscope, and to be able to manipulate the atoms such numbers consist of. Calc, LibO Basic, and Pascal helped me to equip my laboratory (which you also may call my playground).

@Lupp: I'm sorry for not being precise. My comment was not about your comments, but about the OP's question. Sorry again.

Don't worry. As I said "You surely understood."

In fact I should have addressed my previous comment to the OQer. I mainly wanted to make it definitely clear to him what the offered tools were made for.

@Lupp, thanks for the bit / digit hint, i edited the question accordingly.

if you'd like to share your sheet, .dll and knowledge i'd appreciate that; if you don't want to make it publicly available, just send it by mail, you'll find my email in 'bugs',

i'm near the limit of my skills, thus i tried to find other people to recheck and help,

@Mike, small hope you'll ever like what i do, coding and debugging is - at this time - a little above my limits, maybe later,

what i'm trying to do is: find weak points and show them to the developers, not to blame LO or anyone, but to improve the project. imho that's necessary, and sometimes it may help to satisfy your normal users' expectations?

just an easy point: plenty of them would appreciate it if a number stayed the same on formula input, save-load, and recalc by F9 - not ok for BO20, B25, B37, B40 and B56; and if 2^53-1 were less than 2^53, less than 2^53+1 ... see the add. attachment at the end of the question; acc. to wikipedia i'd expect a granularity of 2, rather than ~10 ... and working differences and comparisons ... ???

2^53-1 is inside the range that allows contiguous representation of all integers (using the 53 bits of mantissa). 2^53 itself is of course technically representable in double, but lies outside the 53-bit integer range.

LibreOffice can show the whole 53-bit integer range with exact precision. For any integral value outside that range, LibreOffice knows that the value might be a representation of another, close integer, and uses the usual 15-digit display; thus 2^53-1 shows precisely as 9007199254740991, while 2^53, which should be 9007199254740992, is shown rounded to 9007199254740990 (making the strange impression of being smaller). Trying to subtract 2^53-1 from 2^53 will trigger approximate equality (and thus result in 0), while 2^53-2^52-(2^53-1)+2^52 will not trigger approximate equality, and gives the expected 1.
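The 0 result described here is specific to Calc's approximate-equality layer; raw IEEE 754 arithmetic gives the exact answer for both expressions, since every intermediate value is exactly representable. A check with Python floats (the same binary64 type):

```python
# Raw IEEE 754 double arithmetic, for comparison with Calc's results.
# All operands below are exactly representable, so the subtractions are exact:
print(2.0 ** 53 - (2.0 ** 53 - 1))                            # 1.0
print(2.0 ** 53 - 2.0 ** 52 - (2.0 ** 53 - 1) + 2.0 ** 52)    # 1.0
# The 0 Calc reports for the first expression therefore comes from Calc's
# own approximate-equality heuristic, not from the floating-point unit.
```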

All this is normal, and we need no help in knowing that. But if you find time for ...(more)