Don't confuse external formats with FPU-internal ones.

32-bit, 64-bit and 80-bit describe the external format: how data is stored between processing steps by the FPU. The 80-bit format is probably the most widely used (don't throw pebbles at me if I'm wrong, thanks) because it has been present in the x86 hardware family for ages.

In this format, 64 bits are allocated to the **mantissa**, which allows ~19 decimal digits in the best case (normalised numbers). However, after any operation the least significant digits are always suspect.
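The digit counts can be sketched with the usual rule of thumb: an n-bit binary mantissa carries about n·log₁₀(2) decimal digits. A small illustration (the bit widths for the x87 and IEEE double formats are standard; everything else is just arithmetic):

```python
import math

# Decimal digits representable by an n-bit binary mantissa: n * log10(2).
# x87 80-bit extended: 64 explicit mantissa bits.
# IEEE-754 double: 52 stored + 1 implicit = 53 mantissa bits.
for name, bits in [("x87 80-bit extended", 64), ("IEEE-754 double", 53)]:
    digits = bits * math.log10(2)
    print(f"{name}: {bits}-bit mantissa ≈ {digits:.1f} decimal digits")
```

This gives roughly 19.3 digits for the 80-bit format and 16.0 for ordinary doubles, matching the "~19 digits" above.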

When making a numerical analysis of a chain of operations, do not forget the distribution of the IEEE-754 numbers along the real line (I consider only positive values, since the set is symmetric about zero):

- 0
- a "huge" gap (huge compared to the distance between numbers in the next interval)
- a repetition of: evenly spaced normalised numbers within a limited range, followed by a gap

Each range contains all the numbers associated with a given **exponent**. Therefore the "distance" between neighbouring numbers increases with the value of the exponent. The relative precision is roughly constant across all ranges, but the absolute precision degrades the farther you are from zero.
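This spacing is directly observable with `math.ulp` (Python 3.9+), which returns the gap between a double and its successor. A minimal sketch using 64-bit doubles (the pattern is the same for the 80-bit format, just with smaller gaps):

```python
import math

# The gap between consecutive doubles (one "ulp") doubles each time the
# exponent grows by one: absolute precision degrades away from zero while
# relative precision (ulp/x) stays roughly constant.
for x in [1.0, 2.0, 1024.0, 1e15, 1e16]:
    print(f"x = {x:<8g}  ulp = {math.ulp(x):.3g}  ulp/x = {math.ulp(x)/x:.3g}")
```

Note how `ulp(1e16)` is already 2.0: in that range, whole odd integers no longer exist as doubles.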

The "easiest" operations are multiplication and division, where the exponents are added/subtracted, the mantissas are multiplied/divided, and you truncate or round the result mantissa, dropping the excess bits.
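The exponent arithmetic can be peeked at with `math.frexp`, which splits a double into a mantissa in [0.5, 1) and a binary exponent. A small sketch (the values 6.0 and 20.0 are arbitrary):

```python
import math

# frexp(x) returns (m, e) with x == m * 2**e and 0.5 <= m < 1.
# Multiplying two doubles adds their exponents, give or take one
# for renormalisation of the mantissa product.
a, b = 6.0, 20.0
ma, ea = math.frexp(a)        # (0.75, 3)
mb, eb = math.frexp(b)        # (0.625, 5)
mp, ep = math.frexp(a * b)    # product exponent is ea+eb or ea+eb-1
print(ea, eb, ep)
```

Here `ep` is 7 = ea + eb − 1, because 0.75 × 0.625 = 0.46875 falls below 0.5 and the mantissa is renormalised.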

The "worst" operations are addition and subtraction, where you must scale the smaller-magnitude number so that its exponent matches the larger one's. This is done by shifting its mantissa, losing bits.
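The bit loss from alignment shifting is easy to provoke with 64-bit doubles. A minimal sketch:

```python
# When adding numbers of very different magnitude, the smaller mantissa is
# shifted right to align exponents; bits shifted past the 53-bit window
# are simply gone.
big = 1e16                     # ulp(1e16) is 2.0, so +1.0 cannot register
print(big + 1.0 == big)        # True: the 1.0 vanished entirely
print((big + 1.0) - big)       # 0.0, not 1.0

# Subtracting nearly equal numbers then exposes the suspect low-order
# digits: most significant bits cancel, leaving mostly rounding noise.
print(1.0000001 - 1.0)         # not exactly 1e-7
```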

If unnormalised numbers appear as operands, things get really complicated, and I think this can't be taken into account in a general-purpose app like Calc. This situation can only be mastered in dedicated, particular cases.
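For the 64-bit double case, the unnormalised (subnormal) region can at least be poked at from Python; a small sketch of the gradual precision loss near zero:

```python
import math
import sys

# Below the smallest normalised double (sys.float_info.min == 2**-1022) lie
# the subnormal numbers: the implicit leading 1-bit is absent, so the
# number of significant bits shrinks as values approach zero.
tiny = math.ulp(0.0)          # 2**-1074, smallest positive subnormal (1 bit)
x = 3 * tiny                  # exactly representable: 2 significant bits
print(x / 2 * 2 == x)         # False: halving x already rounded a bit away
```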

Of course, FPUs often use internal registers much wider than the external format to partially compensate for this. But when data leaves the FPU, it must be converted back to the external format, which cancels the compensation almost entirely.

Note that I am talking here about numbers already prepared for calculation.

Converting from human-readable form to internal binary and back is yet another issue. It is a radix-conversion problem with its own theoretical limits, and it introduces another source of accuracy loss.
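The radix-conversion problem is visible even before any arithmetic happens. A small sketch with 64-bit doubles:

```python
from fractions import Fraction

# Decimal 0.1 has no finite binary expansion, so it is rounded on input;
# the double actually stored is a nearby fraction whose denominator is a
# power of two.
print(Fraction(0.1))          # 3602879701896397/36028797018963968

# Three independently rounded inputs need not add up:
print(0.1 + 0.2 == 0.3)      # False

# Going the other way, the shortest decimal string that round-trips back
# to the same double can need up to 17 significant digits; Python's repr
# computes it automatically.
print(repr(0.1))             # '0.1'
```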

A standard FPU has some internal registers 80 bits wide. This doesn't mean much to anybody who doesn't know in detail how it works.

hello @Lupp: :-) thus we are stuck to 64-bit (52-bit mantissa) doubles ... ? :-(

I don't understand.

Of course LibO uses the power of the actual processor, and nowadays an FPU is standard. LibO also uses OpenCL, developed mainly to harness the calculating power of graphics processors that are otherwise underemployed while office software is running. This way eligible calculations should also be vectorized.

How to do so, and what "algorithms" may have formed the concepts behind the design of FPUs and other processors, only a few specialists will know. Concerning the extremely important multiplication, you can find some information here. You don't understand it? Neither do I. I might understand it better if I made it my main interest. After all, I once (long ago) studied math... Time and mental energy are limited, however.

Now I don't even know the actual reason for working with 64-bit mantissae (aside from the sign and 15-bit exponent) in the FPU. Probably they use FFT? ...