Don’t confuse external formats with FPU-internal ones.
32-bit, 64-bit and 80-bit describe the external format, i.e. how data is stored between processing steps by the FPU. The 80-bit format is probably the most widely used (don’t quibble if I’m wrong, thanks) because it has been present in the x86 hardware family for ages.
In this format, 64 bits are allocated to the mantissa, which allows ~19 decimal digits in the best case (normalised numbers). However, after any operation the least significant digits are always suspect.
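A rough sketch of where these digit counts come from (Python only exposes 64-bit doubles, not the 80-bit format, but the formula is the same for all three external formats):

```python
import math

# Decimal digits representable by an m-bit mantissa: roughly m * log10(2).
for bits, name in [(24, "32-bit single"), (53, "64-bit double"), (64, "80-bit extended")]:
    print(f"{name}: ~{bits * math.log10(2):.1f} decimal digits")

# The last digits after an operation are suspect, even for a "simple" sum:
print(0.1 + 0.2)         # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)  # False
```

The 64-bit mantissa of the 80-bit format gives 64 × log₁₀ 2 ≈ 19.3, hence the ~19 digits mentioned above.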
When making a numerical analysis of a chain of operations, do not forget how the IEEE-754 numbers are distributed along the number line (I consider only positive values, as the set is symmetric about zero):
- 0
- a “huge” gap (huge compared to the distance between numbers in the next interval)
- a repetition of limited ranges of evenly spaced normalised numbers, each range followed by a gap
Each range contains all the numbers associated with a given exponent. Therefore the “distance” between consecutive numbers increases with the value of the exponent. The relative precision is roughly constant across all ranges, but the absolute precision degrades the further you are from zero.
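You can watch this spacing grow with `math.ulp` (Python ≥ 3.9; again this is for 64-bit doubles, but the 80-bit layout behaves the same way with a longer mantissa):

```python
import math

# ulp(x) is the gap between x and the next representable double above it.
for x in [1.0, 2.0, 1e6, 1e15, 1e16]:
    print(f"spacing near {x:>8g}: {math.ulp(x):g}  relative: {math.ulp(x) / x:.2e}")

# The gap doubles every time the exponent increments (each new range),
# so absolute precision degrades while relative precision stays ~2**-52.
```

Near 1.0 the gap is 2⁻⁵² ≈ 2.2e-16; near 1e16 it has grown to 2.0, yet the relative spacing is the same.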
The “easiest” operations are multiplication and division, where the exponents are added/subtracted and the result mantissa is truncated or rounded, dropping the excess bits.
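A small illustration of why multiplication is benign: comparing the FPU’s rounded product against the exact rational product of the two stored doubles shows the error is at most a fraction of one ulp (a sketch using the stdlib `fractions` module):

```python
import math
from fractions import Fraction

a, b = 0.1, 0.3
exact = Fraction(a) * Fraction(b)   # exact rational product of the two stored doubles
rounded = a * b                     # FPU result: mantissa rounded back to 53 bits
rel_err = abs(Fraction(rounded) - exact) / exact
print(float(rel_err))               # well below one part in 2**52

# The exponent part is handled exactly; frexp splits a double into mantissa and exponent:
print(math.frexp(0.75), math.frexp(3.0))  # (0.75, 0) and (0.75, 2)
```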
The “worst” operations are addition and subtraction, where the smaller number in magnitude must be scaled so that its exponent matches the larger one’s. This is done by shifting the mantissa, losing bits off the end.
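This shifting is easy to provoke (again with 64-bit doubles; the 80-bit format merely pushes the thresholds further out):

```python
# Adding a small number to a big one: the small mantissa is shifted right
# until the exponents match, and its bits fall off the end.
big = 1e16                        # spacing between doubles here is 2.0
print(big + 1.0 == big)           # True: the 1.0 is absorbed entirely
print((big + 1.0) - big)          # 0.0, not 1.0

# Even without total absorption, the surviving bits are rounded:
print((1.0 + 1e-15) - 1.0)        # 1.1102230246251565e-15, ~11% off from 1e-15
```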
If unnormalised (subnormal) numbers are operands, things get really complicated, and I think this can’t be taken into account in a general application like Calc. This situation can only be mastered in dedicated, particular cases.
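To make the problem concrete, here is a sketch with the subnormal range of 64-bit doubles, where the usual “constant relative precision” guarantee breaks down:

```python
import math
import sys

tiny = sys.float_info.min          # smallest normalised double, ~2.2e-308
sub = tiny / 2**20                 # a subnormal: nonzero, but leading mantissa bits are gone
print(sub > 0.0)                   # True

# Relative precision collapses in the subnormal range:
print(math.ulp(tiny) / tiny)       # 2**-52, the normal relative spacing
print(math.ulp(sub) / sub)         # 2**-32: twenty bits of precision already lost
```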
Of course, FPUs often use much larger internal registers than the external format to partially compensate for this. But when data leaves the FPU, it must be converted back to the external format, and this almost entirely cancels the compensation.
Note that I am talking here about numbers already ready for calculation.
Converting from human-readable form to internal binary and back is yet another issue. It is a radix-conversion problem with its own theoretical limits, and it introduces another source of accuracy loss.
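A quick way to see the radix-conversion problem (sketched with Python doubles and the stdlib `decimal` module):

```python
from decimal import Decimal

# "0.1" has no finite binary expansion; the stored double is merely close:
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625

# repr() prints the shortest decimal string that round-trips, which hides this:
print(repr(0.1))                 # '0.1'
print(float(repr(0.1)) == 0.1)   # True: the round-trip is safe, equality to 1/10 is not
```

So display and re-entry of a value can look lossless even though the binary number underneath was never exactly the decimal you typed.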