# calc: calculation: does Calc use the FPU (coprocessor)?

hello @all,

I think this qualifies as a separate question:

• another point is whether Calc actually uses my / the FPU. @ajlittoz wrote about '80 bit' and I'd read it in plenty of other places ...
The multiplications 0.07 times 10^17 (7000000000000001.00) and 0.07 times 10^16 (700000000000000.1250) give 'strange' results.
This follows from the rounded-up representation of 0.07 with ~111101100 at the end of the mantissa (the exact value ~111101011 | 10000 is cut off at the "|" and rounded up because of the following 1). So far understandable. But with an FPU that calculates with 80 bits, the error should occur - if at all - much further 'behind', and the result should be clean in the 52nd bit ... According to what "weitz" calculates in 128-bit mode, 'round-up' deviations occur there too, but only in the last of its 112(?) mantissa bits ... Calc produces results as if such a deviation affects bit 52???
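For what it's worth, both results from the question can be reproduced with plain 64-bit IEEE-754 doubles in Python (a sketch; it suggests, but does not by itself prove, that Calc computes with plain doubles):

```python
# Python stores numbers as IEEE-754 doubles, and it reproduces both
# 'strange' results from the question exactly:
x = 0.07
assert x.hex() == '0x1.1eb851eb851ecp-4'    # mantissa rounded UP in the last bit
assert x * 1e17 == 7000000000000001.0       # true product: 7000000000000000.666...
assert x * 1e16 == 700000000000000.125      # true product: 700000000000000.0666...
```

The rounded-up mantissa of 0.07 makes the true products slightly too large, and rounding them to the double grid (spacing 1.0 at 7E15, 0.125 at 7E14) yields exactly the two values Calc shows.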

A standard FPU has some internal registers of 80-bit width. This doesn't mean much to anybody who doesn't know in detail how it works.

( 2021-02-03 13:28:24 +0200 )

hello @Lupp: :-) so we are stuck with 64-bit (52-bit mantissa) doubles ... ? :-(

( 2021-02-03 13:53:58 +0200 )

I don't understand.
Of course LibO uses the power of the actual processor, and currently an FPU is standard. LibO also uses OpenCL, mainly developed to add the calculating power of graphical processors otherwise underemployed while office software is running. This way eligible calculations should also be vectorized.
How to do so, and what "algorithms" may have formed the concepts behind the design of the FPU and other processors, only a few specialists will know. Concerning the extremely important multiplication you find some information here. You don't understand it? Neither do I. I might understand it better if I made this my main interest. After all, I once (long ago) studied math... Time and mental energy are limited, however.
Now I don't even know the actual reason to work with 64-bit mantissae (aside from the 15-bit exponent) in the FPU. Probably they use FFT? ...

( 2021-02-03 14:14:56 +0200 )


Don't confuse external formats with FPU-internal ones.

32-bit, 64-bit and 80-bit describe the external format: how data is stored between processing steps by the FPU. The 80-bit format is probably the most widely used (don't pelt me if I'm wrong, thanks) because it has been present in the x86 hardware family for ages.

In this format, 64 bits are allocated to the mantissa, which allows ~19 decimal digits in the best case (normalised numbers). However, after any operation the least significant digits are always suspect.

When making a numerical analysis of a chain of operations, do not forget the distribution of the IEEE-754 numbers along the number line (I consider only positive values, as the set is symmetric):

• 0
• a "huge" gap (huge compared to the distance between numbers in the next interval)
• a run of evenly spaced normalised numbers in a limited range, followed by a gap

The ranges contain all the numbers associated with a given exponent. Therefore the "distance" between consecutive numbers increases with the value of the exponent. On average, the relative precision is constant across all ranges, but the absolute precision degrades the further you are from zero.
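The growing spacing is directly observable with `math.ulp` (Python 3.9+), which returns the distance from a value to its next representable neighbour:

```python
import math

# spacing between adjacent doubles (one ULP) grows with the exponent:
u1 = math.ulp(1.0)
u14 = math.ulp(7e14)
u15 = math.ulp(7e15)
assert u1 == 2 ** -52
assert u14 == 0.125   # 7e14 lies in [2**49, 2**50)
assert u15 == 1.0     # 7e15 lies in [2**52, 2**53): only whole numbers remain
# the relative spacing stays roughly constant across ranges:
assert u15 / 7e15 < 2 ** -51
```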

The "easiest" operations are multiplication and division where the exponents are added/subtracted and you truncate or round the result mantissa dropping the excess bits.

The "worst" operations are addition and subtraction where you must scale the smaller number in magnitude so that its exponent is made the same as the larger one. This is done through shifting, losing bits.

If denormalised (subnormal) numbers are operands, things get really complicated, and I think this can't be taken into account in such a general app as Calc. This situation can only be mastered in dedicated, particular cases.

Of course, FPUs often use much larger internal registers than the external format to partially compensate for this. But when data leaves the FPU, it must be converted to the external format, and this cancels the compensation almost entirely.

Converting from human-readable decimal form to internal binary and back is yet another issue. It is a radix-conversion problem with its own theoretical limits, and it introduces another source of accuracy loss.
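Python's `Fraction` and `Decimal` can expose the exact value that the radix conversion of "0.07" actually stores, showing it is not 7/100:

```python
from decimal import Decimal
from fractions import Fraction

# the double stored for "0.07" is exactly 5044031582654956 / 2**56,
# which is slightly ABOVE 7/100 (the mantissa was rounded up):
assert Fraction(0.07) == Fraction(5044031582654956, 2 ** 56)
err = Fraction(0.07) - Fraction(7, 100)
assert err == Fraction(12, 25 * 2 ** 56)   # ~6.66e-18 too large

# Decimal(float) shows the exact decimal expansion of the stored value:
d = str(Decimal(0.07))
assert d.startswith('0.07' + '0' * 15 + '6661338')
```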


Yes, that's about the state of my knowledge ...
@Lupp has written several times that Calc puts 'everything' in doubles, and I think he would have mentioned long/extended doubles (80 bit) if he had meant them.
I have a few 'outliers' in calculations of 0.xy times 10^something, very many at times 10^15. These numbers suddenly deviate ten times more from the 'nominal value' than other similar calculations, although better - exact - results could be expressed in doubles.
I'm trying to find out whether this must be so, or whether there is a special weakness somewhere. It occurred to me that results calculated with 80 bits could/should be more accurate.
My guess: Calc uses the FPU, but to get homogeneous results on all platforms only with 64-bit (52-bit mantissa) values, i.e. 'doubles'. But this is only a guess; I would like to know it more exactly ...
because most ...(more)
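That guess can be probed by emulating both mantissa widths with exact rational arithmetic. `round_to_bits` below is a hypothetical helper (it ignores exponent limits and other x87 subtleties); it only sketches what rounding to 53 vs. 64 mantissa bits would do to the 0.07 × 10^17 example:

```python
from fractions import Fraction

def round_to_bits(x: Fraction, bits: int) -> Fraction:
    """Round a positive exact value to the nearest number with a `bits`-bit
    mantissa (round-half-to-even), ignoring exponent range limits."""
    e = 0
    lo, hi = Fraction(2) ** (bits - 1), Fraction(2) ** bits
    while x < lo:           # scale the value into [2**(bits-1), 2**bits)
        x *= 2
        e -= 1
    while x >= hi:
        x /= 2
        e += 1
    q, r = divmod(x.numerator, x.denominator)
    half = Fraction(r, x.denominator)
    if half > Fraction(1, 2) or (half == Fraction(1, 2) and q % 2 == 1):
        q += 1              # round to nearest, ties to even
    return Fraction(q) * Fraction(2) ** e

# 53-bit (double) arithmetic reproduces Calc's 'dirty' result ...
d07 = round_to_bits(Fraction(7, 100), 53)
double_result = round_to_bits(d07 * 10**17, 53)

# ... while 64-bit (x87 extended) intermediates give the clean value,
# even after the final store back to a 53-bit double:
x07 = round_to_bits(Fraction(7, 100), 64)
ext_result = round_to_bits(round_to_bits(x07 * 10**17, 64), 53)

assert double_result == 7000000000000001
assert ext_result == 7 * 10**15
```

With 64-bit intermediates the rounding error of 0.07 is too small to survive the final store back to a double, which is why an 80-bit path should indeed give the clean 7E15 here.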

( 2021-02-03 17:27:45 +0200 )

@ajlittoz: revising:
- '80-bit format' - it would make sense, but from statements of @Lupp and from the inaccuracies affecting Calc's results I'd assume Calc isn't working with any 'more precise' representation than 64-bit doubles with a sign bit, 11-bit exponent and 52-bit mantissa. Perhaps a dev can provide info about that.
- '0 - a huge gap - evenly spaced' - no: the distance from 0 to the smallest representable positive value is the same as from that value to its successor, one 'ULP' of the first range made by the smallest exponent (the subnormals fill the gap).
- 'The "worst" operations are addition and subtraction' - no: if you understand addition as the addition of two values with equal sign - as opposed to different signs, which equals a subtraction - then addition is relatively safe. The worst case is two 0.5 ULP deviations of a range and a 'range-1' added up, resulting in 0.75 ULP devia ...(more)

( 2021-02-04 23:37:37 +0200 )

The problem concerning addition is neither the signs of the operands nor accuracy in principle. It's the fact that addition needs adjustment of the operands, and therefore conflicts (in a sense) with the FPU concept. Anyway, that's an incurable disease as long as you cannot afford (among other things) storage of any arbitrary size needed to keep the results - and by this mostly the operands for a next step - without rounding. 1E15 + 1E-5 is exactly equal to 1E15 in current FPU arithmetic, and if you worked with 128-bit FP numbers, this would only move the limits, but not avoid them.
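Lupp's example checks out directly with Python's doubles; every bit of the smaller operand is shifted out during alignment:

```python
import math

x = 1e15 + 1e-5
assert x == 1e15                 # every bit of 1e-5 is shifted out
assert math.ulp(1e15) == 0.125   # 1e-5 is far below half a ULP at this magnitude
```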

( 2021-02-05 13:32:22 +0200 )

@Lupp: 1E15 + 1E-5 fails, but that error is small, both absolute (1E-5) and relative (1E-20 of the result?). You'll hardly find an fp-addition with more than 1 ULP error in the result.
'Adjustment of the summands' is also no problem: the smaller one loses precision not representable in the result, but that loss is implicitly small in relation to the result, thus negligible.
But subtraction, or unequal signs: '=9.999999999999999E-2 - 9.999999999999998E-2' -> 1.3877787807814457E-17 has an error of 38.7%, or 4503599627370496.0 ULP. Relative! As an absolute error, or if the result is a summand in your calculation, it's negligible; but relative, or if it's a factor, it's immense!
Thus 'the sign matters'.
Some such calculations (especially with operands with big imprecision 'in the last bit' (big rounding)) would/should work better with higher precision in representation and calculation (e.g. quads or 'long doubles' and 80 or ...(more)
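The cancellation example reproduces exactly in Python; the two decimal literals land on adjacent doubles, so their difference is one ULP of the 0.1 range instead of the intended 1E-17:

```python
a = 9.999999999999999e-2   # stored double: one ULP below the double for 0.1
b = 9.999999999999998e-2   # stored double: two ULPs below the double for 0.1
diff = a - b               # subtraction of adjacent doubles is exact
assert diff == 2 ** -56    # == 1.3877787807814457e-17, one ULP in this range

# the intended decimal difference was 1e-17, so the relative error is ~38.8%
rel_err = (diff - 1e-17) / 1e-17
assert 0.38 < rel_err < 0.39
```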

( 2021-02-05 18:29:26 +0200 )

What you point at may "look bad", and you may want, for whatever reasons, the effect to be hidden or circumvented in one way or another, but behind the curtain it simply is real. If a subtraction (or addition of values of different sign) cancels nearly all of the digits (whether dyadic or decimal), lots of insignificant zeroes are pulled in "from the right" to fill up the technical number format. The example you gave only looks strange the way you emphasise it because you know that you meant a difference of 1E-17. The conversion to a dyadic representation never could map this, and the processor therefore cannot "know" it.
If you want "numbers" to behave the way you expect when thinking in decimal, you need to use software that internally represents them that way, and also calculates like a schoolboy.