Calc: decimal places: evaluate in a macro (challenging!), number or 'scientific string'

hello everyone,

I tried hard and searched the web, to no avail … :frowning:

short: does someone have a macro, formula, function or similar at hand which computes the significant 'after the dot' decimal digits of a value, even for tricky cases where e.g. 8,1234678901234600E-004 (which Calc rounds for display) contains some hidden value in the 16th (and 17th?) digit and actually represents the real value 8,123467890123456(1)E-004?
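(for illustration, a small Python sketch, run outside Calc; repr() and the decimal module are only used here because they make the extra digits visible:)

```python
from decimal import Decimal

x = 8.1234678901234600E-004   # value as entered / as Calc displays it
print(repr(x))                # shortest decimal string that round-trips the double
print(Decimal(x))             # exact decimal expansion of the stored double
```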

[edit-2]

I have rarely been so blind to the obvious … '=X-ROUNDSIG(X;3)' … is very close to what I need: already in the sheet, without macros. For small rounding arguments ROUNDSIG will hopefully work correctly, and if not, it will hopefully be fixed along with the other rounding functions,

the above just needs some extra math: if ROUNDSIG rounds up, the result has the opposite sign, but I guess I can get that programmed (see the sketch right after this edit),

[/edit-2]
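(a minimal Python sketch of that '=X-ROUNDSIG(X;3)' idea; roundsig() below is my own stand-in for Calc's ROUNDSIG, not the real implementation:)

```python
import math

def roundsig(x: float, sig: int) -> float:
    # stand-in for Calc's ROUNDSIG(): round x to `sig` significant digits
    if x == 0:
        return 0.0
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

x = 8.123467890123456e-4
residue = x - roundsig(x, 3)      # analogue of '=X-ROUNDSIG(X;3)'
# if ROUNDSIG rounded up, the residue comes out negative,
# so test abs(residue) > 0 to detect value beyond 3 significant digits
print(residue, abs(residue) > 0)
```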

[edit]

  • deleted the macro, as … 'oh my god, what kind of crap have I been cobbling together', and the approach above looks better

[/edit]

long: for a little project I am dealing with the precision of calculations in Calc, and need to know how many significant decimal digits a value has (digits carrying value; trailing zeroes not counted),

normally it's quite easy: 1,234 has 3 decimal digits,

I'd solve that in a Basic macro by evaluating the string; if there is a better way, I'd be thankful,

a little more challenging: 123400,0 has -2 (!) decimal digits, for my use case; solved by string evaluation,

next step: 1234,567E+2 is 1234,567 * 10^2, which is 123456,7 and has one significant decimal digit, solved,

a small step in between: 1234,567E-2 is 1234,567 * 10^-2, which is 12,34567 and has 5 significant decimal digits, solved,

but now it comes: 1,12345678901234E+20 has -20 (!!) decimal places (comma shift) from the E+20, but 14 of them are filled with digits and thus 'not free', result -6; even that is computable,
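(the string-evaluation route, as a small Python sketch; count_decimals() is just my own helper name, and it only looks at the given text, accepting ',' or '.' as decimal separator:)

```python
def count_decimals(s: str) -> int:
    # significant decimal places of a textual number, trailing zeros not counted;
    # expects '.' or ',' as decimal separator and an optional E exponent
    s = s.upper().replace(",", ".")
    mantissa, _, exp = s.partition("E")
    exponent = int(exp) if exp else 0
    frac = mantissa.partition(".")[2]                        # digits after the decimal point
    digits = mantissa.replace(".", "").replace("-", "").lstrip("0")
    # decimals of the mantissa alone: fractional digits minus trailing zeros
    decimals = len(frac) - (len(digits) - len(digits.rstrip("0")))
    return decimals - exponent

print(count_decimals("1,234"))                  #  3
print(count_decimals("123400,0"))               # -2
print(count_decimals("1234,567E+2"))            #  1
print(count_decimals("1234,567E-2"))            #  5
print(count_decimals("1,12345678901234E+20"))   # -6
```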

and … (the following samples may be inexact) 1,12345678901234E+20 may or may not carry some 'hidden value': two more digits which one can't see but can compute … I managed to get access to these digits with 'RAWSUBTRACT', or by simply subtracting a truncated 'upper part' from the value. E.g. 1,12345678901234E+20 - 1,12345678000000E+20 may result in '9,01234E+11' (no 'hidden value'), or in '9,012341111321E+11', of which '901234' is the initially visible part and '1111321' is the 'hidden value'. Of that hidden value only the first two digits are significant: the first is correct (if not messed up by Calc's rounding), the second is somewhat granular, not exactly computed / computable, but still carries valid information about the value of the number; the rest is rounding noise / white noise,
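(the same trick outside Calc, in Python; the values are the ones from the example, and Decimal() shows the full binary-exact expansion of what is stored:)

```python
from decimal import Decimal

x     = 1.12345678901234e+20      # what the cell appears to contain
upper = 1.12345678000000e+20      # truncated 'upper part'
print(x - upper)                  # exposes the tail behind the visible digits
print(Decimal(x))                 # exact decimal value of the stored double
```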

what I need: 1,12345678901234(00xyz)E+20, where (00xyz) are the hidden digits / noise, should evaluate to '-6' decimals; 1,12345678901234(40xyz)E+20 to '-5' decimals; and 1,12345678901234(03xyz)E+20 to '-4' decimals,

similarly for negative exponents: 1,12345678901234(40xyz)E-20 has 20 decimals from the 'E-20', plus 14 from the visible digits, plus one from the hidden value, thus 35 in total,
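(what I'm after, sketched in Python: repr() delivers the shortest decimal string that still round-trips the double, i.e. the visible digits plus the significant 'hidden' ones; whether Calc/Basic can produce such a string is exactly the open part. stored_decimals() is my own helper name:)

```python
from decimal import Decimal

def stored_decimals(x: float) -> int:
    # decimal places of the shortest decimal string that round-trips x,
    # trailing zeros stripped; negative means 'free' places left of the comma
    d = Decimal(repr(x)).normalize()
    return -d.as_tuple().exponent

print(stored_decimals(1.1234567890123400e+20))   # -6 (no hidden value)
print(stored_decimals(1.1234567890123440e+20))   # -5
print(stored_decimals(1.1234567890123440e-20))   # 35
```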

does anybody have an idea?

P.S. I'm aware that some of this value is rounded; I know what an 'ULP' is and that 'doubles' are not exact but placeholders for a range of one ULP, thus imprecise by up to +/- 0,5 ULP, and that therefore what Calc computes isn't exact, not even 'decimal-exact' (which itself isn't as exact as the real world). But I'm trying to get the best I can, and for that it would be nice to get access 'behind the scenes', not only to the prettified display values …

Critical will be what kind of storage LO uses in memory to hold the data. That I don't know, but what I've come across elsewhere is that there exists a standard for floating-point storage of numbers (IEEE-754); check it to discover how real numbers are stored in binary as +/- m * 2^e, where the sign, m and e occupy specific sequences of bits in a 32- or 64-bit memory location. IEEE-754 Floating-Point Conversion from Floating-Point to Hexadecimal
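(if you have Python at hand, the standard struct module shows the raw IEEE-754 bit pattern of a double as hex:)

```python
import struct

x = 8.123467890123456e-4
# big-endian 64-bit IEEE-754 pattern: 1 sign bit, 11 exponent bits, 52 mantissa bits
print(struct.pack(">d", x).hex())
```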

@parsley: thanks a lot … but … rounding to decimal borders can't be done in the binary representation … imho …

yes, true: by design a computer works with binary floating-point data, so it forces us to convert binary precision to decimal precision.
In a Calc sheet: A1=0, A2=A1+1, A3=A2+1, … / B1=1, B2=B1*2+1, B3=B2*2+1, … Adjust both columns to a numeric format and copy the formulas down to e.g. row 57.

Then at row 53 (a decimal-represented number with 16 digits) the '*2+1' is no longer exact, indicating that Calc has switched from an exact integer representation of the data to a rounded floating-point representation. This will also be the case if an odd number is divided by two.
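(the same experiment can be done outside Calc; a small Python sketch, just to confirm the 53-bit boundary:)

```python
b = 1.0          # B1 = 1; each step does B(n+1) = B(n)*2 + 1, i.e. 2**(n+1) - 1
n = 1
while 2 * b + 1 == 2 * int(b) + 1:    # does the float result still equal the exact integer?
    b = 2 * b + 1
    n += 1
print(n, int(b))   # the last exactly held value is 2**53 - 1; the next *2+1 gets rounded
```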

For a 64-bit binary-represented variable, 52 bits are for the mantissa, 11 for the exponent and 1 for the sign of the number. The 1-bit difference (52 stored mantissa bits versus the 53 bits seen above) is the implicit leading bit, in line with the findings here.

From the Calc sheet you see that above 53 bits the decimal number has 14 digits, 13 of them relevant; the 14th is dangling.

@parsley: I know about these things, and I'm just asking for a possibility to get the correct number of decimal digits, e.g. hidden in '=2000000000+0,00000875', displayed as '2000000000,000010000' or '2,00000000000001E+09' (scientific), thus truncated and inaccurate, but still containing most of the original info, as you can see with '=[cell]-2000000000' in another cell.
With the above formula you see '0,000008821', of which '0,000008' is valid exact content, the next '8' is fuzzy but still significant, and '21' is fp-nonsense.
At the moment I don't need to access the value of these digits; I just need to know that they are there and include them in the arguments for rounding, so as not to accidentally round away accuracy while still rounding away as many 'fp-fractals' as possible,
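(the same example in Python terms, only to show that the '8' and the '21' really come from the stored double and not from the display:)

```python
x = 2000000000 + 0.00000875
print(x - 2000000000)        # analogue of '=[cell]-2000000000':
                             # the stored fraction is about 8,82e-6, not the entered 8,75e-6
```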

The determination of the uncertainty in a value calculated by some formula from a number of arguments is the subject of numerical analysis. The uncertainty in a result depends on the formula at hand: http://idol.union.edu/vineyarm/teaching/phy17/uncertainties_intro.pdf
It is inevitable that the representation of numbers has a limited number of significant digits; for the calculation of uncertainty you are allowed to take one extra digit.

How is a relative uncertainty of 10^-16 related to the real world? What I remember from school is that uncertainties are significant in measurements (10^-3 is quite good) and in calculations with iterative loops, where they can accumulate.

@parsley: hey! that pdf looks very nice for my ideas about explaining fp-inaccuracies … :slight_smile:
I call it the 'absolute' vs. 'relative' error (or precision) of a result, but it's the same issue to understand! thanks a lot,
the inaccuracies of Calc's (or other spreadsheets') fp calculations are important, as 'spreadsheets rule the world', as these inaccuracies are unknown to most users, and as they sometimes 'blow up' in 'ill-conditioned' calculations; so if it's possible to overcome them, that could be a 'good thing',
for my idea I need to know the 'decimal precision' of the operands, and therefore the number of decimals in a number,
!! and it's quite hard to calculate that from 'E-string' notations, which vary between 'sheet' and 'Basic' and hide parts through the rounded display !! :frowning: - see the macro added to the question,
('decimal precision': 1,5 is not! '1,5' but represents a range [1,45 … 1,54999~] and is thus different from 1,500; omitted trailing '0' are the next challenge …)
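(side note in Python terms: the binary double cannot remember the difference between 1,5 and 1,500; only a decimal type that keeps the textual exponent can:)

```python
from decimal import Decimal

print(1.5 == 1.500)                            # True: the stored double is identical
print(Decimal("1.5").as_tuple().exponent)      # -1: one decimal place recorded
print(Decimal("1.500").as_tuple().exponent)    # -3: trailing zeros preserved
```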

OK, that's a result. To comment on your original question, "does someone have a macro, formula, function or similar at hand which computes the significant 'after dot' decimal digits of a value": such a macro is hard/impossible to accomplish, because it depends on the formula at hand, so it comes down to sitting at a desk and working it out with pencil and paper. E.g. calculating 1/T1 - 1/T2, where T1=300 and T2=310, is common in chemical engineering and was hard to calculate (in the 70s, with only logarithmic tables), but it can be rewritten as (T2 - T1) / (T1 * T2), which can be evaluated more precisely.
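(the effect is easy to reproduce, e.g. in Python with the decimal module limited to 5 significant digits, roughly the precision of the old tables:)

```python
from decimal import Decimal, getcontext

getcontext().prec = 5                       # pretend we only carry 5 significant digits
T1, T2 = Decimal(300), Decimal(310)

naive     = 1 / T1 - 1 / T2                 # cancellation eats a leading digit
rewritten = (T2 - T1) / (T1 * T2)           # same quantity, no cancellation
print(naive, rewritten)                     # 0.0001075 vs 0.00010753; only the second
                                            # is correct to all 5 digits (true 0.000107527)
```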
That's what some parts of numerical mathematics are about, especially constructing stable iterations that are fast and accurate.
Fractal mathematics was discovered by the observation that stopping an iteration and restarting it yielded different results, no matter how many digits were taken into account. So some systems/calculations are intrinsically inexact/unstable.