Conversion decimal <===> dyadic ("binary"): how is it done?

This is not about special cases concerning date-time representation, fraction formats, or the like.
It also is not about special options (like the funny Precision as shown).
The word “dyadic” is used instead of “binary number representation” because, taken literally, everything is “binary” in our context.

Calc needs to convert between decimal and dyadic representations of numbers all the time:

  1. When a number is entered directly or imported from a textual source (dec==>dya).
  2. When a number is referenced / calculated in a cell and shall be displayed or printed (dya ==>dec).
  3. When a document gets loaded from a file (dec==>dya).
  4. When a document shall be written to a file (dya ==>dec).
  5. When formulas are evaluated using functions implying conversions (e.g. VALUE(), TEXT()).
  6. Supposedly also often when COUNTIF{S}() is used implying comparisons of numbers.
  7. In many cases where an implicit conversion is applied.

By what means is this done?
To perform these specific conversions: Is there
a) core code specially developed for Calc, and fully within the responsibility of Calc developers?
b) included/linked a set of routines developed by a different project? (If so, which one?)
c) something else?

Side question: Can /does Calc manipulate the processor’s FPU in a specific way?

Before 7.0 the dec->dya conversion rtl::math::stringToDouble() was LibreOffice specific code; since 7.0, for better accuracy, the strtod() function from dtoa.c of netlib/fp is used (as strtod_nolocale()) for the non-special cases, i.e. anything other than Inf or NaN.
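For illustration, a minimal plain C++ sketch of the dec->dya direction (using the C library’s strtod() here as a stand-in, not the actual strtod_nolocale() code): any correctly rounded string-to-double conversion picks the nearest representable binary value.

    // Sketch only: decimal text to the nearest IEEE 754 double.
    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        const char* input = "0.1";            // decimal text as typed/imported
        char* end = nullptr;
        double d = std::strtod(input, &end);  // nearest double to the decimal value
        // 0.1 has no exact binary representation; print enough digits to see
        // the value actually stored in the double.
        std::printf("%.20f\n", d);            // e.g. 0.10000000000000000555
        return 0;
    }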

Conversion dya->dec is done with rtl::math::doubleToString(), LibreOffice specific code, still waiting for all compilers to provide C++17 std::to_chars() (and when that happens strtod_nolocale() will probably be replaced by std::from_chars() as well).
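A small sketch of what std::to_chars()/std::from_chars() would offer once available everywhere (plain C++17, not doubleToString() itself): the shortest decimal string that converts back to exactly the same double.

    // Sketch only: dya->dec with the shortest round-trip representation.
    #include <charconv>
    #include <cstdio>

    int main()
    {
        double d = 0.1;
        char buf[64];
        auto res = std::to_chars(buf, buf + sizeof(buf), d);  // shortest form
        *res.ptr = '\0';
        std::printf("%s\n", buf);             // "0.1", round-trips to the same bits

        double back = 0.0;
        std::from_chars(buf, res.ptr, back);  // dec->dya again
        std::printf("%s\n", back == d ? "round-trip exact" : "mismatch");
        return 0;
    }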

Cell input is fed through the number formatter’s number recognition, which in turn uses stringToDouble() or strtod_nolocale() for raw decimal strings.

To display values the number formatter is used, which in turn uses doubleToString() except for specific formatting features.

Saving and loading documents use the same doubleToString() and stringToDouble() functions and the number formatter.

TEXT() obviously uses the number formatter.

VALUE() uses the number formatter’s value recognition same as in cell input.

COUNTIF() and other ‘=’ comparisons use LibreOffice’s rtl::math::approxEqual() that caters for some inaccuracy.
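Only to convey the principle, a hedged sketch; the relative tolerance of about 2^-48 used here is an assumption for illustration, not necessarily the exact factor in rtl::math::approxEqual().

    // Sketch of an approximate equality that ignores the lowest few bits.
    #include <cmath>
    #include <cstdio>

    static bool approxEqualSketch(double a, double b)
    {
        if (a == b)
            return true;
        double tol = std::ldexp(std::fabs(a), -48);  // ~2^-48 relative tolerance
        return std::fabs(a - b) < tol;
    }

    int main()
    {
        double x = 0.1 + 0.2;   // 0.30000000000000004...
        double y = 0.3;         // 0.29999999999999998...
        std::printf("exact ==       : %d\n", x == y ? 1 : 0);                   // 0
        std::printf("approxEqual'ish: %d\n", approxEqualSketch(x, y) ? 1 : 0);  // 1
        return 0;
    }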

Implicit conversions from/to simple string use stringToDouble() and doubleToString().

The FPU is “manipulated” only in the way that floating point exceptions are switched off and results like Div/0, Inf and NaN are handled by error values.
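A small stand-alone demonstration of what “switched off” means in practice (plain C++ with <cfenv>, not Calc code): with traps masked, a division by zero just yields Inf and sets a flag, and it is up to the application to turn that into an error value like #DIV/0!.

    #include <cfenv>
    #include <cmath>
    #include <cstdio>

    int main()
    {
        std::feclearexcept(FE_ALL_EXCEPT);
        double x = 1.0;
        volatile double zero = 0.0;               // keep the compiler from folding
        double q = x / zero;                      // no trap, q becomes +Inf
        int flagged = std::fetestexcept(FE_DIVBYZERO) ? 1 : 0;
        std::printf("q = %f, isinf = %d, FE_DIVBYZERO = %d\n",
                    q, std::isinf(q) ? 1 : 0, flagged);
        // An application would map isinf()/isnan() results to an error value here.
        return 0;
    }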

Thanks a lot!

One more? Concerning COUNTIF() I would like to know for sure in what representation a “criteria” like "<" & A1, where A1 refers to a cell returning a Double, is passed to the body of the routine.
Well, I should start to study the source code?
I feel a bit weak… A sip of the magic potion?

String concatenation (as in your example) of numeric values uses the number formatter’s numeric input line representation, as if a cell value was edited. The criteria parsing code then parses it back to a double value; I’d have to look it up, but most likely the number formatter’s value recognition is used again, as the value can be specified as a date string, for example, to match numeric date cell content.
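Roughly sketched in plain C++ (std::to_chars()/std::from_chars() stand in for the number formatter here, an assumption made only for the illustration): the cell value is rendered to input-line text, concatenated with the operator, and the criteria code parses the numeric part back into a double.

    #include <charconv>
    #include <cstdio>
    #include <string>

    int main()
    {
        double a1 = 43831.0;                      // e.g. a date serial number in A1
        char buf[64];
        auto r = std::to_chars(buf, buf + sizeof(buf), a1);   // "input line" text
        std::string criteria = std::string("<") + std::string(buf, r.ptr);

        double parsedBack = 0.0;
        std::from_chars(criteria.c_str() + 1,
                        criteria.c_str() + criteria.size(), parsedBack);
        std::printf("criteria = %s, parsed value = %.0f\n",
                    criteria.c_str(), parsedBack);
        return 0;
    }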

hello @Lupp and @erAck,
‘to be discussed’: there are irritations from the decimal-dyadic ‘non-congruency’, different precision in different ranges, attempts to hide errors by unspecific rounding and ‘not showing the last digit to the user’, and so on …
what about:

  • round each input to 15 sig. digits, with! showing that to the user (a sketch follows after this list),
  • round output to 15 sig. digits minus [dec-range operand - dec-range result], map it back to app. double,
  • don’t work with doubles which need more than 15 dec.-digits to map exactly,
  • thus restricting everything to what’s possible with 15 decimal digits,
  • but be able to produce ‘decimal-safe’ arithmetic with it,
  • internal calculations may have better precision,
    imho such could:
  • produce an irritation-free spreadsheet,
  • meet user expectations for correct decimal math as all fp-artefacts would be rounded away !in a meaningful way!,
  • benefit: 1:1 relation double - decimal,
  • problem: ex$el compatibility?,
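a rough sketch of the first bullet, just to make it concrete (plain C++, not existing Calc behaviour): round a double to 15 significant decimal digits by going through a decimal string and back,

    #include <cstdio>
    #include <cstdlib>

    static double roundTo15SigDigits(double d)
    {
        char buf[64];
        std::snprintf(buf, sizeof(buf), "%.15g", d);   // 15 significant digits
        return std::strtod(buf, nullptr);              // map back to nearest double
    }

    int main()
    {
        double d = 0.1 + 0.2;                 // 0.30000000000000004...
        std::printf("raw    : %.17g\n", d);   // 0.30000000000000004
        std::printf("rounded: %.17g\n", roundTo15SigDigits(d));
        // prints 0.29999999999999999, the double nearest to 0.3
        return 0;
    }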

To be bold: almost all of that is nonsense.

You can’t round decimal input to whatever number of digits if the representation is in binary floating point. Apart from integers up to 2^53-1 and powers of 1/2 (and their precision gaps, like 0.5 in the interval [2^51,2^52), 0.25 in [2^50,2^51) and so on), decimal numbers can not be represented exactly. All other decimal values can only be nearby binary values, for each of which theoretically a decimal representation of 17 digits exists that converts back to the same nearby binary representation (which Calc currently does not produce with its 15-digit rounding; the implementation is to be changed).
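To make the 15-vs-17-digit point concrete, a small plain C++ demonstration (standard snprintf/strtod, not Calc code):

    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        double d = 0.1 + 0.2;                          // not exactly 0.3 in binary

        char s15[64], s17[64];
        std::snprintf(s15, sizeof(s15), "%.15g", d);   // "0.3"
        std::snprintf(s17, sizeof(s17), "%.17g", d);   // "0.30000000000000004"

        int ok15 = std::strtod(s15, nullptr) == d;     // 0: a different double
        int ok17 = std::strtod(s17, nullptr) == d;     // 1: the same bits again
        std::printf("15 digits: %-20s round-trips: %d\n", s15, ok15);
        std::printf("17 digits: %-20s round-trips: %d\n", s17, ok17);
        return 0;
    }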

PLEASE finally understand.

What you are dreaming of is decimal floating-point, which is not and in the foreseeable future will not be implemented.

@erAck: pls. excuse my insisting,
it’s not only me who’d like ‘decimal correct math’; is it too much to ask that a Gigaflop machine and a well matured spreadsheet can calculate ‘=1,1 - 1’ correctly?
maybe my suggestion is ‘nonsense’, but what calc actually delivers is worse. example?
1.) ‘=VALUE(“1234567890123444,4”)/10’ → 123456789012344,000 calc can! and does! round to 15 significant digits,
2.) ‘=VALUE(“1234567890123444”)’ → 1234567890123444,000 but sometimes you get 16 digits,
3.) ‘=VALUE(“1234567890123444,4”)*10-1,234567E+016’ → 8901234444,000 and calc has! the ‘true value’ for calculations,
4.) ‘=VALUE(“1234567890123444,4”)-1234567000000000’ → 890123444,500 sometimes, and sometimes not or wrongly rounded,
you call it a ‘learning curve’; i’d say ‘occupational therapy’, sending users into a digital adventure game guessing ‘which result will i get today, and why!’
first step to improve: show all digits relevant for further calculation, second: correct rounding,
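for reference, the bare double arithmetic behind the ‘=1,1 - 1’ example (plain C++, decimal comma written as a dot; not Calc code):

    #include <cstdio>

    int main()
    {
        double r = 1.1 - 1.0;
        std::printf("%.17g\n", r);              // 0.10000000000000009, the actual double
        std::printf("%.15g\n", r);              // 0.1, what a 15-digit display shows
        std::printf("%d\n", r == 0.1 ? 1 : 0);  // 0: not the same double as 0.1
        return 0;
    }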

sorry, took some time, but i think i can pinpoint my thought:
having a number, say 9,00000000000015, and its ‘double-float’ representation of 1.1250000000000187 * 2^3 (which could be stated more precisely as 1.12500000000001875 * 2^3 with more bits), you have more than two different decimal figures pointing to (being mapped to) one float: one being the original 9,00000000000015000000, one being the re-converted float 9,000000000000149213974509621…
to do correct math with that, e.g. round it, decide whether it is >= something or not, calculate the length of its string, subtract a very tiny number or whatever, you can take the one or the other value and will get different results; math becomes ambiguous,
only way to avoid that: a one-to-one relation of the ~0150 value to the float, avoiding the other decimal values; the representation by the deviating double can’t be improved, thus it needs rounding as for display, and a clear ~0150 value taken for calculations,
still nonsense?
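for checking the figures: a tiny plain C++ snippet printing the double nearest to 9,00000000000015 (decimal comma written as a dot here); the last printed digits may differ slightly from the quoted ones, the point is only that the stored binary value deviates from the typed decimal:

    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        double d = std::strtod("9.00000000000015", nullptr);
        std::printf("%.28f\n", d);   // ~9.000000000000149213974509621, the stored value
        std::printf("%.15g\n", d);   // 9.00000000000015, the 15-digit view
        return 0;
    }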