To use the least sum of squared deviations as the criterion for optimizing approximations is not a dogma, nor proven to be the only reasonable way, but it is a well-proven way to get usable solutions, and it is also the best-studied one (conditions, consequences, algorithms…).
To use it the way LOGEST() does results in replacing the differences between the data and the values of the approximation with their ratios (in a sense). If you see a need to use the differences explicitly again: just try it.
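To make the "ratios" remark concrete, here is a minimal sketch (in Python with NumPy, assumed available; not LibreOffice's actual code) of the kind of fit a LOGEST-style function performs: ordinary least squares on ln(y), so the minimized residuals are differences of logarithms, which are logarithms of the ratios y/fit.

```python
import numpy as np

def logest_style_fit(t, y):
    """Fit y ~ a * b**t by least squares on ln(y) (LOGEST-style).

    The residuals minimized are ln(y) - ln(a * b**t) = ln(y / fit),
    i.e. log-ratios of data to fit, not plain differences.
    """
    t = np.asarray(t, dtype=float)
    ln_y = np.log(np.asarray(y, dtype=float))  # undefined for y <= 0
    # Linear regression: ln(y) = ln(a) + t * ln(b)
    slope, intercept = np.polyfit(t, ln_y, 1)
    return np.exp(intercept), np.exp(slope)    # a, b

# Exact exponential data: y = 2 * 1.5**t
t = np.arange(6)
y = 2.0 * 1.5 ** t
a, b = logest_style_fit(t, y)
print(round(a, 6), round(b, 6))  # -> 2.0 1.5
```

Because the regression runs on the logarithms, a data point that is twice the fitted value contributes the same penalty whether the values are 2 vs 1 or 2000 vs 1000.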
To get solutions based on a different optimization criterion, you may experiment with the non-linear solver.
Lupp stated: “My example was made to demonstrate that any automated handling of zeros with (explicit or implicit) applications to LOGEST() must inevitably fail. The only attempt having a faint justification (kind of) is to exclude zero values from any contribution to the evaluation as Chart trendlines do.”
I think we’ve pretty well dealt with this topic. I certainly agree that any manipulation of the data by a function should be avoided unless it’s inherent in that particular function. But I’m interested in your mention of Chart trendlines ignoring zeroes; I knew they ignored blank entries, but are explicit zero values also ignored?
(continued…)
Further, the spreadsheet in question has two exponential trendlines, one describing the rise of the initial infection and one its decay. The values of ‘a’ & ‘b’ in the exponentials (ae^bt) as calculated by the chart agree exactly with those calculated by LOGEST(). However, the chart displays the rather improbable value 1.0000 for R-squared in both cases, whereas LOGEST() returns 0.9655 and 0.8781.
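For reference, the R-squared that LOGEST() reports is the coefficient of determination of the underlying linear regression on ln(y), not of the curve against the original y values. A hedged sketch in Python with NumPy (illustrative only, not the spreadsheet code in question):

```python
import numpy as np

def r_squared_log_scale(t, y):
    """R-squared of the linear regression of ln(y) on t, which is
    what a LOGEST-style fit reports (computed on the log scale)."""
    t = np.asarray(t, dtype=float)
    ln_y = np.log(np.asarray(y, dtype=float))
    slope, intercept = np.polyfit(t, ln_y, 1)
    fitted = intercept + slope * t
    ss_res = np.sum((ln_y - fitted) ** 2)          # residual sum of squares
    ss_tot = np.sum((ln_y - ln_y.mean()) ** 2)     # total sum of squares
    return 1.0 - ss_res / ss_tot

# Exponential data with mild multiplicative noise (made-up example)
t = np.arange(6)
y = 2.0 * 1.5 ** t * np.array([1.0, 1.1, 0.9, 1.05, 0.95, 1.0])
print(r_squared_log_scale(t, y))  # close to, but below, 1.0
```

An R-squared computed this way can differ noticeably from one computed on the untransformed data, so it is worth checking which scale each displayed value refers to before comparing them.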
Any comment?
Quoting @DavidL1: “I knew they ignored blank entries, but are explicit zero values also ignored?”
There is no specification I know of that tells me how the Chart module of LibreOffice should work in every case (and there are lots!). Thus I can only judge from my experience, but I would not assume the X-Y-scatter exponential-trendline feature explicitly excludes zero values. As I interpret the observations, the routine used excludes rows that produce errors from the evaluation.
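The behavior described above can be sketched as follows (a hypothetical reconstruction, assuming the routine simply drops rows where ln(y) would be an error, which is what the observations suggest; `fit_skipping_nonpositive` is an illustrative name, not a LibreOffice function):

```python
import numpy as np

def fit_skipping_nonpositive(t, y):
    """Exponential trendline fit that, like the observed Chart behavior,
    drops rows where ln(y) would be an error (y <= 0) before fitting."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    ok = y > 0                      # ln(0) and ln(negative) are undefined
    slope, intercept = np.polyfit(t[ok], np.log(y[ok]), 1)
    return np.exp(intercept), np.exp(slope)   # a, b in y = a * b**t

# A zero at t=0; the remaining points lie exactly on y = 2 * 1.5**t
t = np.arange(6)
y = np.array([0.0, 3.0, 4.5, 6.75, 10.125, 15.1875])
a, b = fit_skipping_nonpositive(t, y)
print(round(a, 6), round(b, 6))  # -> 2.0 1.5
```

Passing the same data with the zero row included to a LOGEST-style fit would instead raise an error at the ln(0) step, which matches the point that no automated handling of zeros can succeed there.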
Quoting @DavidL1: “Any {further} comment?”
No. But there’s a question: did you meanwhile attach your example somewhere?
My apologies, Lupp… this spreadsheet has been through many revisions as various things were investigated, and the reason for the discrepancy in R-squared values came to me out of the blue a few hours ago. The bug has been fixed.
Thanks again for your help!