# What kind of standard error do Calc and Charts use for error bars?

I tried to find in the documentation what calculation is used to add “standard error” error bars to a chart, but could not find any explanation.

I tried to create a chart with two versions:

1. the automatic standard error error bars from the “Insert > error bars” menu
2. the same chart, but with my own calculated values for the standard error of the mean, using the formula: std. err. = sample std. dev. / square root of the sample size.

Could anyone explain why the resulting error bars differ between the two?
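For reference, this is my manual calculation expressed as a short Python sketch (the sample values are made up; `std_error_of_mean` is just an illustrative name). It uses the corrected (n − 1) sample variance, which is what Calc's `STDEV` returns:

```python
import math

def std_error_of_mean(values):
    """Standard error of the mean: sample std. dev. / sqrt(n).

    Uses the corrected (n - 1) sample variance, matching Calc's STDEV.
    """
    n = len(values)
    mean = sum(values) / n
    var = sum((x - mean) ** 2 for x in values) / (n - 1)
    return math.sqrt(var) / math.sqrt(n)

print(std_error_of_mean([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # ≈ 0.756
```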

Cheers

Definition can be found in the ODF v1.2 specification, Part 1, under 20.13 chart:error-category:

The `chart:error-category` attribute is used to determine which function is used to calculate error indicators at data points. By default, no error indicators are shown. The defined values for the `chart:error-category` attribute are:

• `cell-range`: Use values from cell ranges for positive and negative error indicators. The `chart:error-lower-range` and `chart:error-upper-range` attributes determine the cell ranges which contain the values to be used for negative and positive error indicators.
• `constant`: Use fixed absolute values for both directions: positive and negative. The absolute value for a positive direction is given with the `chart:error-upper-limit` attribute. The absolute value for a negative direction is given with the `chart:error-lower-limit` attribute.
• `error-margin`: Use a fixed percentage of the largest value – this is called error-margin. The percentage value is given within `chart:error-margin` attribute.
• `none`: No error indicators are shown.
• `percentage`: Use a fixed percentage of each value. The percentage value is given within the `chart:error-percentage` attribute.
• `standard-deviation`: Standard deviation of the values of a series assuming an equal distribution.
• `standard-error`: Standard error (standard deviation of the mean) of the values of a series.
• `variance`: Variance of the values of a series assuming an equal distribution.

This attribute is evaluated for chart styles that are applied to `<chart:error-indicator>`, `<chart:series>` or `<chart:plot-area>` elements.
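To make that last point concrete, here is a minimal hand-written sketch of how the attribute might appear in a chart style inside `content.xml`. This is an assumption-laden illustration, not copied from a real file: the style name `ch-err` is made up, and I have not verified the exact attribute set a real chart writes out:

```xml
<!-- Illustrative only: style name is invented; attribute set not verified -->
<style:style style:name="ch-err" style:family="chart">
  <style:chart-properties chart:error-category="standard-error"
                          chart:error-upper-indicator="true"
                          chart:error-lower-indicator="true"/>
</style:style>
```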

Thank you for your response. However, Wikipedia tells me that the “standard deviation of the mean” cited in the specification is the same thing as the standard error of the mean, which means I should get the same results in the two cases I described.
Any idea why I do not? I would love to know what the software is actually calculating (the actual formula).

I could be wrong, but my reading of the spec is that `standard-error` == standard deviation of the mean (as it states), while `standard-deviation` assumes “an equal distribution”; the error bars in the first chart in the example appear identical. In `chart2/source/tools/StatisticsHelper.cxx`, `getStandardDeviation` computes `fResult = sqrt( fResult );` and `getStandardError` computes `fResult = sqrt( fVar ) / sqrt( double(nValCount) );`, which seems to match what Wikipedia indicates.
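A rough Python transliteration of those two routines may help. Note the divisor used for `fVar` below (divide by n) is my assumption: I have not traced which variance estimator the actual C++ code feeds into `getStandardError`, only the final `sqrt( fVar ) / sqrt( nValCount )` shape:

```python
import math

def get_standard_deviation(values):
    """Mirrors the shape of getStandardDeviation: sqrt of the variance."""
    n = len(values)
    mean = sum(values) / n
    f_var = sum((x - mean) ** 2 for x in values) / n  # divide-by-n variance: an assumption
    return math.sqrt(f_var)

def get_standard_error(values):
    """Mirrors getStandardError: fResult = sqrt( fVar ) / sqrt( nValCount )."""
    n = len(values)
    mean = sum(values) / n
    f_var = sum((x - mean) ** 2 for x in values) / n
    return math.sqrt(f_var) / math.sqrt(n)
```

So the standard error is the standard deviation divided by sqrt(n), not the variance divided by sqrt(n).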

Thank you, Oweng. I don’t think I understand the standard error calculation: shouldn’t it be `fResult = fVar / sqrt(nValCount)`, as described here?

I find it difficult to understand estimation-based (i.e., sample) statistics. When I manually calculate the std. dev. of the mean for a sample, I always get a much lower figure than that returned by the `STDEV` function. That bit of code carries the comment `// standard-deviation / sqrt(n)`, which seems right, as `sqrt( fVar )` == the sample standard deviation, i.e., σ mean == σ / sqrt(N). It looks like σ represents the corrected sample std. dev., hence the `sqrt()` around `fVar`, but my reading of the Wikipedia page (which has a lot of “citation needed” tags) suggests these all relate to the population std. dev. Great Cthulhu help me! EDIT: For the eleventy-first time! Changing your manual calcs (e.g., cell H34) from `=H28/SQRT(6)` to `=SQRT(H28)/SQRT(6)` does improve the error bars, but they are still not the same as the default ones.
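One plausible source of the remaining discrepancy is the variance divisor: `STDEV` uses the corrected sample estimator (divide by n − 1), while `STDEVP` uses the population estimator (divide by n), and the two give different standard errors. This is a guess about what the chart code does, not a confirmed reading of it; the data and function name below are made up for illustration:

```python
import math

def std_err(values, unbiased=True):
    """Standard error of the mean with either variance estimator."""
    n = len(values)
    mean = sum(values) / n
    ss = sum((x - mean) ** 2 for x in values)
    var = ss / (n - 1) if unbiased else ss / n  # n-1: like STDEV; n: like STDEVP
    return math.sqrt(var) / math.sqrt(n)

data = [10.0, 12.0, 11.0, 14.0, 13.0, 12.0]
print(std_err(data, unbiased=True))   # like =STDEV(A1:A6)/SQRT(6)
print(std_err(data, unbiased=False))  # like =STDEVP(A1:A6)/SQRT(6), slightly smaller
```

If the default error bars match one of these and your manual calculation matches the other, that would explain the gap.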