
Propagation of uncertainty

For the propagation of uncertainty through time, see Chaos theory § Sensitivity to initial conditions.

In statistics, propagation of uncertainty (or propagation of error) is the effect of variables' uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them. When the variables are the values of experimental measurements they have uncertainties due to measurement limitations (e.g., instrument precision) which propagate due to the combination of variables in the function.

The uncertainty u can be expressed in a number of ways. It may be defined by the absolute error Δx. Uncertainties can also be defined by the relative error (Δx)/x, which is usually written as a percentage. Most commonly, the uncertainty on a quantity is quantified in terms of the standard deviation, σ, which is the positive square root of the variance. The value of a quantity and its error are then expressed as an interval x ± u. However, the most general way of characterizing uncertainty is by specifying its probability distribution. If the probability distribution of the variable is known or can be assumed, in theory it is possible to get any of its statistics. In particular, it is possible to derive confidence limits to describe the region within which the true value of the variable may be found. For example, the 68% confidence limits for a one-dimensional variable belonging to a normal distribution are approximately ± one standard deviation σ from the central value x, which means that the region x ± σ will cover the true value in roughly 68% of cases.

If the uncertainties are correlated then covariance must be taken into account. Correlation can arise from two different sources. First, the measurement errors may be correlated. Second, when the underlying values are correlated across a population, the uncertainties in the group averages will be correlated.[1]

In a general context where a nonlinear function modifies the uncertain parameters (correlated or not), the standard tools to propagate uncertainty, and to infer the resulting quantity's probability distribution/statistics, are sampling techniques from the Monte Carlo method family.[2] For very large datasets or complex functions, the calculation of the error propagation may be very expensive, so that a surrogate model[3] or a parallel computing strategy[4][5][6] may be necessary.
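As a concrete illustration of the Monte Carlo approach, uncertain inputs can be sampled and pushed through the function directly. The following is a minimal sketch; the function g and the input means, standard deviations, and correlation are made-up illustrative values, not taken from any particular source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-linear function of two uncertain inputs (illustrative only).
def g(x, y):
    return x * np.exp(-y)

# Assumed input means, standard deviations, and correlation (illustrative).
mu = np.array([2.0, 0.5])
sigma = np.array([0.1, 0.05])
rho = 0.3
cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])

# Sample the correlated inputs and push them through the function.
samples = rng.multivariate_normal(mu, cov, size=200_000)
out = g(samples[:, 0], samples[:, 1])

# Empirical statistics of the output distribution.
out_mean, out_std = out.mean(), out.std(ddof=1)
```

No linearisation is required here; the empirical mean and standard deviation (or any quantile) of `out` characterize the propagated uncertainty directly.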

In some particular cases, the uncertainty propagation calculation can be done through simple algebraic procedures. Some of these scenarios are described below.

Linear combinations

Let $f_k(x_1, x_2, \dots, x_n)$ be a set of $m$ functions, which are linear combinations of $n$ variables $x_1, x_2, \dots, x_n$ with combination coefficients $A_{k1}, A_{k2}, \dots, A_{kn}$ ($k = 1, \dots, m$):

$$f_k = \sum_{i=1}^n A_{ki} x_i,$$

or in matrix notation,

$$\mathbf{f} = \mathbf{A}\mathbf{x}.$$

Also let the variance–covariance matrix of x = (x1, ..., xn) be denoted by $\boldsymbol\Sigma^x$ and let the mean value be denoted by $\boldsymbol\mu$:

$$\boldsymbol\Sigma^x = E[(\mathbf{x} - \boldsymbol\mu) \otimes (\mathbf{x} - \boldsymbol\mu)] = \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \sigma_{13} & \cdots \\ \sigma_{21} & \sigma_2^2 & \sigma_{23} & \cdots \\ \sigma_{31} & \sigma_{32} & \sigma_3^2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} = \begin{pmatrix} \Sigma_{11}^x & \Sigma_{12}^x & \Sigma_{13}^x & \cdots \\ \Sigma_{21}^x & \Sigma_{22}^x & \Sigma_{23}^x & \cdots \\ \Sigma_{31}^x & \Sigma_{32}^x & \Sigma_{33}^x & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix},$$

where $\otimes$ is the outer product.

Then, the variance–covariance matrix $\boldsymbol\Sigma^f$ of f is given by

$$\boldsymbol\Sigma^f = E[(\mathbf{f} - E[\mathbf{f}]) \otimes (\mathbf{f} - E[\mathbf{f}])] = E[(\mathbf{A}(\mathbf{x} - \boldsymbol\mu)) \otimes (\mathbf{A}(\mathbf{x} - \boldsymbol\mu))] = \mathbf{A} E[(\mathbf{x} - \boldsymbol\mu) \otimes (\mathbf{x} - \boldsymbol\mu)] \mathbf{A}^\mathrm{T} = \mathbf{A} \boldsymbol\Sigma^x \mathbf{A}^\mathrm{T}.$$

In component notation, the equation

$$\boldsymbol\Sigma^f = \mathbf{A} \boldsymbol\Sigma^x \mathbf{A}^\mathrm{T}$$

reads

$$\Sigma_{ij}^f = \sum_k^n \sum_l^n A_{ik} \Sigma_{kl}^x A_{jl}.$$

This is the most general expression for the propagation of error from one set of variables onto another. When the errors on x are uncorrelated, the general expression simplifies to

$$\Sigma_{ij}^f = \sum_k^n A_{ik} \Sigma_k^x A_{jk},$$

where $\Sigma_k^x = \sigma_{x_k}^2$ is the variance of the k-th element of the x vector. Note that even though the errors on x may be uncorrelated, the errors on f are in general correlated; in other words, even if $\boldsymbol\Sigma^x$ is a diagonal matrix, $\boldsymbol\Sigma^f$ is in general a full matrix.

The general expressions for a scalar-valued function f are a little simpler (here $\mathbf{a}$ is a row vector):

$$f = \sum_i^n a_i x_i = \mathbf{a}\mathbf{x},$$

$$\sigma_f^2 = \sum_i^n \sum_j^n a_i \Sigma_{ij}^x a_j = \mathbf{a} \boldsymbol\Sigma^x \mathbf{a}^\mathrm{T}.$$

Each covariance term $\sigma_{ij}$ can be expressed in terms of the correlation coefficient $\rho_{ij}$ by $\sigma_{ij} = \rho_{ij}\sigma_i\sigma_j$, so that an alternative expression for the variance of f is

$$\sigma_f^2 = \sum_i^n a_i^2 \sigma_i^2 + \sum_i^n \sum_{j\,(j \neq i)}^n a_i a_j \rho_{ij} \sigma_i \sigma_j.$$

In the case that the variables in x are uncorrelated, this simplifies further to

$$\sigma_f^2 = \sum_i^n a_i^2 \sigma_i^2.$$

In the simple case of identical coefficients and variances, we find

$$\sigma_f = \sqrt{n}\,|a|\sigma.$$

For the arithmetic mean, $a = 1/n$, the result is the standard error of the mean:

$$\sigma_f = \frac{\sigma}{\sqrt{n}}.$$
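The matrix rule $\boldsymbol\Sigma^f = \mathbf{A}\boldsymbol\Sigma^x\mathbf{A}^\mathrm{T}$ and the standard-error-of-the-mean special case can be checked numerically. The following is a minimal NumPy sketch; the coefficient matrix and input covariance are made-up illustrative values:

```python
import numpy as np

# Hypothetical coefficient matrix: m = 2 linear functions of n = 3 variables.
A = np.array([[1.0, 2.0, -1.0],
              [0.5, 0.0,  3.0]])

# Hypothetical variance-covariance matrix of the inputs (symmetric).
Sigma_x = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.16]])

# Propagated variance-covariance matrix: Sigma_f = A Sigma_x A^T.
Sigma_f = A @ Sigma_x @ A.T

# Special case: the arithmetic mean of n equal-variance, uncorrelated inputs
# (coefficients a_i = 1/n) has standard error sigma / sqrt(n).
n, sigma = 4, 0.2
a = np.full((1, n), 1.0 / n)
sem = np.sqrt(a @ (sigma**2 * np.eye(n)) @ a.T).item()  # = sigma / sqrt(n) = 0.1
```

Note that `Sigma_f` is a full (symmetric) matrix even though only a few input covariances are nonzero, in line with the remark above that the output errors are in general correlated.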

Non-linear combinations

When f is a set of non-linear combinations of the variables x, an interval propagation could be performed in order to compute intervals which contain all consistent values for the variables. In a probabilistic approach, the function f must usually be linearised by approximation to a first-order Taylor series expansion, though in some cases exact formulae that do not depend on the expansion can be derived, as is the case for the exact variance of products.[7] The Taylor expansion would be:

$$f_k \approx f_k^0 + \sum_i^n \frac{\partial f_k}{\partial x_i} x_i,$$

where $\partial f_k/\partial x_i$ denotes the partial derivative of $f_k$ with respect to the i-th variable, evaluated at the mean value of all components of vector x. Or in matrix notation,

$$\mathbf{f} \approx \mathbf{f}^0 + \mathbf{J}\mathbf{x},$$

where J is the Jacobian matrix. Since $\mathbf{f}^0$ is a constant it does not contribute to the error on f. Therefore, the propagation of error follows the linear case, above, but replacing the linear coefficients, $A_{ki}$ and $A_{kj}$, by the partial derivatives, $\frac{\partial f_k}{\partial x_i}$ and $\frac{\partial f_k}{\partial x_j}$. In matrix notation,[8]

$$\boldsymbol\Sigma^f = \mathbf{J} \boldsymbol\Sigma^x \mathbf{J}^\top.$$

That is, the Jacobian of the function is used to transform the rows and columns of the variance–covariance matrix of the argument. Note this is equivalent to the matrix expression for the linear case with $\mathbf{J} = \mathbf{A}$.
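First-order propagation through a non-linear function can be sketched with a finite-difference Jacobian. The function, helper, means, and variances below are illustrative assumptions, not a prescribed implementation:

```python
import numpy as np

def f(x):
    # Hypothetical vector-valued non-linear function of three inputs.
    return np.array([x[0] * x[1], x[1] / x[2]])

def numerical_jacobian(func, x0, h=1e-6):
    """Central-difference Jacobian of func evaluated at x0."""
    x0 = np.asarray(x0, dtype=float)
    cols = []
    for i in range(x0.size):
        step = np.zeros_like(x0)
        step[i] = h
        cols.append((func(x0 + step) - func(x0 - step)) / (2 * h))
    return np.column_stack(cols)

mu = np.array([2.0, 3.0, 1.5])            # assumed input means
Sigma_x = np.diag([0.01, 0.04, 0.0025])   # assumed uncorrelated input variances

J = numerical_jacobian(f, mu)
Sigma_f = J @ Sigma_x @ J.T               # first-order propagated covariance
```

Even with a diagonal `Sigma_x`, the off-diagonal entries of `Sigma_f` are nonzero here, because both output components depend on the shared input $x_1$.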

Simplification

Neglecting correlations or assuming independent variables yields a common formula among engineers and experimental scientists to calculate error propagation, the variance formula:[9]

$$s_f = \sqrt{\left(\frac{\partial f}{\partial x}\right)^2 s_x^2 + \left(\frac{\partial f}{\partial y}\right)^2 s_y^2 + \left(\frac{\partial f}{\partial z}\right)^2 s_z^2 + \cdots},$$

where $s_f$ represents the standard deviation of the function $f$, $s_x$ represents the standard deviation of $x$, $s_y$ represents the standard deviation of $y$, and so forth.

It is important to note that this formula is based on the linear characteristics of the gradient of $f$ and therefore it is a good estimate for the standard deviation of $f$ as long as $s_x, s_y, s_z, \ldots$ are small enough. Specifically, the linear approximation of $f$ has to be close to $f$ inside a neighbourhood of radius $s_x, s_y, s_z, \ldots$.[10]

Example

Any non-linear differentiable function, $f(a, b)$, of two variables, $a$ and $b$, can be expanded as

$$f \approx f^0 + \frac{\partial f}{\partial a} a + \frac{\partial f}{\partial b} b.$$

If we take the variance on both sides and use the formula[11] for the variance of a linear combination of variables,

$$\operatorname{Var}(aX + bY) = a^2 \operatorname{Var}(X) + b^2 \operatorname{Var}(Y) + 2ab \operatorname{Cov}(X, Y),$$

then we obtain

$$\sigma_f^2 \approx \left(\frac{\partial f}{\partial a}\right)^2 \sigma_a^2 + \left(\frac{\partial f}{\partial b}\right)^2 \sigma_b^2 + 2\frac{\partial f}{\partial a}\frac{\partial f}{\partial b}\sigma_{ab},$$

where $\sigma_f$ is the standard deviation of the function $f$, $\sigma_a$ is the standard deviation of $a$, $\sigma_b$ is the standard deviation of $b$, and $\sigma_{ab} = \sigma_a\sigma_b\rho_{ab}$ is the covariance between $a$ and $b$.

In the particular case that $f = ab$, $\frac{\partial f}{\partial a} = b$, $\frac{\partial f}{\partial b} = a$. Then

$$\sigma_f^2 \approx b^2\sigma_a^2 + a^2\sigma_b^2 + 2ab\,\sigma_{ab},$$

or

$$\left(\frac{\sigma_f}{f}\right)^2 \approx \left(\frac{\sigma_a}{a}\right)^2 + \left(\frac{\sigma_b}{b}\right)^2 + 2\left(\frac{\sigma_a}{a}\right)\left(\frac{\sigma_b}{b}\right)\rho_{ab},$$

where $\rho_{ab}$ is the correlation between $a$ and $b$.

When the variables $a$ and $b$ are uncorrelated, $\rho_{ab} = 0$. Then

$$\left(\frac{\sigma_f}{f}\right)^2 \approx \left(\frac{\sigma_a}{a}\right)^2 + \left(\frac{\sigma_b}{b}\right)^2.$$
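The uncorrelated-product rule above amounts to adding relative errors in quadrature. A minimal sketch with made-up measured values:

```python
import math

# Assumed measured values and standard deviations (illustrative only).
a, sigma_a = 10.0, 0.2
b, sigma_b = 5.0, 0.1

f = a * b
# Uncorrelated product: relative variances add in quadrature.
rel_var = (sigma_a / a)**2 + (sigma_b / b)**2
sigma_f = abs(f) * math.sqrt(rel_var)
```

Here both inputs carry a 2% relative error, so the product carries roughly $\sqrt{2}\times 2\% \approx 2.8\%$.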

Caveats and warnings

Error estimates for non-linear functions are biased on account of using a truncated series expansion. The extent of this bias depends on the nature of the function. For example, the bias on the error calculated for log(1+x) increases as x increases, since the expansion to x is a good approximation only when x is near zero.

For highly non-linear functions, there exist five categories of probabilistic approaches for uncertainty propagation;[12] see Uncertainty quantification for details.

Reciprocal and shifted reciprocal

In the special case of the inverse or reciprocal $1/B$, where $B \sim N(0, 1)$ follows a standard normal distribution, the resulting distribution is a reciprocal standard normal distribution, and there is no definable variance.[13]

However, in the slightly more general case of a shifted reciprocal function $1/(p - B)$ for $B \sim N(\mu, \sigma)$ following a general normal distribution, mean and variance statistics do exist in a principal value sense, if the difference between the pole $p$ and the mean $\mu$ is real-valued.[14]

Ratios

Ratios are also problematic; normal approximations exist under certain conditions.

Example formulae

This table shows the variances and standard deviations of simple functions of the real variables $A, B$ with standard deviations $\sigma_A, \sigma_B$, covariance $\sigma_{AB} = \rho_{AB}\sigma_A\sigma_B$, and correlation $\rho_{AB}$. The real-valued coefficients $a$ and $b$ are assumed exactly known (deterministic), i.e., $\sigma_a = \sigma_b = 0$.

In the right-hand columns of the table, $A$ and $B$ are expectation values, and $f$ is the value of the function calculated at those values.

Function | Variance | Standard deviation
$f = aA$ | $\sigma_f^2 = a^2\sigma_A^2$ | $\sigma_f = |a|\sigma_A$
$f = A + B$ | $\sigma_f^2 = \sigma_A^2 + \sigma_B^2 + 2\sigma_{AB}$ | $\sigma_f = \sqrt{\sigma_A^2 + \sigma_B^2 + 2\sigma_{AB}}$
$f = A - B$ | $\sigma_f^2 = \sigma_A^2 + \sigma_B^2 - 2\sigma_{AB}$ | $\sigma_f = \sqrt{\sigma_A^2 + \sigma_B^2 - 2\sigma_{AB}}$
$f = aA + bB$ | $\sigma_f^2 = a^2\sigma_A^2 + b^2\sigma_B^2 + 2ab\,\sigma_{AB}$ | $\sigma_f = \sqrt{a^2\sigma_A^2 + b^2\sigma_B^2 + 2ab\,\sigma_{AB}}$
$f = aA - bB$ | $\sigma_f^2 = a^2\sigma_A^2 + b^2\sigma_B^2 - 2ab\,\sigma_{AB}$ | $\sigma_f = \sqrt{a^2\sigma_A^2 + b^2\sigma_B^2 - 2ab\,\sigma_{AB}}$
$f = AB$ | $\sigma_f^2 \approx f^2\left[\left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2 + 2\frac{\sigma_{AB}}{AB}\right]$ [15][16] | $\sigma_f \approx |f|\sqrt{\left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2 + 2\frac{\sigma_{AB}}{AB}}$
$f = \frac{A}{B}$ | $\sigma_f^2 \approx f^2\left[\left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2 - 2\frac{\sigma_{AB}}{AB}\right]$ [17] | $\sigma_f \approx |f|\sqrt{\left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2 - 2\frac{\sigma_{AB}}{AB}}$
$f = \frac{A}{A+B}$ | $\sigma_f^2 \approx \frac{f^2}{(A+B)^2}\left[\frac{B^2}{A^2}\sigma_A^2 + \sigma_B^2 - 2\frac{B}{A}\sigma_{AB}\right]$ | $\sigma_f \approx \left|\frac{f}{A+B}\right|\sqrt{\frac{B^2}{A^2}\sigma_A^2 + \sigma_B^2 - 2\frac{B}{A}\sigma_{AB}}$
$f = aA^b$ | $\sigma_f^2 \approx \left(a b A^{b-1}\sigma_A\right)^2 = \left(\frac{f b \sigma_A}{A}\right)^2$ | $\sigma_f \approx \left|a b A^{b-1}\sigma_A\right| = \left|\frac{f b \sigma_A}{A}\right|$
$f = a\ln(bA)$ | $\sigma_f^2 \approx \left(a\frac{\sigma_A}{A}\right)^2$ [18] | $\sigma_f \approx \left|a\frac{\sigma_A}{A}\right|$
$f = a\log_{10}(bA)$ | $\sigma_f^2 \approx \left(a\frac{\sigma_A}{A\ln 10}\right)^2$ [18] | $\sigma_f \approx \left|a\frac{\sigma_A}{A\ln 10}\right|$
$f = ae^{bA}$ | $\sigma_f^2 \approx f^2(b\sigma_A)^2$ [19] | $\sigma_f \approx |f|\,|b\sigma_A|$
$f = a^{bA}$ | $\sigma_f^2 \approx f^2(b\ln a\,\sigma_A)^2$ | $\sigma_f \approx |f|\,|b\ln a\,\sigma_A|$
$f = a\sin(bA)$ | $\sigma_f^2 \approx (ab\cos(bA)\,\sigma_A)^2$ | $\sigma_f \approx |ab\cos(bA)\,\sigma_A|$
$f = a\cos(bA)$ | $\sigma_f^2 \approx (ab\sin(bA)\,\sigma_A)^2$ | $\sigma_f \approx |ab\sin(bA)\,\sigma_A|$
$f = a\tan(bA)$ | $\sigma_f^2 \approx (ab\sec^2(bA)\,\sigma_A)^2$ | $\sigma_f \approx |ab\sec^2(bA)\,\sigma_A|$
$f = A^B$ | $\sigma_f^2 \approx f^2\left[\left(\frac{B}{A}\sigma_A\right)^2 + (\ln A\,\sigma_B)^2 + 2\frac{B\ln A}{A}\sigma_{AB}\right]$ | $\sigma_f \approx |f|\sqrt{\left(\frac{B}{A}\sigma_A\right)^2 + (\ln A\,\sigma_B)^2 + 2\frac{B\ln A}{A}\sigma_{AB}}$
$f = \sqrt{aA^2 \pm bB^2}$ | $\sigma_f^2 \approx \left(\frac{A}{f}\right)^2 a^2\sigma_A^2 + \left(\frac{B}{f}\right)^2 b^2\sigma_B^2 \pm 2ab\frac{AB}{f^2}\sigma_{AB}$ | $\sigma_f \approx \sqrt{\left(\frac{A}{f}\right)^2 a^2\sigma_A^2 + \left(\frac{B}{f}\right)^2 b^2\sigma_B^2 \pm 2ab\frac{AB}{f^2}\sigma_{AB}}$

For uncorrelated variables ($\rho_{AB} = 0$, $\sigma_{AB} = 0$) expressions for more complicated functions can be derived by combining simpler functions. For example, repeated multiplication, assuming no correlation, gives

$$f = ABC; \qquad \left(\frac{\sigma_f}{f}\right)^2 \approx \left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2 + \left(\frac{\sigma_C}{C}\right)^2.$$

For the case $f = AB$ we also have Goodman's expression[7] for the exact variance: for the uncorrelated case it is

$$V(XY) = E(X)^2 V(Y) + E(Y)^2 V(X) + E\big((X - E(X))^2 (Y - E(Y))^2\big),$$

and therefore we have

$$\sigma_f^2 = A^2\sigma_B^2 + B^2\sigma_A^2 + \sigma_A^2\sigma_B^2.$$
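The difference between the first-order approximation and Goodman's exact result for an uncorrelated product is just the cross term $\sigma_A^2\sigma_B^2$, which can be made explicit in a few lines (the means and standard deviations below are illustrative assumptions):

```python
# Assumed uncorrelated means and standard deviations (illustrative only).
A, sigma_A = 4.0, 0.5
B, sigma_B = 3.0, 0.4

# First-order (linearised) variance of the product f = AB.
var_approx = A**2 * sigma_B**2 + B**2 * sigma_A**2

# Goodman's exact variance for uncorrelated factors adds the cross term.
var_exact = var_approx + sigma_A**2 * sigma_B**2
```

When the relative errors are small, the cross term is small compared with the leading terms (here 0.04 against 4.81), which is why the linearised formula is usually adequate.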

Effect of correlation on differences

If A and B are uncorrelated, their difference A − B will have more variance than either of them. An increasing positive correlation ($\rho_{AB} \to 1$) will decrease the variance of the difference, converging to zero variance for perfectly correlated variables with the same variance. On the other hand, a negative correlation ($\rho_{AB} \to -1$) will further increase the variance of the difference, compared to the uncorrelated case.

For example, the self-subtraction f = A − A has zero variance $\sigma_f^2 = 0$ only if the variate is perfectly autocorrelated ($\rho_A = 1$). If A is uncorrelated, $\rho_A = 0$, then the output variance is twice the input variance, $\sigma_f^2 = 2\sigma_A^2$. And if A is perfectly anticorrelated, $\rho_A = -1$, then the input variance is quadrupled in the output, $\sigma_f^2 = 4\sigma_A^2$ (notice $1 - \rho_A = 2$ for f = aA − aA in the table above).
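The three regimes above follow directly from the difference formula $\sigma_f^2 = \sigma_A^2 + \sigma_B^2 - 2\rho_{AB}\sigma_A\sigma_B$; a minimal sketch (the helper function and the equal standard deviation are illustrative):

```python
def diff_variance(sigma_A, sigma_B, rho):
    """Variance of A - B given the standard deviations and correlation rho."""
    return sigma_A**2 + sigma_B**2 - 2 * rho * sigma_A * sigma_B

sigma = 1.0  # assumed equal standard deviations (illustrative)
v_uncorr = diff_variance(sigma, sigma, 0.0)    # uncorrelated: 2 sigma^2
v_perfect = diff_variance(sigma, sigma, 1.0)   # perfectly correlated: 0
v_anti = diff_variance(sigma, sigma, -1.0)     # anticorrelated: 4 sigma^2
```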

Example calculations

Inverse tangent function

We can calculate the uncertainty propagation for the inverse tangent function as an example of using partial derivatives to propagate error.

Define

$$f(x) = \arctan(x),$$

where $\Delta_x$ is the absolute uncertainty on our measurement of x. The derivative of f(x) with respect to x is

$$\frac{df}{dx} = \frac{1}{1 + x^2}.$$

Therefore, our propagated uncertainty is

$$\Delta_f \approx \frac{\Delta_x}{1 + x^2},$$

where $\Delta_f$ is the absolute propagated uncertainty.
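The derivative-based result can be cross-checked against direct sampling. A minimal sketch, using an assumed measurement $x = 1.0 \pm 0.05$ (illustrative values):

```python
import math
import random

x, dx = 1.0, 0.05  # assumed measurement and absolute uncertainty (illustrative)

# First-order propagated uncertainty: df = dx / (1 + x^2).
df = dx / (1 + x**2)

# Monte Carlo cross-check: sample x, apply arctan, inspect the spread.
random.seed(0)
samples = [math.atan(random.gauss(x, dx)) for _ in range(100_000)]
mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean)**2 for s in samples) / (len(samples) - 1))
```

For this small relative uncertainty, the sampled standard deviation agrees with the first-order value `df` to well within a percent.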

Resistance measurement

A practical application is an experiment in which one measures current, I, and voltage, V, on a resistor in order to determine the resistance, R, using Ohm's law, R = V / I.

Given the measured variables with uncertainties, I ± σI and V ± σV, and neglecting their possible correlation, the uncertainty in the computed quantity, σR, is:

$$\sigma_R \approx \sqrt{\sigma_V^2 \left(\frac{1}{I}\right)^2 + \sigma_I^2 \left(\frac{V}{I^2}\right)^2} = R\sqrt{\left(\frac{\sigma_V}{V}\right)^2 + \left(\frac{\sigma_I}{I}\right)^2}.$$
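Plugging assumed numbers into this formula is straightforward; the measurement values below are illustrative, not from any particular experiment:

```python
import math

# Assumed measurements (illustrative): V = 12.0 +/- 0.1 V, I = 2.0 +/- 0.02 A.
V, sigma_V = 12.0, 0.1
I, sigma_I = 2.0, 0.02

R = V / I
# Propagated uncertainty, neglecting any correlation between V and I.
sigma_R = R * math.sqrt((sigma_V / V)**2 + (sigma_I / I)**2)
```

The relative uncertainties (about 0.83% on V and 1% on I) combine in quadrature, giving roughly a 1.3% uncertainty on R.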

See also

  • Accuracy and precision
  • Automatic differentiation
  • Bienaymé's identity
  • Delta method
  • Dilution of precision (navigation)
  • Errors and residuals in statistics
  • Experimental uncertainty analysis
  • Interval finite element
  • Measurement uncertainty
  • Numerical stability

References

  1. ^ Kirchner, James. "Data Analysis Toolkit #5: Uncertainty Analysis and Error Propagation" (PDF). Berkeley Seismology Laboratory. University of California. Retrieved 22 April 2016.
  2. ^ Kroese, D. P.; Taimre, T.; Botev, Z. I. (2011). Handbook of Monte Carlo Methods. John Wiley & Sons.
  3. ^ Ranftl, Sascha; von der Linden, Wolfgang (2021-11-13). "Bayesian Surrogate Analysis and Uncertainty Propagation". Physical Sciences Forum. 3 (1): 6. arXiv:2101.04038. doi:10.3390/psf2021003006. ISSN 2673-9984.
  4. ^ Atanassova, E.; Gurov, T.; Karaivanova, A.; Ivanovska, S.; Durchova, M.; Dimitrov, D. (2016). "On the parallelization approaches for Intel MIC architecture". AIP Conference Proceedings. 1773 (1): 070001. Bibcode:2016AIPC.1773g0001A. doi:10.1063/1.4964983.
  5. ^ Cunha Jr, A.; Nasser, R.; Sampaio, R.; Lopes, H.; Breitman, K. (2014). "Uncertainty quantification through the Monte Carlo method in a cloud computing setting". Computer Physics Communications. 185 (5): 1355–1363. arXiv:2105.09512. Bibcode:2014CoPhC.185.1355C. doi:10.1016/j.cpc.2014.01.006. S2CID 32376269.
  6. ^ Lin, Y.; Wang, F.; Liu, B. (2018). "Random number generators for large-scale parallel Monte Carlo simulations on FPGA". Journal of Computational Physics. 360: 93–103. Bibcode:2018JCoPh.360...93L. doi:10.1016/j.jcp.2018.01.029.
  7. ^ a b Goodman, Leo (1960). "On the Exact Variance of Products". Journal of the American Statistical Association. 55 (292): 708–713. doi:10.2307/2281592. JSTOR 2281592.
  8. ^ Ochoa, Benjamin; Belongie, Serge. "Covariance Propagation for Guided Matching". Archived 2011-07-20 at the Wayback Machine.
  9. ^ Ku, H. H. (October 1966). "Notes on the use of propagation of error formulas". Journal of Research of the National Bureau of Standards. 70C (4): 262. doi:10.6028/jres.070c.025. ISSN 0022-4316. Retrieved 3 October 2012.
  10. ^ Clifford, A. A. (1973). Multivariate error analysis: a handbook of error propagation and calculation in many-parameter systems. John Wiley & Sons. ISBN 978-0470160558.[page needed]
  11. ^ Soch, Joram (2020-07-07). "Variance of the linear combination of two random variables". The Book of Statistical Proofs. Retrieved 2022-01-29.
  12. ^ Lee, S. H.; Chen, W. (2009). "A comparative study of uncertainty propagation methods for black-box-type problems". Structural and Multidisciplinary Optimization. 37 (3): 239–253. doi:10.1007/s00158-008-0234-7. S2CID 119988015.
  13. ^ Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1994). Continuous Univariate Distributions, Volume 1. Wiley. p. 171. ISBN 0-471-58495-9.
  14. ^ Lecomte, Christophe (May 2013). "Exact statistics of systems with uncertainties: an analytical theory of rank-one stochastic dynamic systems". Journal of Sound and Vibration. 332 (11): 2750–2776. doi:10.1016/j.jsv.2012.12.009.
  15. ^ (PDF). p. 2. Archived from the original (PDF) on 2016-12-13. Retrieved 2016-04-04.
  16. ^ "Propagation of Uncertainty through Mathematical Operations" (PDF). p. 5. Retrieved 2016-04-04.
  17. ^ "Strategies for Variance Estimation" (PDF). p. 37. Retrieved 2013-01-18.
  18. ^ a b Harris, Daniel C. (2003), Quantitative chemical analysis (6th ed.), Macmillan, p. 56, ISBN 978-0-7167-4464-1
  19. ^ "Error Propagation tutorial" (PDF). Foothill College. October 9, 2009. Retrieved 2012-03-01.

Further reading

  • Bevington, Philip R.; Robinson, D. Keith (2002), Data Reduction and Error Analysis for the Physical Sciences (3rd ed.), McGraw-Hill, ISBN 978-0-07-119926-1
  • Fornasini, Paolo (2008), The uncertainty in physical measurements: an introduction to data analysis in the physics laboratory, Springer, p. 161, ISBN 978-0-387-78649-0
  • Meyer, Stuart L. (1975), Data Analysis for Scientists and Engineers, Wiley, ISBN 978-0-471-59995-1
  • Peralta, M. (2012), Propagation Of Errors: How To Mathematically Predict Measurement Errors, CreateSpace
  • Rouaud, M. (2013), Probability, Statistics and Estimation: Propagation of Uncertainties in Experimental Measurement (PDF) (short ed.)
  • Taylor, J. R. (1997), An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements (2nd ed.), University Science Books
  • Wang, C. M.; Iyer, Hari K. (2005-09-07). "On higher-order corrections for propagating uncertainties". Metrologia. 42 (5): 406–410. doi:10.1088/0026-1394/42/5/011. ISSN 0026-1394. S2CID 122841691.

External links

  • A detailed discussion of measurements and the propagation of uncertainty explaining the benefits of using error propagation formulas and Monte Carlo simulations instead of simple significance arithmetic
  • GUM, Guide to the Expression of Uncertainty in Measurement
  • EPFL An Introduction to Error Propagation, Derivation, Meaning and Examples of Cy = Fx Cx Fx'
  • uncertainties package, a program/library for transparently performing calculations with uncertainties (and error correlations).
  • soerp package, a Python program/library for transparently performing *second-order* calculations with uncertainties (and error correlations).
  • Joint Committee for Guides in Metrology (2011). JCGM 102: Evaluation of Measurement Data - Supplement 2 to the "Guide to the Expression of Uncertainty in Measurement" - Extension to Any Number of Output Quantities (PDF) (Technical report). JCGM. Retrieved 13 February 2013.
  • Uncertainty Calculator Propagate uncertainty for any expression

propagation, uncertainty, propagation, uncertainty, through, time, chaos, theory, sensitivity, initial, conditions, statistics, propagation, uncertainty, propagation, error, effect, variables, uncertainties, errors, more, specifically, random, errors, uncertai. For the propagation of uncertainty through time see Chaos theory Sensitivity to initial conditions In statistics propagation of uncertainty or propagation of error is the effect of variables uncertainties or errors more specifically random errors on the uncertainty of a function based on them When the variables are the values of experimental measurements they have uncertainties due to measurement limitations e g instrument precision which propagate due to the combination of variables in the function The uncertainty u can be expressed in a number of ways It may be defined by the absolute error Dx Uncertainties can also be defined by the relative error Dx x which is usually written as a percentage Most commonly the uncertainty on a quantity is quantified in terms of the standard deviation s which is the positive square root of the variance The value of a quantity and its error are then expressed as an interval x u However the most general way of characterizing uncertainty is by specifying its probability distribution If the probability distribution of the variable is known or can be assumed in theory it is possible to get any of its statistics In particular it is possible to derive confidence limits to describe the region within which the true value of the variable may be found For example the 68 confidence limits for a one dimensional variable belonging to a normal distribution are approximately one standard deviation s from the central value x which means that the region x s will cover the true value in roughly 68 of cases If the uncertainties are correlated then covariance must be taken into account Correlation can arise from two different sources First the measurement errors may be correlated Second when 
the underlying values are correlated across a population the uncertainties in the group averages will be correlated 1 In a general context where a nonlinear function modifies the uncertain parameters correlated or not the standard tools to propagate uncertainty and infer resulting quantity probability distribution statistics are sampling techniques from the Monte Carlo method family 2 For very expansive data or complex functions the calculation of the error propagation may be very expansive so that a surrogate model 3 or a parallel computing strategy 4 5 6 may be necessary In some particular cases the uncertainty propagation calculation can be done through simplistic algebraic procedures Some of these scenarios are described below Contents 1 Linear combinations 2 Non linear combinations 2 1 Simplification 2 2 Example 2 3 Caveats and warnings 2 3 1 Reciprocal and shifted reciprocal 2 3 2 Ratios 3 Example formulae 3 1 Effect of correlation on differences 4 Example calculations 4 1 Inverse tangent function 4 2 Resistance measurement 5 See also 6 References 7 Further reading 8 External linksLinear combinations editLet f k x 1 x 2 x n displaystyle f k x 1 x 2 dots x n nbsp be a set of m functions which are linear combinations of n displaystyle n nbsp variables x 1 x 2 x n displaystyle x 1 x 2 dots x n nbsp with combination coefficients A k 1 A k 2 A k n k 1 m displaystyle A k1 A k2 dots A kn k 1 dots m nbsp f k i 1 n A k i x i displaystyle f k sum i 1 n A ki x i nbsp or in matrix notation f A x displaystyle mathbf f mathbf A mathbf x nbsp Also let the variance covariance matrix of x x1 xn be denoted by S x displaystyle boldsymbol Sigma x nbsp and let the mean value be denoted by m displaystyle boldsymbol mu nbsp S x E x m x m s 1 2 s 12 s 13 s 21 s 2 2 s 23 s 31 s 32 s 3 2 S 11 x S 12 x S 13 x S 21 x S 22 x S 23 x S 31 x S 32 x S 33 x displaystyle boldsymbol Sigma x E mathbf x boldsymbol mu otimes mathbf x boldsymbol mu begin pmatrix sigma 1 2 amp sigma 12 amp sigma 13 
amp cdots sigma 21 amp sigma 2 2 amp sigma 23 amp cdots sigma 31 amp sigma 32 amp sigma 3 2 amp cdots vdots amp vdots amp vdots amp ddots end pmatrix begin pmatrix Sigma 11 x amp Sigma 12 x amp Sigma 13 x amp cdots Sigma 21 x amp Sigma 22 x amp Sigma 23 x amp cdots Sigma 31 x amp Sigma 32 x amp Sigma 33 x amp cdots vdots amp vdots amp vdots amp ddots end pmatrix nbsp displaystyle otimes nbsp is the outer product Then the variance covariance matrix S f displaystyle boldsymbol Sigma f nbsp of f is given byS f E f E f f E f E A x m A x m A E x m x m A T A S x A T displaystyle boldsymbol Sigma f E mathbf f E mathbf f otimes mathbf f E mathbf f E mathbf A mathbf x boldsymbol mu otimes mathbf A mathbf x boldsymbol mu mathbf A E mathbf x boldsymbol mu otimes mathbf x boldsymbol mu mathbf A mathrm T mathbf A boldsymbol Sigma x mathbf A mathrm T nbsp In component notation the equationS f A S x A T displaystyle boldsymbol Sigma f mathbf A boldsymbol Sigma x mathbf A mathrm T nbsp reads S i j f k n l n A i k S k l x A j l displaystyle Sigma ij f sum k n sum l n A ik Sigma kl x A jl nbsp This is the most general expression for the propagation of error from one set of variables onto another When the errors on x are uncorrelated the general expression simplifies toS i j f k n A i k S k x A j k displaystyle Sigma ij f sum k n A ik Sigma k x A jk nbsp where S k x s x k 2 displaystyle Sigma k x sigma x k 2 nbsp is the variance of k th element of the x vector Note that even though the errors on x may be uncorrelated the errors on f are in general correlated in other words even if S x displaystyle boldsymbol Sigma x nbsp is a diagonal matrix S f displaystyle boldsymbol Sigma f nbsp is in general a full matrix The general expressions for a scalar valued function f are a little simpler here a is a row vector f i n a i x i a x displaystyle f sum i n a i x i mathbf ax nbsp s f 2 i n j n a i S i j x a j a S x a T displaystyle sigma f 2 sum i n sum j n a i Sigma ij x a j mathbf a 
boldsymbol Sigma x mathbf a mathrm T nbsp Each covariance term s i j displaystyle sigma ij nbsp can be expressed in terms of the correlation coefficient r i j displaystyle rho ij nbsp by s i j r i j s i s j displaystyle sigma ij rho ij sigma i sigma j nbsp so that an alternative expression for the variance of f iss f 2 i n a i 2 s i 2 i n j j i n a i a j r i j s i s j displaystyle sigma f 2 sum i n a i 2 sigma i 2 sum i n sum j j neq i n a i a j rho ij sigma i sigma j nbsp In the case that the variables in x are uncorrelated this simplifies further tos f 2 i n a i 2 s i 2 displaystyle sigma f 2 sum i n a i 2 sigma i 2 nbsp In the simple case of identical coefficients and variances we finds f n a s displaystyle sigma f sqrt n a sigma nbsp For the arithmetic mean a 1 n displaystyle a 1 n nbsp the result is the standard error of the mean s f s n displaystyle sigma f frac sigma sqrt n nbsp Non linear combinations editSee also Taylor expansions for the moments of functions of random variables When f is a set of non linear combination of the variables x an interval propagation could be performed in order to compute intervals which contain all consistent values for the variables In a probabilistic approach the function f must usually be linearised by approximation to a first order Taylor series expansion though in some cases exact formulae can be derived that do not depend on the expansion as is the case for the exact variance of products 7 The Taylor expansion would be f k f k 0 i n f k x i x i displaystyle f k approx f k 0 sum i n frac partial f k partial x i x i nbsp where f k x i displaystyle partial f k partial x i nbsp denotes the partial derivative of fk with respect to the i th variable evaluated at the mean value of all components of vector x Or in matrix notation f f 0 J x displaystyle mathrm f approx mathrm f 0 mathrm J mathrm x nbsp where J is the Jacobian matrix Since f0 is a constant it does not contribute to the error on f Therefore the propagation of error 
follows the linear case above but replacing the linear coefficients Aki and Akj by the partial derivatives f k x i displaystyle frac partial f k partial x i nbsp and f k x j displaystyle frac partial f k partial x j nbsp In matrix notation 8 S f J S x J displaystyle mathrm Sigma mathrm f mathrm J mathrm Sigma mathrm x mathrm J top nbsp That is the Jacobian of the function is used to transform the rows and columns of the variance covariance matrix of the argument Note this is equivalent to the matrix expression for the linear case with J A displaystyle mathrm J A nbsp Simplification edit Neglecting correlations or assuming independent variables yields a common formula among engineers and experimental scientists to calculate error propagation the variance formula 9 s f f x 2 s x 2 f y 2 s y 2 f z 2 s z 2 displaystyle s f sqrt left frac partial f partial x right 2 s x 2 left frac partial f partial y right 2 s y 2 left frac partial f partial z right 2 s z 2 cdots nbsp where s f displaystyle s f nbsp represents the standard deviation of the function f displaystyle f nbsp s x displaystyle s x nbsp represents the standard deviation of x displaystyle x nbsp s y displaystyle s y nbsp represents the standard deviation of y displaystyle y nbsp and so forth It is important to note that this formula is based on the linear characteristics of the gradient of f displaystyle f nbsp and therefore it is a good estimation for the standard deviation of f displaystyle f nbsp as long as s x s y s z displaystyle s x s y s z ldots nbsp are small enough Specifically the linear approximation of f displaystyle f nbsp has to be close to f displaystyle f nbsp inside a neighbourhood of radius s x s y s z displaystyle s x s y s z ldots nbsp 10 Example edit Any non linear differentiable function f a b displaystyle f a b nbsp of two variables a displaystyle a nbsp and b displaystyle b nbsp can be expanded asf f 0 f a a f b b displaystyle f approx f 0 frac partial f partial a a frac partial f 
partial b b nbsp If we take the variance on both sides and use the formula 11 for the variance of a linear combination of variables Var a X b Y a 2 Var X b 2 Var Y 2 a b Cov X Y displaystyle operatorname Var aX bY a 2 operatorname Var X b 2 operatorname Var Y 2ab operatorname Cov X Y nbsp then we obtain s f 2 f a 2 s a 2 f b 2 s b 2 2 f a f b s a b displaystyle sigma f 2 approx left frac partial f partial a right 2 sigma a 2 left frac partial f partial b right 2 sigma b 2 2 frac partial f partial a frac partial f partial b sigma ab nbsp where s f displaystyle sigma f nbsp is the standard deviation of the function f displaystyle f nbsp s a displaystyle sigma a nbsp is the standard deviation of a displaystyle a nbsp s b displaystyle sigma b nbsp is the standard deviation of b displaystyle b nbsp and s a b s a s b r a b displaystyle sigma ab sigma a sigma b rho ab nbsp is the covariance between a displaystyle a nbsp and b displaystyle b nbsp In the particular case that f a b displaystyle f ab nbsp f a b displaystyle frac partial f partial a b nbsp f b a displaystyle frac partial f partial b a nbsp Thens f 2 b 2 s a 2 a 2 s b 2 2 a b s a b displaystyle sigma f 2 approx b 2 sigma a 2 a 2 sigma b 2 2ab sigma ab nbsp or s f f 2 s a a 2 s b b 2 2 s a a s b b r a b displaystyle left frac sigma f f right 2 approx left frac sigma a a right 2 left frac sigma b b right 2 2 left frac sigma a a right left frac sigma b b right rho ab nbsp where r a b displaystyle rho ab nbsp is the correlation between a displaystyle a nbsp and b displaystyle b nbsp When the variables a displaystyle a nbsp and b displaystyle b nbsp are uncorrelated r a b 0 displaystyle rho ab 0 nbsp Then s f f 2 s a a 2 s b b 2 displaystyle left frac sigma f f right 2 approx left frac sigma a a right 2 left frac sigma b b right 2 nbsp Caveats and warnings edit Error estimates for non linear functions are biased on account of using a truncated series expansion The extent of this bias depends on the nature of the 
function For example the bias on the error calculated for log 1 x increases as x increases since the expansion to x is a good approximation only when x is near zero For highly non linear functions there exist five categories of probabilistic approaches for uncertainty propagation 12 see Uncertainty quantification for details Reciprocal and shifted reciprocal edit Main article Reciprocal normal distribution In the special case of the inverse or reciprocal 1 B displaystyle 1 B nbsp where B N 0 1 displaystyle B N 0 1 nbsp follows a standard normal distribution the resulting distribution is a reciprocal standard normal distribution and there is no definable variance 13 However in the slightly more general case of a shifted reciprocal function 1 p B displaystyle 1 p B nbsp for B N m s displaystyle B N mu sigma nbsp following a general normal distribution then mean and variance statistics do exist in a principal value sense if the difference between the pole p displaystyle p nbsp and the mean m displaystyle mu nbsp is real valued 14 Ratios edit Main article Normal ratio distribution Ratios are also problematic normal approximations exist under certain conditions Example formulae editThis table shows the variances and standard deviations of simple functions of the real variables A B displaystyle A B nbsp with standard deviations s A s B displaystyle sigma A sigma B nbsp covariance s A B r A B s A s B displaystyle sigma AB rho AB sigma A sigma B nbsp and correlation r A B displaystyle rho AB nbsp The real valued coefficients a displaystyle a nbsp and b displaystyle b nbsp are assumed exactly known deterministic i e s a s b 0 displaystyle sigma a sigma b 0 nbsp In the right hand columns of the table A displaystyle A nbsp and B displaystyle B nbsp are expectation values and f displaystyle f nbsp is the value of the function calculated at those values Function Variance Standard deviation f a A displaystyle f aA nbsp s f 2 a 2 s A 2 displaystyle sigma f 2 a 2 sigma A 2 nbsp s 
f = aA: σ_f = |a| σ_A

f = A + B: σ_f² = σ_A² + σ_B² + 2σ_AB, so σ_f = √(σ_A² + σ_B² + 2σ_AB)

f = A − B: σ_f² = σ_A² + σ_B² − 2σ_AB, so σ_f = √(σ_A² + σ_B² − 2σ_AB)

f = aA + bB: σ_f² = a²σ_A² + b²σ_B² + 2ab σ_AB

f = aA − bB: σ_f² = a²σ_A² + b²σ_B² − 2ab σ_AB

f = AB: σ_f² ≈ f² [(σ_A/A)² + (σ_B/B)² + 2σ_AB/(AB)] [15][16]

f = A/B: σ_f² ≈ f² [(σ_A/A)² + (σ_B/B)² − 2σ_AB/(AB)] [17]

f = A/(A + B): σ_f² ≈ (f/(A + B))² [(B/A)² σ_A² + σ_B² − 2(B/A) σ_AB]

f = aA^b: σ_f² ≈ (a b A^(b−1) σ_A)² = (f b σ_A/A)²

f = a ln(bA): σ_f² ≈ (a σ_A/A)² [18]

f = a log₁₀(bA): σ_f² ≈ (a σ_A/(A ln 10))² [18]

f = a e^(bA): σ_f² ≈ f² (b σ_A)² [19]

f = a^(bA): σ_f² ≈ f² (b ln a · σ_A)²

f = a sin(bA): σ_f² ≈ (a b cos(bA) σ_A)²

f = a cos(bA): σ_f² ≈ (a b sin(bA) σ_A)²

f = a tan(bA): σ_f² ≈ (a b sec²(bA) σ_A)²

f = A^B: σ_f² ≈ f² [(B σ_A/A)² + (ln A · σ_B)² + 2 (B ln A/A) σ_AB]

f = √(aA² ± bB²): σ_f² ≈ (A/f)² a² σ_A² + (B/f)² b² σ_B² ± 2ab (AB/f²) σ_AB

In each case the standard deviation σ_f is the positive square root of the variance σ_f².

For uncorrelated variables (ρ_AB = 0, σ_AB = 0), expressions for more complicated functions can be derived by combining simpler functions. For example, repeated multiplication, assuming no correlation, gives

f = ABC: (σ_f/f)² ≈ (σ_A/A)² + (σ_B/B)² + (σ_C/C)²

For the case f = AB we also have Goodman's expression[7] for the exact variance: for the uncorrelated case it is

V(XY) = E(X)² V(Y) + E(Y)² V(X) + E([X − E(X)]² [Y − E(Y)]²)

and therefore we have

σ_f² = A² σ_B² + B² σ_A² + σ_A² σ_B²

Effect of correlation on differences

If A and B are uncorrelated, their difference A − B will have more variance than either of them. An increasing positive correlation (ρ_AB → 1) will decrease the variance of the difference, converging to zero variance for perfectly correlated variables with the same variance. On the other hand, a negative correlation (ρ_AB → −1) will further increase the variance of the difference, compared to the uncorrelated case.
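These formulas can be checked numerically. The sketch below uses NumPy to compare the closed-form variance of a difference A − B against sampled data for several correlations, and to check Goodman's exact product variance for independent variables; the means, standard deviations, and sample size are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
s_A = s_B = 1.0  # illustrative standard deviations

# Variance of the difference f = A - B for jointly normal A, B:
# closed form sigma_f^2 = s_A^2 + s_B^2 - 2*rho*s_A*s_B, checked by sampling.
for rho in (-1.0, 0.0, 1.0):
    cov = [[s_A**2, rho * s_A * s_B],
           [rho * s_A * s_B, s_B**2]]
    # check_valid="ignore": rho = +/-1 makes the covariance singular on purpose
    A, B = rng.multivariate_normal([10.0, 10.0], cov, 200_000,
                                   check_valid="ignore").T
    closed = s_A**2 + s_B**2 - 2 * rho * s_A * s_B
    print(f"rho = {rho:+.0f}: closed form {closed:.2f}, "
          f"sampled {np.var(A - B):.2f}")

# Goodman's exact variance for a product of independent variables:
# V(AB) = A^2 s_B^2 + B^2 s_A^2 + s_A^2 s_B^2, with means 10 and 5 here.
A = rng.normal(10.0, 1.0, 200_000)
B = rng.normal(5.0, 2.0, 200_000)
print(np.var(A * B), 10.0**2 * 2.0**2 + 5.0**2 * 1.0**2 + 1.0**2 * 2.0**2)
```

With these values the sampled variance of A − B converges to 4 for ρ = −1, 2 for ρ = 0, and 0 for ρ = 1, matching the discussion above.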
For example, the self-subtraction f = A − A has zero variance (σ_f² = 0) only if the variate is perfectly autocorrelated (ρ_A = 1). If A is uncorrelated (ρ_A = 0), then the output variance is twice the input variance, σ_f² = 2σ_A². And if A is perfectly anticorrelated (ρ_A = −1), then the input variance is quadrupled in the output, σ_f² = 4σ_A² (note that 1 − ρ_A = 2 for f = aA − aA in the table above).

Example calculations

Inverse tangent function

We can calculate the uncertainty propagation for the inverse tangent function as an example of using partial derivatives to propagate error. Define

f(x) = arctan(x),

and let Δx be the absolute uncertainty on our measurement of x. The derivative of f(x) with respect to x is

df/dx = 1/(1 + x²).

Therefore, our propagated uncertainty is

Δf ≈ Δx/(1 + x²),

where Δf is the absolute propagated uncertainty.

Resistance measurement

A practical application is an experiment in which one measures current, I, and voltage, V, on a resistor in order to determine the resistance, R, using Ohm's law, R = V/I. Given the measured variables with uncertainties, I ± σ_I and V ± σ_V, and neglecting their possible correlation, the uncertainty in the computed quantity, σ_R, is:

σ_R ≈ √(σ_V² (1/I)² + σ_I² (V/I²)²) = R √((σ_V/V)² + (σ_I/I)²)

See also

Accuracy and precision
Automatic differentiation
Bienaymé's identity
Delta method
Dilution of precision (navigation)
Errors and residuals in statistics
Experimental uncertainty analysis
Interval finite element
Measurement uncertainty
Numerical stability
Probability bounds analysis
Uncertainty quantification
Random fuzzy variable
Variance § Propagation

References

1. Kirchner, James. "Data Analysis Toolkit #5: Uncertainty Analysis and Error Propagation" (PDF). Berkeley Seismology Laboratory, University of California. Retrieved 22 April 2016.
2. Kroese, D. P.; Taimre, T.; Botev, Z. I. (2011). Handbook of Monte Carlo Methods. John Wiley & Sons.
3. Ranftl, Sascha; von der Linden, Wolfgang (2021-11-13). "Bayesian Surrogate Analysis and Uncertainty Propagation". Physical Sciences Forum. 3 (1): 6. arXiv:2101.04038. doi:10.3390/psf2021003006. ISSN 2673-9984.
4. Atanassova, E.; Gurov, T.; Karaivanova, A.; Ivanovska, S.; Durchova, M.; Dimitrov, D. (2016). "On the parallelization approaches for Intel MIC architecture". AIP Conference Proceedings. 1773 (1): 070001. Bibcode:2016AIPC.1773g0001A. doi:10.1063/1.4964983.
5. Cunha Jr., A.; Nasser, R.; Sampaio, R.; Lopes, H.; Breitman, K. (2014). "Uncertainty quantification through the Monte Carlo method in a cloud computing setting". Computer Physics Communications. 185 (5): 1355–1363. arXiv:2105.09512. Bibcode:2014CoPhC.185.1355C. doi:10.1016/j.cpc.2014.01.006. S2CID 32376269.
6. Lin, Y.; Wang, F.; Liu, B. (2018). "Random number generators for large-scale parallel Monte Carlo simulations on FPGA". Journal of Computational Physics. 360: 93–103. Bibcode:2018JCoPh.360...93L. doi:10.1016/j.jcp.2018.01.029.
7. Goodman, Leo (1960). "On the Exact Variance of Products". Journal of the American Statistical Association. 55 (292): 708–713. doi:10.2307/2281592. JSTOR 2281592.
8. Ochoa, Benjamin; Belongie, Serge. "Covariance Propagation for Guided Matching". Archived 2011-07-20 at the Wayback Machine.
9. Ku, H. H. (October 1966). "Notes on the use of propagation of error formulas". Journal of Research of the National Bureau of Standards. 70C (4): 262. doi:10.6028/jres.070c.025. ISSN 0022-4316. Retrieved 3 October 2012.
10. Clifford, A. A. (1973). Multivariate Error Analysis: A Handbook of Error Propagation and Calculation in Many-Parameter Systems. John Wiley & Sons. ISBN 978-0470160558.
11. Soch, Joram (2020-07-07). "Variance of the linear combination of two random variables". The Book of Statistical Proofs. Retrieved 2022-01-29.
12. Lee, S. H.; Chen, W. (2009). "A comparative study of uncertainty propagation methods for black-box-type problems". Structural and Multidisciplinary Optimization. 37 (3): 239–253. doi:10.1007/s00158-008-0234-7. S2CID 119988015.
13. Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1994). Continuous Univariate Distributions, Volume 1. Wiley. p. 171. ISBN 0-471-58495-9.
14. Lecomte, Christophe (May 2013). "Exact statistics of systems with uncertainties: an analytical theory of rank-one stochastic dynamic systems". Journal of Sound and Vibration. 332 (11): 2750–2776. doi:10.1016/j.jsv.2012.12.009.
15. "A Summary of Error Propagation" (PDF). p. 2. Archived from the original (PDF) on 2016-12-13. Retrieved 2016-04-04.
16. "Propagation of Uncertainty through Mathematical Operations" (PDF). p. 5. Retrieved 2016-04-04.
17. "Strategies for Variance Estimation" (PDF). p. 37. Retrieved 2013-01-18.
18. Harris, Daniel C. (2003). Quantitative Chemical Analysis (6th ed.). Macmillan. p. 56. ISBN 978-0-7167-4464-1.
19. "Error Propagation tutorial" (PDF). Foothill College. October 9, 2009. Retrieved 2012-03-01.

Further reading

Bevington, Philip R.; Robinson, D. Keith (2002). Data Reduction and Error Analysis for the Physical Sciences (3rd ed.). McGraw-Hill. ISBN 978-0-07-119926-1.
Fornasini, Paolo (2008). The Uncertainty in Physical Measurements: An Introduction to Data Analysis in the Physics Laboratory. Springer. p. 161. ISBN 978-0-387-78649-0.
Meyer, Stuart L. (1975). Data Analysis for Scientists and Engineers. Wiley. ISBN 978-0-471-59995-1.
Peralta, M. (2012). Propagation of Errors: How to Mathematically Predict Measurement Errors. CreateSpace.
Rouaud, M. (2013). Probability, Statistics and Estimation: Propagation of Uncertainties in Experimental Measurement (PDF) (short ed.).
Taylor, J. R. (1997). An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements (2nd ed.). University Science Books.
Wang, C. M.; Iyer, Hari K. (2005-09-07). "On higher-order corrections for propagating uncertainties". Metrologia. 42 (5): 406–410. doi:10.1088/0026-1394/42/5/011. ISSN 0026-1394. S2CID 122841691.
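The two worked examples above (the arctangent derivative rule and the resistance measurement) each reduce to a one-line computation. A minimal sketch in Python; the measured values and uncertainties are illustrative assumptions, not values from the text:

```python
import math

# Inverse tangent: propagated uncertainty Delta_f ~ Delta_x / (1 + x^2).
x, dx = 2.0, 0.01            # illustrative measurement and its uncertainty
df = dx / (1 + x**2)         # 0.01 / 5 = 0.002

# Ohm's law R = V/I: sigma_R ~ R * sqrt((sigma_V/V)^2 + (sigma_I/I)^2),
# neglecting any correlation between V and I.
V, sV = 12.0, 0.1            # volts
I, sI = 2.0, 0.05            # amperes
R = V / I
sR = R * math.sqrt((sV / V) ** 2 + (sI / I) ** 2)

print(f"arctan: f = {math.atan(x):.4f} ± {df:.4f} rad")
print(f"R = {R:.2f} ± {sR:.3f} ohm")
```

With these illustrative numbers the relative uncertainty in I (2.5%) dominates that in V (0.83%), so σ_R is driven almost entirely by the current measurement.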
External links

A detailed discussion of measurements and the propagation of uncertainty, explaining the benefits of using error propagation formulas and Monte Carlo simulations instead of simple significance arithmetic
GUM, Guide to the Expression of Uncertainty in Measurement
EPFL, An Introduction to Error Propagation: Derivation, Meaning and Examples of C_Y = F_X C_X F_Xᵀ
uncertainties package, a program library for transparently performing calculations with uncertainties (and error correlations)
soerp package, a Python program library for transparently performing second-order calculations with uncertainties (and error correlations)
Joint Committee for Guides in Metrology (2011). JCGM 102: Evaluation of Measurement Data – Supplement 2 to the "Guide to the Expression of Uncertainty in Measurement" – Extension to Any Number of Output Quantities (PDF) (Technical report). JCGM. Retrieved 13 February 2013.
Uncertainty Calculator, which propagates uncertainty for any expression
