
Mean squared error

In statistics, the mean squared error (MSE)[1] or mean squared deviation (MSD) of an estimator (of a procedure for estimating an unobserved quantity) measures the average of the squares of the errors—that is, the average squared difference between the estimated values and the actual value. MSE is a risk function, corresponding to the expected value of the squared error loss.[2] The fact that MSE is almost always strictly positive (and not zero) is because of randomness or because the estimator does not account for information that could produce a more accurate estimate.[3] In machine learning, specifically empirical risk minimization, MSE may refer to the empirical risk (the average loss on an observed data set), as an estimate of the true MSE (the true risk: the average loss on the actual population distribution).

The MSE is a measure of the quality of an estimator. As it is derived from the square of Euclidean distance, it is always a positive value that decreases as the error approaches zero.

The MSE is the second moment (about the origin) of the error, and thus incorporates both the variance of the estimator (how widely spread the estimates are from one data sample to another) and its bias (how far off the average estimated value is from the true value).[citation needed] For an unbiased estimator, the MSE is the variance of the estimator. Like the variance, MSE has the same units of measurement as the square of the quantity being estimated. In an analogy to standard deviation, taking the square root of MSE yields the root-mean-square error or root-mean-square deviation (RMSE or RMSD), which has the same units as the quantity being estimated; for an unbiased estimator, the RMSE is the square root of the variance, known as the standard error.

Definition and basic properties

The MSE either assesses the quality of a predictor (i.e., a function mapping arbitrary inputs to a sample of values of some random variable), or of an estimator (i.e., a mathematical function mapping a sample of data to an estimate of a parameter of the population from which the data is sampled). In the context of prediction, understanding the prediction interval can also be useful as it provides a range within which a future observation will fall, with a certain probability. The definition of an MSE differs according to whether one is describing a predictor or an estimator.

Predictor

If a vector of $n$ predictions is generated from a sample of $n$ data points on all variables, and $Y$ is the vector of observed values of the variable being predicted, with $\hat{Y}$ being the predicted values (e.g. as from a least-squares fit), then the within-sample MSE of the predictor is computed as

$$\operatorname{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(Y_i-\hat{Y}_i\right)^2.$$

In other words, the MSE is the mean $\left(\frac{1}{n}\sum_{i=1}^{n}\right)$ of the squares of the errors $\left(Y_i-\hat{Y}_i\right)^2$. This is an easily computable quantity for a particular sample (and hence is sample-dependent).

In matrix notation,

$$\operatorname{MSE}=\frac{1}{n}\sum_{i=1}^{n}e_i^2=\frac{1}{n}\mathbf{e}^{\mathsf{T}}\mathbf{e}$$

where $e_i = Y_i-\hat{Y}_i$ and $\mathbf{e}$ is an $n\times 1$ column vector.
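As a concrete illustration, here is a minimal sketch in Python (assuming NumPy is available; the arrays `y` and `y_hat` are made-up values, not from any particular dataset) computing the within-sample MSE both as the mean of squared errors and via the matrix form $\frac{1}{n}\mathbf{e}^{\mathsf{T}}\mathbf{e}$:

```python
import numpy as np

# Observed values and predictions (e.g. from a least-squares fit);
# these numbers are purely illustrative.
y = np.array([3.0, -0.5, 2.0, 7.0])
y_hat = np.array([2.5, 0.0, 2.0, 8.0])

# MSE as the mean of the squared errors.
mse = np.mean((y - y_hat) ** 2)

# Equivalent matrix form: (1/n) * e^T e.
e = y - y_hat
mse_matrix = (e @ e) / len(e)

print(mse, mse_matrix)  # both print 0.375
```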

The MSE can also be computed on $q$ data points that were not used in estimating the model, either because they were held back for this purpose, or because these data have been newly obtained. Within this process, known as cross-validation, the MSE is often called the test MSE,[4] and is computed as

$$\operatorname{MSE}=\frac{1}{q}\sum_{i=n+1}^{n+q}\left(Y_i-\hat{Y}_i\right)^2.$$
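A hedged sketch of the held-back computation in Python (NumPy assumed; the split sizes and the use of `np.polyfit` for the least-squares fit are illustrative choices, not part of the definition):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: a noisy linear relationship.
x = rng.uniform(0, 10, size=60)
y = 2.0 * x + 1.0 + rng.normal(0, 1, size=60)

# Fit on the first n = 40 points; hold back q = 20 for testing.
n, q = 40, 20
coeffs = np.polyfit(x[:n], y[:n], deg=1)  # least-squares line

# Test MSE over the q held-out points.
y_hat = np.polyval(coeffs, x[n:n + q])
test_mse = np.mean((y[n:n + q] - y_hat) ** 2)
print(test_mse)
```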

Estimator

The MSE of an estimator $\hat{\theta}$ with respect to an unknown parameter $\theta$ is defined as[1]

$$\operatorname{MSE}(\hat{\theta})=\operatorname{E}_{\theta}\left[(\hat{\theta}-\theta)^2\right].$$

This definition depends on the unknown parameter, but the MSE is a priori a property of an estimator. The MSE could be a function of unknown parameters, in which case any estimator of the MSE based on estimates of these parameters would be a function of the data (and thus a random variable). If the estimator $\hat{\theta}$ is derived as a sample statistic and is used to estimate some population parameter, then the expectation is with respect to the sampling distribution of the sample statistic.

The MSE can be written as the sum of the variance of the estimator and the squared bias of the estimator, providing a useful way to calculate the MSE and implying that in the case of unbiased estimators, the MSE and variance are equivalent.[5]

$$\operatorname{MSE}(\hat{\theta})=\operatorname{Var}_{\theta}(\hat{\theta})+\operatorname{Bias}(\hat{\theta},\theta)^2$$

Proof of variance and bias relationship

$$\begin{aligned}
\operatorname{MSE}(\hat{\theta}) &= \operatorname{E}_{\theta}\left[(\hat{\theta}-\theta)^2\right] \\
&= \operatorname{E}_{\theta}\left[\left(\hat{\theta}-\operatorname{E}_{\theta}[\hat{\theta}]+\operatorname{E}_{\theta}[\hat{\theta}]-\theta\right)^2\right] \\
&= \operatorname{E}_{\theta}\left[\left(\hat{\theta}-\operatorname{E}_{\theta}[\hat{\theta}]\right)^2+2\left(\hat{\theta}-\operatorname{E}_{\theta}[\hat{\theta}]\right)\left(\operatorname{E}_{\theta}[\hat{\theta}]-\theta\right)+\left(\operatorname{E}_{\theta}[\hat{\theta}]-\theta\right)^2\right] \\
&= \operatorname{E}_{\theta}\left[\left(\hat{\theta}-\operatorname{E}_{\theta}[\hat{\theta}]\right)^2\right]+2\left(\operatorname{E}_{\theta}[\hat{\theta}]-\theta\right)\operatorname{E}_{\theta}\left[\hat{\theta}-\operatorname{E}_{\theta}[\hat{\theta}]\right]+\left(\operatorname{E}_{\theta}[\hat{\theta}]-\theta\right)^2 && \operatorname{E}_{\theta}[\hat{\theta}]-\theta=\text{const.} \\
&= \operatorname{E}_{\theta}\left[\left(\hat{\theta}-\operatorname{E}_{\theta}[\hat{\theta}]\right)^2\right]+2\left(\operatorname{E}_{\theta}[\hat{\theta}]-\theta\right)\left(\operatorname{E}_{\theta}[\hat{\theta}]-\operatorname{E}_{\theta}[\hat{\theta}]\right)+\left(\operatorname{E}_{\theta}[\hat{\theta}]-\theta\right)^2 && \operatorname{E}_{\theta}[\hat{\theta}]=\text{const.} \\
&= \operatorname{E}_{\theta}\left[\left(\hat{\theta}-\operatorname{E}_{\theta}[\hat{\theta}]\right)^2\right]+\left(\operatorname{E}_{\theta}[\hat{\theta}]-\theta\right)^2 \\
&= \operatorname{Var}_{\theta}(\hat{\theta})+\operatorname{Bias}_{\theta}(\hat{\theta},\theta)^2
\end{aligned}$$

An even shorter proof can be achieved using the well-known formula that for a random variable $X$, $\mathbb{E}[X^2]=\operatorname{Var}(X)+(\mathbb{E}[X])^2$. By substituting $X$ with $\hat{\theta}-\theta$, we have

$$\begin{aligned}
\operatorname{MSE}(\hat{\theta}) &= \mathbb{E}[(\hat{\theta}-\theta)^2] \\
&= \operatorname{Var}(\hat{\theta}-\theta)+(\mathbb{E}[\hat{\theta}-\theta])^2 \\
&= \operatorname{Var}(\hat{\theta})+\operatorname{Bias}^2(\hat{\theta},\theta)
\end{aligned}$$
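To make the decomposition concrete, here is a minimal Monte Carlo sketch in Python (NumPy assumed; the biased variance estimator with divisor $n$ is just a convenient example of an estimator with nonzero bias) checking that the simulated MSE matches variance plus squared bias:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 10, 200_000
sigma2 = 4.0  # true variance of the simulated population

# Biased variance estimator (divisor n) applied to many samples.
samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
est = samples.var(axis=1)  # ddof=0, i.e. divisor n

mse = np.mean((est - sigma2) ** 2)
var = est.var()
bias2 = (est.mean() - sigma2) ** 2

print(mse, var + bias2)  # agree up to floating-point error
```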
In a real modeling setting, the MSE can be described as the sum of model variance, model bias, and irreducible uncertainty (see Bias–variance tradeoff). By this relationship, the MSEs of estimators can be compared directly to judge their efficiency, since the MSE incorporates both estimator variance and bias. This is called the MSE criterion.

In regression

In regression analysis, plotting is a natural way to view the overall trend of the data. The mean of the squared distances from each point to the fitted regression model can be calculated and reported as the mean squared error. Squaring is essential: it removes the negative signs that would otherwise let errors cancel. Minimizing the MSE makes the model more accurate, in the sense that its predictions lie closer to the actual data. One example of linear regression using this criterion is the least squares method, which evaluates how appropriate a linear regression model is for a bivariate dataset,[6] though its limitations relate to the assumed distribution of the data.

The term mean squared error is sometimes used to refer to the unbiased estimate of error variance: the residual sum of squares divided by the number of degrees of freedom. This definition for a known, computed quantity differs from the above definition for the computed MSE of a predictor, in that a different denominator is used. The denominator is the sample size reduced by the number of model parameters estimated from the same data, (n−p) for p regressors or (n−p−1) if an intercept is used (see errors and residuals in statistics for more details).[7] Although the MSE (as defined in this article) is not an unbiased estimator of the error variance, it is consistent, given the consistency of the predictor.
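As a brief sketch of the distinction (Python with NumPy; the design matrix and data are fabricated for illustration), the unbiased error-variance estimate divides the residual sum of squares by n minus the number of estimated parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# Design matrix with an intercept column and one regressor,
# so p = 2 parameters are estimated in total.
x = rng.uniform(0, 5, size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 3.0 * x + rng.normal(0, 2.0, size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

p = X.shape[1]
mse_within = np.sum(resid ** 2) / n          # MSE as defined above
s2_unbiased = np.sum(resid ** 2) / (n - p)   # unbiased error-variance estimate

print(mse_within, s2_unbiased)
```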

In regression analysis, "mean squared error", often referred to as mean squared prediction error or "out-of-sample mean squared error", can also refer to the mean value of the squared deviations of the predictions from the true values, over an out-of-sample test space, generated by a model estimated over a particular sample space. This also is a known, computed quantity, and it varies by sample and by out-of-sample test space.

In the context of gradient descent algorithms, it is common to introduce a factor of $\tfrac{1}{2}$ to the MSE for ease of computation after taking the derivative. So a value which is technically half the mean of squared errors may be called the MSE.
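A short sketch of why the $\tfrac{1}{2}$ factor is convenient (Python with NumPy; the one-parameter linear model is illustrative): differentiating $\frac{1}{2n}\sum_i (y_i - wx_i)^2$ with respect to $w$ gives $-\frac{1}{n}\sum_i (y_i - wx_i)x_i$, with no stray factor of 2.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.1, 5.9])
w = 0.5  # current parameter value

# Half-MSE loss and its gradient with respect to w.
residual = y - w * x
half_mse = 0.5 * np.mean(residual ** 2)
grad = -np.mean(residual * x)  # the 1/2 cancels the 2 from the chain rule

# One plain gradient-descent step.
w_new = w - 0.1 * grad
print(half_mse, grad, w_new)
```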

Examples

Mean

Suppose we have a random sample of size $n$ from a population, $X_1,\dots,X_n$. Suppose the sample units were chosen with replacement. That is, the $n$ units are selected one at a time, and previously selected units are still eligible for selection for all $n$ draws. The usual estimator for the population mean $\mu$ is the sample average

$$\overline{X}=\frac{1}{n}\sum_{i=1}^{n}X_i$$

which has an expected value equal to the true mean $\mu$ (so it is unbiased) and a mean squared error of

$$\operatorname{MSE}\left(\overline{X}\right)=\operatorname{E}\left[\left(\overline{X}-\mu\right)^2\right]=\left(\frac{\sigma}{\sqrt{n}}\right)^2=\frac{\sigma^2}{n}$$

where $\sigma^2$ is the population variance.
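A quick Monte Carlo check of this formula (Python with NumPy; the population parameters and sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 25, 100_000
mu, sigma = 10.0, 2.0

# Sample means over many independent samples of size n.
means = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)

mse = np.mean((means - mu) ** 2)
print(mse, sigma**2 / n)  # both close to 0.16
```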

For a Gaussian distribution, this is the best unbiased estimator (i.e., one with the lowest MSE among all unbiased estimators), but not, say, for a uniform distribution.

Variance

The usual estimator for the variance is the corrected sample variance:

$$S_{n-1}^2=\frac{1}{n-1}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2=\frac{1}{n-1}\left(\sum_{i=1}^{n}X_i^2-n\overline{X}^2\right).$$

This is unbiased (its expected value is $\sigma^2$), hence also called the unbiased sample variance, and its MSE is[8]

$$\operatorname{MSE}(S_{n-1}^2)=\frac{1}{n}\left(\mu_4-\frac{n-3}{n-1}\sigma^4\right)=\frac{1}{n}\left(\gamma_2+\frac{2n}{n-1}\right)\sigma^4$$

where $\mu_4$ is the fourth central moment of the distribution or population, and $\gamma_2=\mu_4/\sigma^4-3$ is the excess kurtosis.
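As a sanity check, a minimal simulation in Python (NumPy assumed) comparing the simulated MSE of $S_{n-1}^2$ against this formula for a Gaussian population, where $\gamma_2=0$ and the expression reduces to $\frac{2}{n-1}\sigma^4$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials, sigma2 = 12, 200_000, 1.0

samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
s2 = samples.var(axis=1, ddof=1)  # corrected sample variance

mse_sim = np.mean((s2 - sigma2) ** 2)
gamma2 = 0.0  # excess kurtosis of the Gaussian
mse_formula = (gamma2 + 2 * n / (n - 1)) * sigma2**2 / n

print(mse_sim, mse_formula)  # both close to 2/11 ≈ 0.182
```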

However, one can use other estimators for $\sigma^2$ which are proportional to $S_{n-1}^2$, and an appropriate choice can always give a lower mean squared error. If we define

$$S_a^2=\frac{n-1}{a}S_{n-1}^2=\frac{1}{a}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2$$

then we calculate:

$$\begin{aligned}
\operatorname{MSE}(S_a^2) &= \operatorname{E}\left[\left(\frac{n-1}{a}S_{n-1}^2-\sigma^2\right)^2\right] \\
&= \operatorname{E}\left[\frac{(n-1)^2}{a^2}S_{n-1}^4-2\left(\frac{n-1}{a}S_{n-1}^2\right)\sigma^2+\sigma^4\right] \\
&= \frac{(n-1)^2}{a^2}\operatorname{E}\left[S_{n-1}^4\right]-2\left(\frac{n-1}{a}\right)\operatorname{E}\left[S_{n-1}^2\right]\sigma^2+\sigma^4 \\
&= \frac{(n-1)^2}{a^2}\operatorname{E}\left[S_{n-1}^4\right]-2\left(\frac{n-1}{a}\right)\sigma^4+\sigma^4 && \operatorname{E}\left[S_{n-1}^2\right]=\sigma^2 \\
&= \frac{(n-1)^2}{a^2}\left(\frac{\gamma_2}{n}+\frac{n+1}{n-1}\right)\sigma^4-2\left(\frac{n-1}{a}\right)\sigma^4+\sigma^4 && \operatorname{E}\left[S_{n-1}^4\right]=\operatorname{MSE}(S_{n-1}^2)+\sigma^4 \\
&= \frac{n-1}{na^2}\left((n-1)\gamma_2+n^2+n\right)\sigma^4-2\left(\frac{n-1}{a}\right)\sigma^4+\sigma^4
\end{aligned}$$

This is minimized when

$$a=\frac{(n-1)\gamma_2+n^2+n}{n}=n+1+\frac{n-1}{n}\gamma_2.$$

For a Gaussian distribution, where $\gamma_2=0$, this means that the MSE is minimized when dividing the sum by $a=n+1$. The minimum excess kurtosis is $\gamma_2=-2$,[a] which is achieved by a Bernoulli distribution with p = 1/2 (a coin flip), and the MSE is minimized for $a=n-1+\tfrac{2}{n}$. Hence regardless of the kurtosis, we get a "better" estimate (in the sense of having a lower MSE) by scaling down the unbiased estimator a little bit; this is a simple example of a shrinkage estimator: one "shrinks" the estimator towards zero (scales down the unbiased estimator).

Further, while the corrected sample variance is the best unbiased estimator (minimum mean squared error among unbiased estimators) of variance for Gaussian distributions, if the distribution is not Gaussian, then even among unbiased estimators, the best unbiased estimator of the variance may not be $S_{n-1}^2$.
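A hedged numerical illustration (Python with NumPy; sample size and trial count are arbitrary) comparing the divisors $n-1$, $n$, and $n+1$ on Gaussian data, where dividing by $n+1$ should give the lowest MSE:

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials, sigma2 = 10, 300_000, 1.0

samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
ss = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

for a in (n - 1, n, n + 1):
    mse = np.mean((ss / a - sigma2) ** 2)
    print(f"divisor {a}: MSE = {mse:.4f}")
# The divisor n + 1 yields the smallest MSE, as predicted above.
```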

Gaussian distribution

The following table gives several estimators of the true parameters of the population, μ and σ2, for the Gaussian case.[9]

  • True value $\theta=\mu$: the unbiased estimator of the population mean, $\hat{\theta}=\overline{X}=\frac{1}{n}\sum_{i=1}^{n}X_i$, with $\operatorname{MSE}(\overline{X})=\operatorname{E}\left[(\overline{X}-\mu)^2\right]=\left(\frac{\sigma}{\sqrt{n}}\right)^2$.
  • True value $\theta=\sigma^2$: the unbiased estimator of the population variance, $\hat{\theta}=S_{n-1}^2=\frac{1}{n-1}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2$, with $\operatorname{MSE}(S_{n-1}^2)=\operatorname{E}\left[(S_{n-1}^2-\sigma^2)^2\right]=\frac{2}{n-1}\sigma^4$.
  • True value $\theta=\sigma^2$: the biased estimator of the population variance, $\hat{\theta}=S_{n}^2=\frac{1}{n}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2$, with $\operatorname{MSE}(S_{n}^2)=\operatorname{E}\left[(S_{n}^2-\sigma^2)^2\right]=\frac{2n-1}{n^2}\sigma^4$.
  • True value $\theta=\sigma^2$: the biased estimator of the population variance, $\hat{\theta}=S_{n+1}^2=\frac{1}{n+1}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2$, with $\operatorname{MSE}(S_{n+1}^2)=\operatorname{E}\left[(S_{n+1}^2-\sigma^2)^2\right]=\frac{2}{n+1}\sigma^4$.

Interpretation

An MSE of zero, meaning that the estimator $\hat{\theta}$ predicts observations of the parameter $\theta$ with perfect accuracy, is ideal (but typically not possible).

Values of MSE may be used for comparative purposes. Two or more statistical models may be compared using their MSEs—as a measure of how well they explain a given set of observations: An unbiased estimator (estimated from a statistical model) with the smallest variance among all unbiased estimators is the best unbiased estimator or MVUE (Minimum-Variance Unbiased Estimator).

Both analysis of variance and linear regression techniques estimate the MSE as part of the analysis and use the estimated MSE to determine the statistical significance of the factors or predictors under study. The goal of experimental design is to construct experiments in such a way that when the observations are analyzed, the MSE is close to zero relative to the magnitude of at least one of the estimated treatment effects.

In one-way analysis of variance, the MSE can be calculated by dividing the sum of squared errors by its degrees of freedom. The F-value is the ratio of the mean squared treatment to the MSE.
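A minimal sketch of this calculation in Python (NumPy assumed; the three treatment groups are made-up data):

```python
import numpy as np

# Three treatment groups (illustrative data).
groups = [np.array([4.0, 5.0, 6.0]),
          np.array([6.0, 7.0, 8.0]),
          np.array([9.0, 10.0, 11.0])]

k = len(groups)                  # number of groups
N = sum(len(g) for g in groups)  # total observations
grand_mean = np.mean(np.concatenate(groups))

# Between-group (treatment) and within-group (error) sums of squares.
ss_treat = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_treat = ss_treat / (k - 1)  # mean squared treatment
mse = ss_error / (N - k)       # MSE: SSE over its degrees of freedom
f_value = ms_treat / mse

print(ms_treat, mse, f_value)  # 19.0, 1.0, 19.0 for these data
```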

MSE is also used in several stepwise regression techniques as part of the determination as to how many predictors from a candidate set to include in a model for a given set of observations.

Applications

  • Minimizing MSE is a key criterion in selecting estimators: see minimum mean-square error. Among unbiased estimators, minimizing the MSE is equivalent to minimizing the variance, and the estimator that does this is the minimum variance unbiased estimator. However, a biased estimator may have lower MSE; see estimator bias.
  • In statistical modelling the MSE can represent the difference between the actual observations and the observation values predicted by the model. In this context, it is used to determine the extent to which the model fits the data as well as whether removing some explanatory variables is possible without significantly harming the model's predictive ability.
  • In forecasting and prediction, the Brier score is a measure of forecast skill based on MSE.

Loss function

Squared error loss is one of the most widely used loss functions in statistics, though its widespread use stems more from mathematical convenience than considerations of actual loss in applications. Carl Friedrich Gauss, who introduced the use of mean squared error, was aware of its arbitrariness and was in agreement with objections to it on these grounds.[3] The mathematical benefits of mean squared error are particularly evident in its use for analyzing the performance of linear regression, as it allows one to partition the variation in a dataset into variation explained by the model and variation explained by randomness.

Criticism

The unquestioning use of mean squared error has been criticized by the decision theorist James Berger. Mean squared error is the negative of the expected value of one specific utility function, the quadratic utility function, which may not be the appropriate utility function to use under a given set of circumstances. There are, however, some scenarios where mean squared error can serve as a good approximation to a loss function occurring naturally in an application.[10]

Like variance, mean squared error has the disadvantage of heavily weighting outliers.[11] This is a result of the squaring of each term, which effectively weights large errors more heavily than small ones. This property, undesirable in many applications, has led researchers to use alternatives such as the mean absolute error, or those based on the median.
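A small illustration (Python with NumPy; the data are fabricated) of how a single outlier dominates the MSE while affecting the mean absolute error far less:

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_hat_good = np.array([1.1, 2.1, 2.9, 4.2, 4.8])
y_hat_outlier = np.array([1.1, 2.1, 2.9, 4.2, 15.0])  # one large miss

for name, pred in [("good", y_hat_good), ("outlier", y_hat_outlier)]:
    mse = np.mean((y - pred) ** 2)
    mae = np.mean(np.abs(y - pred))
    print(f"{name}: MSE={mse:.3f}, MAE={mae:.3f}")
# The single outlier inflates the MSE by a factor of roughly 900,
# but the MAE by only about 15.
```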

See also

  • Bias–variance tradeoff
  • Hodges' estimator
  • James–Stein estimator
  • Mean percentage error
  • Mean square quantization error
  • Mean square weighted deviation
  • Mean squared displacement
  • Mean squared prediction error
  • Minimum mean square error
  • Minimum mean squared error estimator
  • Overfitting
  • Peak signal-to-noise ratio
Notes

  1. ^ This can be proved by Jensen's inequality as follows. The fourth central moment is an upper bound for the square of variance, so the least value for their ratio is one; therefore, the least value for the excess kurtosis is −2, achieved, for instance, by a Bernoulli with p=1/2.

References

  1. ^ a b "Mean Squared Error (MSE)". www.probabilitycourse.com. Retrieved 2020-09-12.
  2. ^ Bickel, Peter J.; Doksum, Kjell A. (2015). Mathematical Statistics: Basic Ideas and Selected Topics. Vol. I (Second ed.). p. 20. If we use quadratic loss, our risk function is called the mean squared error (MSE) ...
  3. ^ a b Lehmann, E. L.; Casella, George (1998). Theory of Point Estimation (2nd ed.). New York: Springer. ISBN 978-0-387-98502-2. MR 1639875.
  4. ^ Gareth, James; Witten, Daniela; Hastie, Trevor; Tibshirani, Rob (2021). An Introduction to Statistical Learning: with Applications in R. Springer. ISBN 978-1071614174.
  5. ^ Wackerly, Dennis; Mendenhall, William; Scheaffer, Richard L. (2008). Mathematical Statistics with Applications (7 ed.). Belmont, CA, USA: Thomson Higher Education. ISBN 978-0-495-38508-0.
  6. ^ Dekking, Michel; et al. (2005). A Modern Introduction to Probability and Statistics: Understanding Why and How. London: Springer. ISBN 978-1-85233-896-1. OCLC 262680588.
  7. ^ Steel, R.G.D, and Torrie, J. H., Principles and Procedures of Statistics with Special Reference to the Biological Sciences., McGraw Hill, 1960, page 288.
  8. ^ Mood, A.; Graybill, F.; Boes, D. (1974). Introduction to the Theory of Statistics (3rd ed.). McGraw-Hill. p. 229.
  9. ^ DeGroot, Morris H. (1980). Probability and Statistics (2nd ed.). Addison-Wesley.
  10. ^ Berger, James O. (1985). "2.4.2 Certain Standard Loss Functions". Statistical Decision Theory and Bayesian Analysis (2nd ed.). New York: Springer-Verlag. p. 60. ISBN 978-0-387-96098-2. MR 0804611.
  11. ^ Bermejo, Sergio; Cabestany, Joan (2001). "Oriented principal component analysis for large margin classifiers". Neural Networks. 14 (10): 1447–1461. doi:10.1016/S0893-6080(01)00106-X. PMID 11771723.
