
Mean absolute scaled error

In statistics, the mean absolute scaled error (MASE) is a measure of the accuracy of forecasts. It is the mean absolute error of the forecast values, divided by the mean absolute error of the in-sample one-step naive forecast. It was proposed in 2005 by statistician Rob J. Hyndman and Professor of Decision Sciences Anne B. Koehler, who described it as a "generally applicable measurement of forecast accuracy without the problems seen in the other measurements."[1] The mean absolute scaled error has favorable properties when compared to other methods for calculating forecast errors, such as the root-mean-square deviation, and is therefore recommended for determining comparative accuracy of forecasts.[2]

Rationale

The mean absolute scaled error has the following desirable properties:[3]

  1. Scale invariance: The mean absolute scaled error is independent of the scale of the data, so can be used to compare forecasts across data sets with different scales.
  2. Predictable behavior as y_t → 0: Percentage forecast accuracy measures such as the mean absolute percentage error (MAPE) rely on division by y_t, skewing the distribution of the MAPE for values of y_t near or equal to 0. This is especially problematic for data sets whose scales do not have a meaningful 0, such as temperature in Celsius or Fahrenheit, and for intermittent demand data sets, where y_t = 0 occurs frequently.
  3. Symmetry: The mean absolute scaled error penalizes positive and negative forecast errors equally, and penalizes errors in large forecasts and small forecasts equally. In contrast, the MAPE and median absolute percentage error (MdAPE) fail both of these criteria, while the "symmetric" sMAPE and sMdAPE[4] fail the second criterion.
  4. Interpretability: The mean absolute scaled error can be easily interpreted, as values greater than one indicate that in-sample one-step forecasts from the naïve method perform better than the forecast values under consideration.
  5. Asymptotic normality of the MASE: The Diebold–Mariano test for one-step forecasts is used to test the statistical significance of the difference between two sets of forecasts.[5][6][7] To perform hypothesis testing with the Diebold–Mariano test statistic, it is desirable for DM ~ N(0, 1), where DM is the value of the test statistic. The DM statistic for the MASE has been empirically shown to approximate this distribution, while the mean relative absolute error (MRAE), MAPE and sMAPE do not.[2]

Non-seasonal time series

For a non-seasonal time series,[8] the mean absolute scaled error is estimated by

  \mathrm{MASE} = \mathrm{mean}\left( \frac{\left| e_j \right|}{\frac{1}{T-1} \sum_{t=2}^{T} \left| Y_t - Y_{t-1} \right|} \right) = \frac{\frac{1}{J} \sum_{j} \left| e_j \right|}{\frac{1}{T-1} \sum_{t=2}^{T} \left| Y_t - Y_{t-1} \right|} [3]

where the numerator ej is the forecast error for a given period (with J, the number of forecasts), defined as the actual value (Yj) minus the forecast value (Fj) for that period: ej = Yj − Fj, and the denominator is the mean absolute error of the one-step "naive forecast method" on the training set (here defined as t = 1..T),[8] which uses the actual value from the prior period as the forecast: Ft = Yt−1.[9]
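The estimator above can be sketched in NumPy as follows (the function name and example values are illustrative, not from the source):

```python
import numpy as np

def mase(y_train, y_true, y_pred):
    """Mean absolute scaled error for a non-seasonal series.

    y_train: in-sample actuals Y_1..Y_T used for scaling
    y_true, y_pred: out-of-sample actuals and forecasts (length J)
    """
    y_train = np.asarray(y_train, dtype=float)
    # Numerator: mean absolute forecast error over the J forecasts
    mae_forecast = np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))
    # Denominator: in-sample MAE of the one-step naive forecast F_t = Y_{t-1},
    # i.e. the mean absolute first difference of the training series
    mae_naive = np.mean(np.abs(np.diff(y_train)))
    return mae_forecast / mae_naive

# Illustrative values: naive in-sample MAE is 1, forecast MAE is 0.5
print(mase([1, 2, 3, 4, 5], y_true=[6, 7], y_pred=[6, 8]))  # 0.5
```

A value below 1, as here, means the forecasts outperform the in-sample naive method on average.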

Seasonal time series

For a seasonal time series, the mean absolute scaled error is estimated in a manner similar to the method for non-seasonal time series:

  \mathrm{MASE} = \mathrm{mean}\left( \frac{\left| e_j \right|}{\frac{1}{T-m} \sum_{t=m+1}^{T} \left| Y_t - Y_{t-m} \right|} \right) = \frac{\frac{1}{J} \sum_{j} \left| e_j \right|}{\frac{1}{T-m} \sum_{t=m+1}^{T} \left| Y_t - Y_{t-m} \right|} [8]

The main difference from the method for non-seasonal time series is that the denominator is the mean absolute error of the one-step "seasonal naive forecast method" on the training set,[8] which uses the actual value from the prior season as the forecast: Ft = Yt−m,[9] where m is the seasonal period.
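The seasonal variant only changes the scaling term, which can be sketched as follows (function name, seasonal period, and example values are illustrative):

```python
import numpy as np

def mase_seasonal(y_train, y_true, y_pred, m):
    """Seasonal MASE: scale by the in-sample seasonal naive forecast F_t = Y_{t-m}.

    m is the seasonal period (e.g. 12 for monthly data with yearly seasonality).
    """
    y_train = np.asarray(y_train, dtype=float)
    # Numerator: mean absolute forecast error over the J forecasts
    mae_forecast = np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))
    # Denominator: in-sample MAE of the seasonal naive method,
    # averaged over the T - m available comparisons
    mae_snaive = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return mae_forecast / mae_snaive

# Illustrative values with period m = 2: seasonal naive MAE is 2, forecast MAE is 1
print(mase_seasonal([10, 20, 12, 22, 14, 24], [16, 26], [15, 25], m=2))  # 0.5
```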

This scale-free error metric "can be used to compare forecast methods on a single series and also to compare forecast accuracy between series."[3] This metric is well suited to intermittent-demand series (a data set containing a large number of zeros) because it never gives infinite or undefined values,[1] except in the irrelevant case where all historical data are equal.[3]

When comparing forecasting methods, the method with the lowest MASE is the preferred method.

Non-time series data

For non-time series data, the mean of the data (Ȳ) can be used as the "base" forecast.[10]

  \mathrm{MASE} = \mathrm{mean}\left( \frac{\left| e_j \right|}{\frac{1}{J} \sum_{j=1}^{J} \left| Y_j - \bar{Y} \right|} \right) = \frac{\frac{1}{J} \sum_{j} \left| e_j \right|}{\frac{1}{J} \sum_{j} \left| Y_j - \bar{Y} \right|}

In this case the MASE is the mean absolute error divided by the mean absolute deviation.
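This non-time-series variant reduces to a ratio of two simple averages, as the following sketch shows (the function name and example values are illustrative):

```python
import numpy as np

def mase_cross_sectional(y_true, y_pred):
    """MASE for non-time-series data: the MAE of the predictions divided by
    the mean absolute deviation of the actuals about their own mean."""
    y_true = np.asarray(y_true, dtype=float)
    # Mean absolute error of the predictions
    mae = np.mean(np.abs(y_true - np.asarray(y_pred)))
    # Mean absolute deviation about the mean, i.e. the error of the "base"
    # forecast that predicts the data mean for every observation
    mad = np.mean(np.abs(y_true - y_true.mean()))
    return mae / mad

# Illustrative values: MAD of [1, 2, 3, 4] is 1, prediction MAE is 0.5
print(mase_cross_sectional([1, 2, 3, 4], [1.5, 2.5, 3.5, 4.5]))  # 0.5
```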

See also

  Mean squared error
  Mean absolute error
  Mean absolute percentage error
  Root-mean-square deviation
  Test set
  Fraction of variance unexplained

References

  1. ^ a b Hyndman, R. J. (2006). "Another look at measures of forecast accuracy". Foresight, Issue 4, June 2006, p. 46.
  2. ^ a b Franses, Philip Hans (2016-01-01). "A note on the Mean Absolute Scaled Error". International Journal of Forecasting. 32 (1): 20–22. doi:10.1016/j.ijforecast.2015.03.008. hdl:1765/78815.
  3. ^ a b c Hyndman, R. J. and Koehler A. B. (2006). "Another look at measures of forecast accuracy." International Journal of Forecasting volume 22 issue 4, pages 679-688. doi:10.1016/j.ijforecast.2006.03.001
  4. ^ Makridakis, Spyros (1993-12-01). "Accuracy measures: theoretical and practical concerns". International Journal of Forecasting. 9 (4): 527–529. doi:10.1016/0169-2070(93)90079-3.
  5. ^ Diebold, Francis X.; Mariano, Roberto S. (1995). "Comparing predictive accuracy". Journal of Business and Economic Statistics. 13 (3): 253–263. doi:10.1080/07350015.1995.10524599.
  6. ^ Diebold, Francis X.; Mariano, Roberto S. (2002). "Comparing predictive accuracy". Journal of Business and Economic Statistics. 20 (1): 134–144. doi:10.1198/073500102753410444.
  7. ^ Diebold, Francis X. (2015). "Comparing predictive accuracy, twenty years later: A personal perspective on the use and abuse of Diebold–Mariano tests" (PDF). Journal of Business and Economic Statistics. 33 (1): 1. doi:10.1080/07350015.2014.983236.
  8. ^ a b c d "2.5 Evaluating forecast accuracy | OTexts". www.otexts.org. Retrieved 2016-05-15.
  9. ^ a b Hyndman, Rob et al, Forecasting with Exponential Smoothing: The State Space Approach, Berlin: Springer-Verlag, 2008. ISBN 978-3-540-71916-8.
  10. ^ Hyndman, Rob. "Alternative to MAPE when the data is not a time series". Cross Validated. Retrieved 2022-10-11.
