
One-way analysis of variance

In statistics, one-way analysis of variance (or one-way ANOVA) is a technique to compare whether two or more samples' means are significantly different (using the F distribution). This analysis of variance technique requires a numeric response variable "Y" and a single explanatory variable "X", hence "one-way".[1]

The ANOVA tests the null hypothesis, which states that samples in all groups are drawn from populations with the same mean values. To do this, two estimates are made of the population variance. These estimates rely on various assumptions (see below). The ANOVA produces an F-statistic, the ratio of the variance calculated among the means to the variance within the samples. If the group means are drawn from populations with the same mean values, the variance between the group means should be lower than the variance of the samples, following the central limit theorem. A higher ratio therefore implies that the samples were drawn from populations with different mean values.[1]

Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test (Gosset, 1908). When there are only two means to compare, the t-test and the F-test are equivalent; the relation between ANOVA and t is given by F = t². An extension of one-way ANOVA is two-way analysis of variance, which examines the influence of two different categorical independent variables on one dependent variable.
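In the two-group case this equivalence is easy to verify numerically. The data below are arbitrary illustrative values, and SciPy's default pooled-variance t-test is assumed:

```python
# Check F = t^2 for two groups: a pooled-variance t-test vs. one-way ANOVA.
from scipy import stats

g1 = [6, 8, 4, 5, 3, 4]
g2 = [8, 12, 9, 11, 6, 8]

t, _ = stats.ttest_ind(g1, g2)   # pooled-variance two-sample t-test (scipy default)
F, _ = stats.f_oneway(g1, g2)    # one-way ANOVA on the same two groups

print(abs(t**2 - F) < 1e-9)      # True: the F statistic equals t squared
```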

Assumptions

The results of a one-way ANOVA can be considered reliable as long as the following assumptions are met:

  • Response variable residuals are normally distributed (or approximately normally distributed).
  • Variances of populations are equal.
  • Responses for a given group are independent and identically distributed normal random variables (not a simple random sample (SRS)).

If data are ordinal, a non-parametric alternative to this test should be used, such as Kruskal–Wallis one-way analysis of variance. If the variances are not known to be equal, a generalization of the 2-sample Welch's t-test can be used.[2]
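As a sketch, the Kruskal–Wallis alternative is available in SciPy; the three groups below are hypothetical scores used only for illustration:

```python
# Nonparametric alternative for ordinal data: the Kruskal-Wallis rank test
# replaces the F-test in the same one-grouping-factor layout.
from scipy import stats

g1 = [6, 8, 4, 5, 3, 4]
g2 = [8, 12, 9, 11, 6, 8]
g3 = [13, 9, 11, 8, 7, 12]

H, p = stats.kruskal(g1, g2, g3)
print(p < 0.05)   # True: the rank test also flags a group difference here
```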

Departures from population normality

ANOVA is a relatively robust procedure with respect to violations of the normality assumption.[3]

The one-way ANOVA can be generalized to the factorial and multivariate layouts, as well as to the analysis of covariance.

It is often stated in popular literature that none of these F-tests are robust when there are severe violations of the assumption that each population follows the normal distribution, particularly for small alpha levels and unbalanced layouts.[4] Furthermore, it is also claimed that if the underlying assumption of homoscedasticity is violated, the Type I error properties degenerate much more severely.[5]

However, this is a misconception, based on work done in the 1950s and earlier. The first comprehensive investigation of the issue by Monte Carlo simulation was Donaldson (1966).[6] He showed that under the usual departures (positive skew, unequal variances) "the F-test is conservative", and so it is less likely than it should be to find that a variable is significant. However, as either the sample size or the number of cells increases, "the power curves seem to converge to that based on the normal distribution". Tiku (1971) found that "the non-normal theory power of F is found to differ from the normal theory power by a correction term which decreases sharply with increasing sample size."[7] The problem of non-normality, especially in large samples, is far less serious than popular articles would suggest.

The current view is that "Monte-Carlo studies were used extensively with normal distribution-based tests to determine how sensitive they are to violations of the assumption of normal distribution of the analyzed variables in the population. The general conclusion from these studies is that the consequences of such violations are less severe than previously thought. Although these conclusions should not entirely discourage anyone from being concerned about the normality assumption, they have increased the overall popularity of the distribution-dependent statistical tests in all areas of research."[8]

For nonparametric alternatives in the factorial layout, see Sawilowsky.[9] For more discussion see ANOVA on ranks.

The case of fixed effects, fully randomized experiment, unbalanced data

The model

The normal linear model describes treatment groups with probability distributions which are identically bell-shaped (normal) curves with different means. Thus fitting the models requires only the means of each treatment group and a variance calculation (an average variance within the treatment groups is used). Calculations of the means and the variance are performed as part of the hypothesis test.

The commonly used normal linear models for a completely randomized experiment are:[10]

  y_ij = μ_j + ε_ij    (the means model)

or

  y_ij = μ + τ_j + ε_ij    (the effects model)

where

  i = 1, …, I is an index over experimental units
  j = 1, …, J is an index over treatment groups
  I_j is the number of experimental units in the jth treatment group
  I = Σ_j I_j is the total number of experimental units
  y_ij are observations
  μ_j is the mean of the observations for the jth treatment group
  μ is the grand mean of the observations
  τ_j is the jth treatment effect, a deviation from the grand mean, with Σ_j τ_j = 0
  μ_j = μ + τ_j
  ε ~ N(0, σ²); the ε_ij are normally distributed zero-mean random errors.

The index i over the experimental units can be interpreted several ways. In some experiments, the same experimental unit is subject to a range of treatments; i may point to a particular unit. In others, each treatment group has a distinct set of experimental units; i may simply be an index into the jth list.
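The effects model above can be sketched as a small simulation; μ, τ_j, σ and the group sizes below are all assumed values chosen for illustration:

```python
# Simulate an unbalanced one-way layout from the effects model
#   y_ij = mu + tau_j + eps_ij,  eps_ij ~ N(0, sigma^2)
# mu, tau, sigma and the group sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 8.0, 2.0
tau = np.array([-3.0, 1.0, 2.0])   # treatment effects, constrained to sum to zero
sizes = [4, 6, 5]                  # unequal I_j, i.e. an unbalanced design

groups = [mu + t + sigma * rng.standard_normal(n) for t, n in zip(tau, sizes)]
print([len(g) for g in groups])    # [4, 6, 5]
```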

The data and statistical summaries of the data

One form of organizing experimental observations y_ij is with groups in columns:

ANOVA data organization, unbalanced, single factor

Lists of group observations (group j has I_j observations):

        group 1   group 2   group 3   …   group j
  1     y_11      y_12      y_13      …   y_1j
  2     y_21      y_22      y_23      …   y_2j
  3     y_31      y_32      y_33      …   y_3j
  ⋮     ⋮         ⋮         ⋮             ⋮
  i     y_i1      y_i2      y_i3      …   y_ij

Group summary statistics (per group j)    Grand summary statistics
  # Observed   I_j                        # Observed   I = Σ_j I_j
  Sum          Σ_i y_ij                   Sum          Σ_j Σ_i y_ij
  Sum Sq       Σ_i y_ij²                  Sum Sq       Σ_j Σ_i y_ij²
  Mean         m_j                        Mean         m
  Variance     s_j²                       Variance     s²

Comparing model to summaries: μ = m and μ_j = m_j. The grand mean and grand variance are computed from the grand sums, not from group means and variances.

The hypothesis test

Given the summary statistics, the calculations of the hypothesis test are shown in tabular form. While two columns of SS are shown for their explanatory value, only one column is required to display results.

ANOVA table for fixed model, single factor, fully randomized experiment

Source of variation   Explanatory SS[11]             Computational SS[12]                           DF      Mean square (MS)
Treatments            Σ_Treatments I_j (m_j − m)²    Σ_j (Σ_i y_ij)² / I_j − (Σ_j Σ_i y_ij)² / I    J − 1   SS_Treatment / DF_Treatment
Error                 Σ_Treatments (I_j − 1) s_j²    Σ_j Σ_i y_ij² − Σ_j (Σ_i y_ij)² / I_j          I − J   SS_Error / DF_Error
Total                 Σ_Observations (y_ij − m)²     Σ_j Σ_i y_ij² − (Σ_j Σ_i y_ij)² / I            I − 1

F = MS_Treatment / MS_Error

MS_Error is the estimate of variance corresponding to σ² of the model.
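As a sketch, the explanatory-SS column can be computed directly from the group summaries; the data reuse the three example groups given later in this article:

```python
# Build the one-way ANOVA table quantities from group summary statistics.
import numpy as np

groups = [np.array([6, 8, 4, 5, 3, 4]),
          np.array([8, 12, 9, 11, 6, 8]),
          np.array([13, 9, 11, 8, 7, 12])]

I_j = np.array([len(g) for g in groups])        # units per group
m_j = np.array([g.mean() for g in groups])      # group means
s2_j = np.array([g.var(ddof=1) for g in groups])  # group variances
I, J = I_j.sum(), len(groups)
m = np.concatenate(groups).mean()               # grand mean from the grand sum

ss_treat = np.sum(I_j * (m_j - m) ** 2)         # between-group SS
ss_error = np.sum((I_j - 1) * s2_j)             # within-group SS
ms_treat = ss_treat / (J - 1)
ms_error = ss_error / (I - J)
F = ms_treat / ms_error
print(ss_treat, ss_error, round(float(F), 1))   # 84.0 68.0 9.3
```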

Analysis summary

The core ANOVA analysis consists of a series of calculations. The data is collected in tabular form. Then

  • Each treatment group is summarized by the number of experimental units, two sums, a mean and a variance. The treatment group summaries are combined to provide totals for the number of units and the sums. The grand mean and grand variance are computed from the grand sums. The treatment and grand means are used in the model.
  • The three DFs and SSs are calculated from the summaries. Then the MSs are calculated and a ratio determines F.
  • A computer typically determines a p-value from F which determines whether treatments produce significantly different results. If the result is significant, then the model provisionally has validity.

If the experiment is balanced, all of the I_j terms are equal so the SS equations simplify.
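For instance, in the balanced case the treatment sum of squares reduces to n · Σ_j (m_j − m)², and the grand mean is just the mean of the group means. A quick check using the group means from the worked example (5, 9, 10 with n = 6):

```python
# Balanced-design simplification of the treatment SS: all I_j equal a common n.
import numpy as np

n = 6
m_j = np.array([5.0, 9.0, 10.0])       # group means from the worked example
m = m_j.mean()                         # grand mean; valid only when balanced
ss_treat = n * np.sum((m_j - m) ** 2)
print(ss_treat)                        # 84.0
```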

In a more complex experiment, where the experimental units (or environmental effects) are not homogeneous, row statistics are also used in the analysis. The model includes terms dependent on i. Determining the extra terms reduces the number of degrees of freedom available.

Example

Consider an experiment to study the effect of three different levels of a factor on a response (e.g. three levels of a fertilizer on plant growth). If we had 6 observations for each level, we could write the outcome of the experiment in a table like this, where a1, a2, and a3 are the three levels of the factor being studied.

a1 a2 a3
6 8 13
8 12 9
4 9 11
5 11 8
3 6 7
4 8 12

The null hypothesis, denoted H0, for the overall F-test for this experiment would be that all three levels of the factor produce the same response, on average. To calculate the F-ratio:

Step 1: Calculate the mean within each group:

  Ȳ_1 = (6 + 8 + 4 + 5 + 3 + 4) / 6 = 5
  Ȳ_2 = (8 + 12 + 9 + 11 + 6 + 8) / 6 = 9
  Ȳ_3 = (13 + 9 + 11 + 8 + 7 + 12) / 6 = 10

Step 2: Calculate the overall mean:

  Ȳ = (Ȳ_1 + Ȳ_2 + Ȳ_3) / a = (5 + 9 + 10) / 3 = 8
where a is the number of groups.

Step 3: Calculate the "between-group" sum of squared differences:

  S_B = n(Ȳ_1 − Ȳ)² + n(Ȳ_2 − Ȳ)² + n(Ȳ_3 − Ȳ)²
      = 6(5 − 8)² + 6(9 − 8)² + 6(10 − 8)² = 84

where n is the number of data values per group.

The between-group degrees of freedom is one less than the number of groups

  f_b = 3 − 1 = 2

so the between-group mean square value is

  MS_B = 84 / 2 = 42

Step 4: Calculate the "within-group" sum of squares. Begin by centering the data in each group

a1 a2 a3
6−5=1 8−9=−1 13−10=3
8−5=3 12−9=3 9−10=−1
4−5=−1 9−9=0 11−10=1
5−5=0 11−9=2 8−10=−2
3−5=−2 6−9=−3 7−10=−3
4−5=−1 8−9=−1 12−10=2

The within-group sum of squares is the sum of squares of all 18 values in this table

  S_W = 1² + 3² + (−1)² + 0² + (−2)² + (−1)²
      + (−1)² + 3² + 0² + 2² + (−3)² + (−1)²
      + 3² + (−1)² + 1² + (−2)² + (−3)² + 2²
      = 68

The within-group degrees of freedom is

  f_W = a(n − 1) = 3(6 − 1) = 15

Thus the within-group mean square value is

  MS_W = S_W / f_W = 68 / 15 ≈ 4.5

Step 5: The F-ratio is

  F = MS_B / MS_W = 42 / 4.5 ≈ 9.3

The critical value is the number that the test statistic must exceed to reject the null hypothesis. In this case, Fcrit(2,15) = 3.68 at α = 0.05. Since F = 9.3 > 3.68, the results are significant at the 5% significance level. One would therefore reject the null hypothesis, concluding that there is strong evidence that the expected values in the three groups differ. The p-value for this test is 0.002.
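The whole worked example can be cross-checked with SciPy's built-in one-way ANOVA:

```python
# Verify the hand computation: F ~ 9.3 with p ~ 0.002 on F(2, 15).
from scipy import stats

a1 = [6, 8, 4, 5, 3, 4]
a2 = [8, 12, 9, 11, 6, 8]
a3 = [13, 9, 11, 8, 7, 12]

F, p = stats.f_oneway(a1, a2, a3)
print(round(F, 1), round(p, 3))   # 9.3 0.002
```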

After performing the F-test, it is common to carry out some "post-hoc" analysis of the group means. In this case, the first two group means differ by 4 units, the first and third group means differ by 5 units, and the second and third group means differ by only 1 unit. The standard error of each of these differences is √(4.5/6 + 4.5/6) ≈ 1.2. Thus the first group is strongly different from the other groups, as the mean difference is more than 3 times the standard error, so we can be highly confident that the population mean of the first group differs from the population means of the other groups. However, there is no evidence that the second and third groups have different population means from each other, as their mean difference of one unit is comparable to the standard error.
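The quoted standard error can be reproduced from the within-group mean square MS_W = 68/15 and the common group size n = 6:

```python
# Standard error of a difference of two group means:
#   Var(m_j - m_k) = MS_W/n + MS_W/n  for a balanced design.
import math

ms_w, n = 68 / 15, 6
se = math.sqrt(ms_w / n + ms_w / n)
print(round(se, 2))   # 1.23, i.e. roughly 1.2 as stated in the text
```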

Note: F(x, y) denotes an F-distribution cumulative distribution function with x degrees of freedom in the numerator and y degrees of freedom in the denominator.

See also

  • Analysis of variance
  • F-test (includes a one-way ANOVA example)
  • Mixed model
  • Multivariate analysis of variance (MANOVA)
  • Repeated measures ANOVA
  • Two-way ANOVA
  • Welch's t-test

Notes

  1. ^ a b Howell, David (2002). Statistical Methods for Psychology. Duxbury. pp. 324–325. ISBN 0-534-37770-X.
  2. ^ Welch, B. L. (1951). "On the Comparison of Several Mean Values: An Alternative Approach". Biometrika. 38 (3/4): 330–336. doi:10.2307/2332579. JSTOR 2332579.
  3. ^ Kirk, RE (1995). Experimental Design: Procedures For The Behavioral Sciences (3 ed.). Pacific Grove, CA, USA: Brooks/Cole.
  4. ^ Blair, R. C. (1981). "A reaction to 'Consequences of failure to meet assumptions underlying the fixed effects analysis of variance and covariance.'". Review of Educational Research. 51 (4): 499–507. doi:10.3102/00346543051004499.
  5. ^ Randolf, E. A.; Barcikowski, R. S. (1989). "Type I error rate when real study values are used as population parameters in a Monte Carlo study". Paper Presented at the 11th Annual Meeting of the Mid-Western Educational Research Association, Chicago.
  6. ^ Donaldson, Theodore S. (1966). "Power of the F-Test for Nonnormal Distributions and Unequal Error Variances". Paper Prepared for United States Air Force Project RAND.
  7. ^ Tiku, M. L. (1971). "Power Function of the F-Test Under Non-Normal Situations". Journal of the American Statistical Association. 66 (336): 913–916. doi:10.1080/01621459.1971.10482371.
  8. ^ "Getting Started with Statistics Concepts". Archived from the original on 2018-12-04. Retrieved 2016-09-22.
  9. ^ Sawilowsky, S. (1990). "Nonparametric tests of interaction in experimental design". Review of Educational Research. 60 (1): 91–126. doi:10.3102/00346543060001091.
  10. ^ Montgomery, Douglas C. (2001). Design and Analysis of Experiments (5th ed.). New York: Wiley. p. Section 3–2. ISBN 9780471316497.
  11. ^ Moore, David S.; McCabe, George P. (2003). Introduction to the Practice of Statistics (4th ed.). W H Freeman & Co. p. 764. ISBN 0716796570.
  12. ^ Winkler, Robert L.; Hays, William L. (1975). Statistics: Probability, Inference, and Decision (2nd ed.). New York: Holt, Rinehart and Winston. p. 761.

Further reading

  • Casella, George (18 April 2008). Statistical Design. Springer. ISBN 978-0-387-75965-4.
