Developed by Ronald Fisher in 1921, ANOVA (analysis of variance) is a widely used statistical method for testing differences between two or more means by analyzing variance [1]. It is a generalization of the t-test, which compares only two means at a time. The non-specific null hypothesis claims that all population means are equal. When the null hypothesis is rejected, one concludes that at least one population mean differs from at least one other mean. ANOVA tests general rather than specific differences, since it does not reveal which means differ from which. The Tukey HSD test offers more specific comparisons, but it is not as commonly used as ANOVA.

Assumptions made by ANOVA:

  • Homogeneity of variance: the populations have the same variance.
  • Normality: the populations are normally distributed.
  • Independence: Each value is sampled independently from each other value. (If a subject provides two scores, then the values are not independent.)
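
As a rough sketch of how one might check the first two assumptions in practice (the group scores below are made up for illustration), Levene's test probes homogeneity of variance and the Shapiro-Wilk test probes normality, both available in scipy.stats:

```python
# Hypothetical example: checking ANOVA assumptions with scipy.stats
from scipy import stats

# Made-up scores for three independent groups
group_a = [4.1, 5.0, 4.8, 5.3, 4.6]
group_b = [5.9, 6.2, 5.5, 6.0, 6.4]
group_c = [4.9, 5.1, 5.4, 4.7, 5.2]

# Homogeneity of variance: Levene's test (H0: equal variances)
lev_stat, lev_p = stats.levene(group_a, group_b, group_c)
print(f"Levene: W = {lev_stat:.3f}, p = {lev_p:.3f}")

# Normality: Shapiro-Wilk test on each group (H0: normally distributed)
for name, g in [("A", group_a), ("B", group_b), ("C", group_c)]:
    w, p = stats.shapiro(g)
    print(f"Shapiro-Wilk group {name}: W = {w:.3f}, p = {p:.3f}")
```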

Key concepts (jargon)

Factor and level

A factor is an independent variable that the experiment investigates. Levels are the different settings within a factor. For example, a study can have three levels of age group within the age factor, five levels of dosage when measuring the effect of a drug, or two levels within the gender factor (or three to include non-binary as well). You get the idea.

An ANOVA conducted on a design in which there is only one factor is called a one-way ANOVA. If an experiment has two (or more) factors, then the ANOVA is called a two-way (or factorial) ANOVA. Note that both are still univariate ANOVAs, since there is only one dependent variable; a multivariate ANOVA (MANOVA, discussed at the end) involves two or more dependent variables.

When all combinations of the levels are included, the design is called a factorial design. For example, a concise way of describing a two-factor design is as a Gender (2) x Age (3) factorial design, with the number of levels in parentheses.
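
As a tiny illustration (the factor names and levels are hypothetical), the cells of a Gender (2) x Age (3) factorial design are simply all combinations of the levels:

```python
# Hypothetical Gender (2) x Age (3) factorial design: enumerate all 2 x 3 = 6 cells
from itertools import product

gender_levels = ["female", "male"]
age_levels = ["young", "middle", "old"]

for cell in product(gender_levels, age_levels):
    print(cell)
```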

MSE and MSB

One estimate is called the mean square error (MSE) and is based on differences among scores within the groups. MSE estimates \(\sigma^2\) regardless of whether the null hypothesis is true (the population means are equal). The second estimate is called the mean square between (MSB) and is based on differences among the sample means. MSB only estimates \(\sigma^2\) if the population means are equal.

A standard method of determining whether the means differ significantly is to compute the F ratio of MSB to MSE. By comparing the obtained F ratio against the F distribution, the probability of obtaining a value at least that large gives the p value, which determines whether the null hypothesis can be rejected (e.g. \(p < \alpha = 0.05\)).

F distribution

The shape of the F distribution depends on two degrees of freedom that relate to the number of groups / conditions, \(k\), and the sample size, \(N\) ( \(N = nk\) where \(n\) = number of observations per group, assuming equal group sizes). The df of the numerator (MSB) is \(k-1\) and the df of the denominator (MSE) is \(N-k = k(n-1)\). We often represent the distribution as a function, \(F(dfn,dfd)\). The numerator is based on differences between groups, i.e. the variance of the group means; the denominator is based on differences within groups. The F distribution has a long tail to the right, which means it has a positive skew. E.g. 7 groups with 15 participants in each group: \(dfn\) = 6, \(dfd\) = 7(15-1) = 98.
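
A minimal sketch of this bookkeeping and of reading off the tail probability, using the example above and scipy.stats.f (the F value of 2.5 is made up for illustration):

```python
# Degrees of freedom for 7 groups with 15 participants each,
# and the right-tail probability of a hypothetical F value.
from scipy import stats

k, n = 7, 15          # number of groups, observations per group
N = n * k             # total sample size
dfn = k - 1           # numerator df (MSB)
dfd = N - k           # denominator df (MSE) = k * (n - 1)
print(dfn, dfd)       # 6 98

f_obtained = 2.5      # made-up F ratio for illustration
p = stats.f.sf(f_obtained, dfn, dfd)   # P(F >= f_obtained)
print(f"p = {p:.4f}")
```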

Calculations for one-way ANOVA

Sum of squares

The between group sum of squares is the sum of squared differences between each group mean and the overall mean (Grand Mean, GM), each multiplied by \(n_i\), the number of observations in the respective group.

\[SSQ_{condition} = n_1(Mean_1 - GM)^2 + n_2(Mean_2 -GM)^2 + ... + n_k(Mean_k - GM)^2\]

The within group sum of squares is the sum of squared deviations of each observation from its group mean. Observation \(i\) in group \(j\) is indexed as \(X_{ij}\), where \(i = 1,...,n\) and \(j = 1,...,k\).

\[SSQ_{error} =\sum^{n}_{i=1}(X_{i1} - M_1)^2 + \sum^{n}_{i=1}(X_{i2} -M_2)^2 + ... + \sum^{n}_{i=1}(X_{ik} - M_k)^2\]

Note that we can also obtain the sum of squares error from the total sum of squares, since the total decomposes into the between-group and within-group parts.

\[SSQ_{total} = \sum^{k}_{j=1}\sum^{n}_{i=1}(X_{ij}-GM)^2 = SSQ_{condition} + SSQ_{error}\]
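
As a sketch with NumPy (the three groups are made-up data), the sums of squares can be computed directly from these definitions and checked against the decomposition:

```python
# Computing SSQ_condition, SSQ_error, and SSQ_total for made-up groups
import numpy as np

groups = [np.array([4.1, 5.0, 4.8, 5.3, 4.6]),
          np.array([5.9, 6.2, 5.5, 6.0, 6.4]),
          np.array([4.9, 5.1, 5.4, 4.7, 5.2])]

all_data = np.concatenate(groups)
grand_mean = all_data.mean()

# Between-group sum of squares: n_i * (group mean - grand mean)^2, summed over groups
ssq_condition = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# Within-group sum of squares: squared deviations from each group's own mean
ssq_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

# Total sum of squares: squared deviations from the grand mean
ssq_total = ((all_data - grand_mean) ** 2).sum()

print(ssq_condition + ssq_error, ssq_total)  # the two should match (up to rounding)
```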

MSE and MSB

Now that we have calculated the sums of squares and degrees of freedom, the remaining mean squares are very easy to compute.

\[MSB = SSQ_{condition}/dfn\] \[MSE = SSQ_{error}/dfd\]

Now you can go back and compute the F ratio, \(F = MSB/MSE\), and compare the value against the F distribution \(F(dfn, dfd)\). If the resulting probability is lower than \(\alpha\) (usually 0.05), the null hypothesis can be rejected (the mean difference is significant).
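
Putting the pieces together, here is a sketch of the full calculation on the same made-up data as above (repeated here so the snippet runs on its own), cross-checked against scipy.stats.f_oneway:

```python
# One-way ANOVA from scratch on made-up data, cross-checked with scipy.stats.f_oneway
import numpy as np
from scipy import stats

groups = [np.array([4.1, 5.0, 4.8, 5.3, 4.6]),
          np.array([5.9, 6.2, 5.5, 6.0, 6.4]),
          np.array([4.9, 5.1, 5.4, 4.7, 5.2])]

all_data = np.concatenate(groups)
grand_mean = all_data.mean()

ssq_condition = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ssq_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

k = len(groups)                 # number of groups
N = len(all_data)               # total sample size
dfn, dfd = k - 1, N - k

msb = ssq_condition / dfn       # mean square between
mse = ssq_error / dfd           # mean square error
f_ratio = msb / mse
p = stats.f.sf(f_ratio, dfn, dfd)   # right-tail probability
print(f"F({dfn}, {dfd}) = {f_ratio:.3f}, p = {p:.4f}")

# Cross-check with scipy's built-in one-way ANOVA
f_check, p_check = stats.f_oneway(*groups)
print(f"f_oneway: F = {f_check:.3f}, p = {p_check:.4f}")
```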

Relationship to the t-test?

When we test the significance of the difference between only two means, the degrees of freedom in the numerator is 2 - 1 = 1. The relationship to the t-test is as follows:

\[F(1, dfd) = t^2(df)\]

where \(dfd\) is the degrees of freedom for the denominator [2] of the F test, and \(df\) is the degrees of freedom for the t test. \(dfd\) will always equal \(df\).
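
A quick numerical check of this identity on two made-up groups, using scipy's independent-samples t test and one-way ANOVA:

```python
# Numerical check that F(1, dfd) equals t^2 for two groups (made-up data)
from scipy import stats

group_1 = [4.1, 5.0, 4.8, 5.3, 4.6]
group_2 = [5.9, 6.2, 5.5, 6.0, 6.4]

t_stat, t_p = stats.ttest_ind(group_1, group_2)   # independent-samples t test
f_stat, f_p = stats.f_oneway(group_1, group_2)    # one-way ANOVA on the same data

print(f"t^2 = {t_stat**2:.4f}, F = {f_stat:.4f}")         # the two should match
print(f"p (t test) = {t_p:.4f}, p (ANOVA) = {f_p:.4f}")   # the p values also match
```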

More: Multivariate ANOVA

We have just covered the basic idea of ANOVA and demonstrated how to compute a univariate ANOVA. One can extend a univariate ANOVA to a multivariate ANOVA (MANOVA). In MANOVA, there are two or more dependent variables (the variables that respond to changes in the independent variables), unlike the single dependent variable in a univariate ANOVA. Of course, one could run a separate univariate ANOVA for each dependent variable, but MANOVA has more statistical power. However, multiple univariate ANOVAs are sometimes preferred if the dependent variables are uncorrelated or highly correlated. I'll reserve this topic for a future post. Technologynetworks provides a nice comparison between the two.


  1. http://onlinestatbook.com/2/analysis_of_variance/anova.pdf 

  2. Note that \(dfd\) is sometimes referred to as \(dfe\), for degrees of freedom error.