
Different ANOVAs: Maths and Stats

Overview of Different ANOVAs

In statistical analysis, the conditions and characteristics of your study always determine which statistical test is the most appropriate. For example, if you have more than two groups whose means you wish to compare, you can no longer use a t-test: instead, you may use an ANOVA.

ANOVA stands for ANalysis Of VAriance and is a type of parametric comparison test used when you wish to compare three or more groups. As with t-tests, the independent variable is the variable which consists of these groups. These can be:

  • the same group measured at three or more time points
  • three or more different groups
  • a mix of the two

ANOVAs are parametric tests, and therefore have the assumption that the data is normally distributed, homogeneous and independent. You must always check these assumptions before diving into analysis. If you have non-parametric data (decided because one or more of the parametric assumptions was violated), you will need to perform a non-parametric equivalent test.

There are many kinds of ANOVA tests, so pick which one is best for your situation.


Guide contents

The tabs of this guide will support you in performing different ANOVAs. The sections are organised as follows:

  • F tests - what F tests are
  • Recap: Parametric Assumptions - what it means to have parametric data
  • Repeated Measures ANOVA - for groups from the same population (within-subjects)
  • One-Way ANOVA - for groups from differing populations (between-subjects)
  • Factorial ANOVA - when the ANOVA has more than one independent variable
  • Mixed Measures ANOVA - when your groups are from the same population for some and differing populations for others
  • ANCOVA - Analysis of Covariance
  • MANOVA - Multivariate Analysis of Variance
  • MANCOVA - Multivariate Analysis of Covariance
  • Post-Hoc Tests - after the ANOVA

F Tests

Unlike t-tests, ANOVAs are types of F-tests, which means that they measure how well the different categories in the independent variable explain the variance of the dependent variable.


Interpreting F

The better the categories are at explaining the variance, the greater the relationship between the independent and dependent variables, and the greater the value that the F statistic will take. This means that a large F value indicates greater evidence of difference between the group means of the different categories.

If the categories explain little of the variance of the dependent variable, the between-group variance will be similar to the within-group variance and the F value will be small (close to 1).
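To see F behave in practice, here is a minimal Python sketch using SciPy's one-way ANOVA function (this is an illustration only; the group values are made up):

    # Minimal sketch, assuming SciPy is installed; the data are hypothetical.
    from scipy import stats

    group_a = [4.1, 5.0, 4.7, 5.3]   # three made-up groups of measurements
    group_b = [6.2, 5.9, 6.8, 6.1]
    group_c = [5.1, 4.9, 5.5, 5.0]

    # F is the ratio of between-group variance to within-group variance:
    # a large F suggests the grouping explains a lot of the variance.
    f_value, p_value = stats.f_oneway(group_a, group_b, group_c)
    print(f"F = {f_value:.2f}, p = {p_value:.4f}")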


F or t?

Both F and t statistics are used to compare the means of groups.

Unlike F, the t statistic is able to report directionality, which means we can hypothesise that one group is bigger/better/greater/etc. than the other group and run a one-tailed test, halving the p-value. This is not possible with F, and is the reason why statistically significant ANOVA tests need to be followed up with a post-hoc test.

However, statisticians consider F to be more robust than t, which means that we can trust the accuracy of the result even when the underlying assumptions are violated, for example when the population variances are unequal. 

Parametricity

ANOVAs are parametric tests. Although definitions of parametricity vary across sources, in general this means that your data should be:

  • Normally distributed
  • Homogeneous
  • Independent

You must check these assumptions before attempting to perform an ANOVA (or during, depending on the software you use). This is because, if any of these assumptions fail, you cannot continue with these tests and must use a non-parametric equivalent.


Normally Distributed

Continuous data can be plotted in a histogram to display the shape the distribution takes. When this distribution is shown to be 'normal' we say that the data is 'normally distributed'.

Figure: a normally distributed set of data, with a normal curve placed on top to highlight this.

A Q-Q plot can also be used to check the distribution of your data.

Alternatively, instead of visually inspecting your data's distribution using a graph, you can use a test:

  • Shapiro-Wilk test
  • Anderson-Darling test.

Note that categorical data can never be normally distributed! This is because it is neither interval nor ratio data, so it does not make sense to check its distribution. Normality should only be checked on your numerical data, e.g. measurements, counts, etc.
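If you would rather run these checks in code than by eye, here is a minimal Python sketch of the two tests listed above (assuming SciPy is installed; the 'measurements' values are made up):

    # Minimal sketch, assuming SciPy; 'measurements' is a hypothetical sample.
    from scipy import stats

    measurements = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7, 5.4, 5.0]

    # Shapiro-Wilk: p > 0.05 suggests no evidence against normality.
    shapiro_stat, shapiro_p = stats.shapiro(measurements)

    # Anderson-Darling: compare the statistic with the returned critical values.
    anderson_result = stats.anderson(measurements, dist='norm')

    print(shapiro_p)
    print(anderson_result.statistic, anderson_result.critical_values)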

If your data does not take the shape of the normal distribution, you can do either of two things:

  1. Use a non-parametric equivalent to the parametric test you wish to do
  2. Try and transform your data.

Homogeneity

Homogeneous data means that the groups have roughly equal variance. You can test for homogeneity using:

  • Box plots
  • Bartlett's test
  • Levene's test.

If your data fails the homogeneity assumption, you need to use a non-parametric test equivalent to the one you wished to perform, otherwise your results will become untrustworthy.
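Both Levene's and Bartlett's tests are also available outside SPSS; a minimal Python sketch with three hypothetical groups (assuming SciPy is installed):

    # Minimal sketch, assuming SciPy; the groups are hypothetical.
    from scipy import stats

    group_a = [4.1, 5.0, 4.7, 5.3]
    group_b = [6.2, 5.9, 6.8, 6.1]
    group_c = [5.1, 4.9, 5.5, 5.0]

    # p > 0.05 in either test suggests the equal-variance assumption is reasonable.
    levene_stat, levene_p = stats.levene(group_a, group_b, group_c)
    bartlett_stat, bartlett_p = stats.bartlett(group_a, group_b, group_c)
    print(levene_p, bartlett_p)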


Independence

Having independent data means that your observations do not influence one another: there is no relationship between observations. This is controlled via your study design, and you can check for independence using:

  • Durbin-Watson test
  • Contingency table.
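For the Durbin-Watson test, which is usually run on the residuals of a fitted model, here is a minimal Python sketch (assuming statsmodels is installed; the data are simulated purely for illustration):

    # Minimal sketch, assuming numpy and statsmodels; the data are simulated.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    rng = np.random.default_rng(0)
    x = np.arange(20.0)
    y = 2.0 * x + rng.normal(size=20)        # hypothetical outcome

    model = sm.OLS(y, sm.add_constant(x)).fit()
    # Values near 2 suggest little autocorrelation between successive observations.
    print(durbin_watson(model.resid))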

Repeated Measures ANOVA

What is This ANOVA?

A repeated measures ANOVA is used to compare the group means of a within-subjects design experiment. This means that this ANOVA is not used for multiple different groups of participants, but rather for one set of participants subjected to the same treatments and measured at multiple time points. The name 'repeated measures' refers to the fact that the groups (or 'measures') are being repeated over time.

Therefore, this test can be thought of as an extension of the paired t-test; its non-parametric equivalent is the Friedman test.


When to Perform a Repeated Measures ANOVA

If you wish to compare the means of three or more groups from the same population then you can use a repeated measures ANOVA.

In order to trust that the results of the test are accurate, in addition to the parametric assumptions, the data must also meet the assumption of sphericity. This means that the variances of the differences between all combinations of related groups are equal, and is therefore similar to homogeneity. Sphericity can be tested for using Mauchly's Test.

 

Example

For example, a medical researcher wishing to investigate if a course of a new iron supplement has an effect on patients' blood ferritin levels may measure these levels before the supplement, during the course of the supplements, and a week after the course has been completed. Here, the independent variable is the time point (before, during, after) and the dependent variable is the patients' ferritin levels.

If the researcher was only measuring the ferritin levels before and after the course of iron supplements, a paired t-test would be a better test to perform than an ANOVA, as there are only two groups involved in the analysis.
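If you prefer to work in code rather than SPSS, here is a minimal sketch of this repeated measures ANOVA using Python's statsmodels (the column names 'patient', 'time' and 'ferritin', and all of the values, are hypothetical):

    # Minimal sketch, assuming pandas and statsmodels; the data are made up and
    # laid out in 'long' format: one row per patient per time point.
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    df = pd.DataFrame({
        'patient':  [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
        'time':     ['before', 'during', 'after'] * 4,
        'ferritin': [30, 45, 55, 28, 40, 50, 35, 48, 60, 32, 44, 52],
    })

    # One within-subjects factor ('time') with three levels.
    result = AnovaRM(df, depvar='ferritin', subject='patient', within=['time']).fit()
    print(result)   # reports F and p for the 'time' effect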


SPSS How-To

In SPSS, lay your data out so that each of your three or more groups has its own column (so each group is a separate variable). Make sure that your data is paired, so that each participant's results are in the same row.

When you are ready to perform the test:

  • Go to Analyse at the top, then choose General Linear Model and Repeated Measures... 
  • In the dialogue box, type in the name you wish to give to your independent variable into the Within-Subject Factor Name box.
    • For example, if your groups are to do with time, you can type 'Time' in here.
  • Underneath this, type the number of groups you have into the Number of Levels box and click Add.
  • Type in the name you wish to give to your dependent variable into the Measure Name box.
    • For example, if your measurements are to do with blood ferritin levels, you can type 'Ferritin'.
  • Click Add, and then Define.
  • In the new dialogue box, transfer your group variables into the Within-Subjects Variables box by either using the arrow button or by clicking-and-dragging them over, then click Plots.
  • In the new dialogue box, transfer your named within-subjects factor (in the example, this was 'Time') from the Factors box into the Horizontal Axis box, either using the arrow button or by clicking-and-dragging. Click Add, then Continue.
  • Click EM Means to bring up a new dialogue box, and transfer the within-subjects factor (again, from the example this was 'Time') from the Factor(s) and Factor Interactions box into the Display Means for: box. Make sure the tick-box for Compare main effects is checked and select 'Bonferroni' from the drop-down named Confidence interval adjustment.
  • Click Continue to be brought back to the initial dialogue box.
  • Now, click Options, and in the corresponding dialogue box ensure the Descriptive statistics and Estimates of effect size tick-boxes are selected. Click Continue.
  • Click OK.

 

Your output will consist of four tables and one plot:

  1. The first table is the 'Within-Subjects Factors' table, which confirms which groups exist in the independent variable. 
  2. The second table, 'Descriptive Statistics', contains the mean, standard deviation and sample size for each of these groups.
  3. The third table is the 'Tests of Within-Subjects Effects', which is used to tell if an overall significant difference between the means exists at different time points. For this, you should look at the row named 'Greenhouse-Geisser'.
  4. The final table is named 'Pairwise Comparisons' and is used to display the post-hoc results. See the final tab 'Post-Hoc Tests' in this guide for more information.
  5. The plot is the 'Estimated Marginal Means', which has the mean response for each group, adjusted for any other variables you may include in the model.

One-Way ANOVA

What is This ANOVA?

A one-way ANOVA is a type of ANOVA used to compare three or more distinct groups of participants, otherwise known as 'between-subjects' groups. Therefore, this ANOVA is best used when you have three or more separate groups of different participants whose group means you wish to compare.

In this way, the one-way ANOVA is an expansion of the independent samples t-test.


When to Use a One-Way ANOVA

You can perform a one-way ANOVA if you have three or more distinct groups of participants, patients, etc. and you wish to observe if a significant difference exists between the means of the groups. Therefore, your independent variable needs to encompass these groups. In fact, a one-way ANOVA can be used with only two groups, however this comparison is more commonly performed with an independent samples t-test.

Your dependent variable should be the thing you are measuring and needs to be continuous (interval or ratio data). If your data is ordinal, consider using a  Kruskal-Wallis H test instead.

Similarly to a repeated measures ANOVA, your data needs to be parametric and have no significant outliers. Sphericity does not apply here, as it is only relevant to within-subjects designs; instead, the groups should have homogeneous variances.

 

Example

A study which investigates the differences in price of a household's primary vehicle by the income bracket of the household would be suitably analysed with a one-way ANOVA. The independent variable would be income brackets, as those are the groups your participants are in, and your dependent variable would be the vehicle price.

Notice that a one-way ANOVA is best for this scenario because the groups are distinct: no household can exist in more than one bracket! 
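Outside of SPSS, a minimal Python sketch of this one-way ANOVA using statsmodels might look like the following (the column names and values are hypothetical):

    # Minimal sketch, assuming pandas and statsmodels; the data are made up.
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.DataFrame({
        'income_bracket': ['low'] * 4 + ['middle'] * 4 + ['high'] * 4,
        'vehicle_price':  [6, 7, 5, 8, 12, 14, 11, 13, 25, 22, 27, 24],
    })

    # One between-subjects factor ('income_bracket') and a continuous outcome.
    model = smf.ols('vehicle_price ~ C(income_bracket)', data=df).fit()
    print(anova_lm(model))   # ANOVA table with the F value and p-value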


SPSS How-To

In SPSS, lay your data out so that your independent variable (containing your group codes) is in one column and your dependent variable (your measurements) is in another. Ensure that your categorical variable is properly coded, and that each participant's data are in the same row.

When you are ready to perform the test:

  • Go to Analyse in the top ribbon, then Compare Means and One-Way ANOVA.
  • In the dialogue box, transfer your dependent variable into the Dependent List box, using either the arrow button or by clicking-and-dragging it over, and do the same for your independent variable into the Factor box.
  • Click Post Hoc.
  • In the new dialogue box, click whichever post-hoc test you wish to use in the Equal Variances Assumed section (we recommend Tukey). 
  • Click Continue.
  • Click Options.
  • In the new dialogue box, ensure that the Descriptive tick-box is checked, as well as the Exclude cases analysis by analysis option below that.
  • Click Continue.
  • Click OK.

 

The output will consist of three tables:

  1. The 'Descriptives' table contains descriptive statistics on each of your groups.
  2. The 'ANOVA' table consists of the results of the ANOVA test. You can read the F value and significance level (p-value) in this table.
  3. The 'Multiple Comparisons' table contains the results of the post-hoc test. If your ANOVA was significant, this table will show you where the differences between the groups lie.

Factorial ANOVA

What is This ANOVA?

A factorial ANOVA allows for comparison in more complex analyses by extending the number of ways participants can be grouped. The word 'factorial' here refers to the fact that more than one independent variable is used. Therefore, a two-way, three-way or, by the more general term, a factorial ANOVA is used to compare groups which have been split into two, three or, generally, more than one independent variable.

The total number of treatments is obtained by multiplying the number of groups in each independent variable together. For example, a 2x2 ANOVA will yield 2² = 4 treatments, a 2x3 ANOVA will yield 2 x 3 = 6 treatments, and a 2x2x2 ANOVA will yield 2³ = 8 treatments.


When to Perform a Factorial ANOVA

If you have more than one independent variable which are all between-subjects you can use a factorial ANOVA.

Two independent variables would require a two-way ANOVA, three independent variables would require a three-way ANOVA, and so on.

Factorial ANOVAs have the following assumptions:

  • Dependent variable is ratio or interval (i.e. is continuous)
  • Independence of observations
  • Homogeneity of variances for each combination of groups of the independent variables
  • The dependent variable is approximately normally distributed across each group of the independent variables
  • No significant outliers

 

Example

A researcher wishing to observe the effects of two pain medications being taken at the same time would use a two-way ANOVA. There would be two independent variables, one for each medicine, with each having two groups (real medicine and placebo). Therefore there are 2² = 4 treatments involved:

  1. Participants who receive both pain medications
  2. Participants who receive one medication and one placebo
  3. Participants who receive one placebo and one medication
  4. Participants who receive both placebos.
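For readers working in code, a minimal Python sketch of this 2x2 design using statsmodels (column names and values are hypothetical; the '*' in the formula adds both main effects and the interaction):

    # Minimal sketch, assuming pandas and statsmodels; the data are made up.
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.DataFrame({
        'drug_a':      ['real', 'real', 'placebo', 'placebo'] * 4,
        'drug_b':      ['real', 'placebo', 'real', 'placebo'] * 4,
        'pain_relief': [8, 6, 5, 2, 7, 5, 6, 3, 9, 6, 5, 2, 8, 5, 4, 3],
    })

    # Two between-subjects factors plus their interaction (drug_a:drug_b).
    model = smf.ols('pain_relief ~ C(drug_a) * C(drug_b)', data=df).fit()
    print(anova_lm(model, typ=2))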

SPSS How-To

This how-to is specifically for a two-way ANOVA. If you have more than two independent variables, you will need to adjust accordingly. 

Lay your data out in SPSS so that your independent variables (containing your groups) are in two separate columns and your dependent variable (your measurements) is in a third. Ensure that each participant's data are in the same row.

When you are ready to perform the test:

  • Go to Analyse in the top ribbon, then General Linear Model and select Univariate.
  • In the dialogue box, transfer your dependent variable into the Dependent Variable box, using either the arrow button or by clicking-and-dragging it over, and do the same for your independent variables into the Fixed Factor(s) box.
  • Click Plots, and in the new dialogue box transfer one of your independent variables into the Horizontal Axis box, and the other into the Separate Lines box, again by using either the arrow buttons or clicking-and-dragging. 
  • Click Add.
  • Repeat this process, this time with the independent variables the other way round (as in, the one you previously inputted into the Horizontal Axis box now needs to go into the Separate Lines box, and the other way round). Click Add when finished.
  • Click Continue.
  • Click EM Means, and in the new dialogue box, transfer your interaction term (denoted with the asterisk *) into the Display Means for box, again using either the arrow button or clicking-and-dragging.
  • Click Continue.
  • Click Post Hoc, and in the new dialogue box transfer one of your independent variables (the one you wish to compare the difference between groups according to the other independent variable) into the Post Hoc Tests for box.
  • In the Equal Variances Assumed section, select whichever post-hoc test you wish to use. 
  • Click Continue.
  • Click Options. In the new dialogue box, ensure that the Descriptive tick-box is checked.
  • Click Continue.
  • Click OK.

 

The output will then consist of the following tables:

  1. The first is the 'Descriptives' table, which contains descriptive statistics on each of your groups.
  2. The 'ANOVA' table consists of the results of the ANOVA test. You can read the F value and the significance level (p-value) in this table.
  3. The 'Multiple Comparisons' table contains the results of the post-hoc test. If your ANOVA was significant, this table will show you where the differences between the groups lie.

Mixed Measures ANOVA

What is This ANOVA?

A mixed measures ANOVA is a type of ANOVA which can accommodate two or more independent variables where at least one is between-subjects and at least one other is within-subjects. This means that researchers can gain a more in-depth understanding of the variability in their data by isolating the effects of interventions across different groups.


When You Can Do a Mixed Measures ANOVA

The mixed measures ANOVA is suitable for more complex research designs which include two or more independent variables, with a mix of between-subjects and within-subjects effects.

In order to trust that the results of the test are accurate, and in addition to the parametric assumptions, this test requires that your data must also meet the assumption of sphericity. This means that the variances of the differences for your within-subjects factors are equal, and is therefore similar to homogeneity. Sphericity can be tested for using Mauchly's Test.

 

Example

For example, an agricultural researcher could use a mixed measures ANOVA to investigate the difference between an organic and an artificial fertiliser on soil acidity across different weather conditions. There are two independent variables: the between-subjects variable is the fertiliser type, and the within-subjects variable is the weather condition. The dependent variable is soil acidity.
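A minimal code sketch of this design, using the third-party Python package pingouin (the column names 'plot', 'fertiliser', 'weather' and 'acidity', and all of the values, are hypothetical):

    # Minimal sketch, assuming pandas and pingouin; the data are made up and in
    # long format: one row per plot per weather condition.
    import pandas as pd
    import pingouin as pg

    df = pd.DataFrame({
        'plot':       [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
        'fertiliser': ['organic'] * 6 + ['artificial'] * 6,   # between-subjects
        'weather':    ['dry', 'wet'] * 6,                     # within-subjects
        'acidity':    [6.1, 5.8, 6.3, 5.9, 6.2, 5.7, 6.8, 6.2, 6.7, 6.4, 6.9, 6.3],
    })

    aov = pg.mixed_anova(data=df, dv='acidity', within='weather',
                         subject='plot', between='fertiliser')
    print(aov)   # rows for the between effect, the within effect and their interaction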


SPSS How-To

In SPSS, arrange your data to represent both between-subject and within-subject factors, ensuring that categorical variables are coded properly. What this should look like therefore is:

  • One column for your participant identifier
  • One column (each) for your between-subjects independent variable(s)
  • One column for each group of your within-subjects independent variable(s) - therefore, if you have three groups in one within-subjects independent variable, you will require three columns of data.

When you are ready for analysis:

  • Go to Analyse at the top, then choose General Linear Model and Repeated Measures... 
  • In the dialogue box, type in the name you wish to give to your within-subjects independent variable into the Within-Subject Factor Name box.
    • For example, if your groups are to do with weather conditions, you can type 'Weather' in here.
  • Underneath this, type the number of groups you have into the Number of Levels box and click Add.
  • Type in the name you wish to give to your dependent variable into the Measure Name box.
    • For example, if your measurements are to do with soil acidity levels, you can type 'Soil'.
  • Click Add, and then Define.
  • In the new dialogue box, transfer your group variables into the Within-Subjects Variables box by either using the arrow button or by clicking-and-dragging them over.
    • What this means is, if you have noted three groups earlier in the number of levels of your within-subjects independent variable, you need to transfer these into the Within-Subjects Variables box.
  • Transfer your between-subjects variable(s) into the Between-Subjects Factor(s) box.
  • Click Plots, and in this new dialogue box, transfer your named within-subjects factor (in the example, this was 'Weather') from the Factors box into the Horizontal Axis box, either using the arrow button or by clicking-and-dragging. Then, transfer the between-subjects independent variable into the Separate Lines box and click Add.
    • Repeat this process with different within-subjects and between-subjects variables if you have more than one of each! Make sure you get each interaction combination.
  • Click Continue.
  • Click Post Hoc... to bring up a new dialogue box, and transfer the between-subjects variables into the Post Hoc Tests for: box, again either using the arrow button or clicking-and-dragging. 
  • Depending on the results of your homogeneity test you are either assuming equal variances or not. Select the post-hoc test which best suits your conditions.
    • If you are not sure of this, select one from both areas! For example, you can tick the tick-boxes for Tukey and Dunnett's T3, and your output will provide both. All you need to do then is read the output which is most appropriate for you, and ignore the output for the other.
  • Click Continue to be brought back to the initial dialogue box.
  • Click Save, and in the new dialogue box ensure that the Studentized tick-box is checked. Click Continue.
  • Click EM Means to bring up a new dialogue box, and transfer the independent variables and their interaction terms we formed earlier from the Factor(s) and Factor Interactions box into the Display Means for: box.
  • Make sure the tick-box for Compare main effects is checked and select 'Bonferroni' from the drop-down named Confidence interval adjustment.
  • Click Continue to be brought back to the initial dialogue box.
  • Click Options, and in the corresponding dialogue box ensure the Descriptive statistics, Estimates of effect size and Homogeneity tests tick-boxes are selected. Click Continue.
  • Click OK.

What is an ANCOVA?

ANCOVA stands for ANalysis of COVAriance and is used for more complicated analyses, where a researcher suspects that an independent variable's impact on the dependent variable is being affected by a third 'covariate' variable.

The ANCOVA works by controlling for the covariate variable using a regression analysis.


When to Perform an ANCOVA

Your study needs to consist of a categorical independent variable, a continuous dependent variable and a continuous covariate variable. The relationship between the dependent and covariate variables needs to be linear (if this relationship is non-linear, consider a MANOVA instead).

ANCOVA, being a type of ANOVA, is a parametric test which means that the usual parametric assumptions hold, and there should also be no extreme outliers in the data.

ANCOVAs have the following assumptions:

  • Dependent variables are interval or ratio (i.e. are continuous)
  • Covariate variables are continuous
  • Dependent variables and covariates have a linear relationship between each pair of dependent and covariate, within each group of the independent variable(s)
  • Independence of observations
  • Homogeneity of variance of the dependent variable
  • Normality of the dependent variable across each group of the independent variable(s)
  • No significant outliers

Assumptions will need to be checked so that you know if an ANCOVA is the most appropriate test to perform.

 

Example

A researcher in Education who wishes to investigate the impact different study methods have on students' exam scores but also wants to account for the number of lectures those students attend can perform an ANCOVA. The independent variable would be the different study techniques, the dependent variable is the exam score and the covariate variable is the number of lectures attended.
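Outside SPSS, a minimal Python sketch of this ANCOVA with statsmodels might look like the following (the column names 'study_method', 'lectures' and 'exam_score', and the values, are hypothetical; adding the continuous covariate to the formula is what adjusts the ANOVA for it):

    # Minimal sketch, assuming pandas and statsmodels; the data are made up.
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.DataFrame({
        'study_method': ['flashcards'] * 4 + ['practice_papers'] * 4 + ['rereading'] * 4,
        'lectures':     [10, 12, 8, 11, 9, 13, 10, 12, 7, 11, 9, 10],   # covariate
        'exam_score':   [62, 70, 55, 66, 75, 82, 71, 79, 50, 61, 54, 58],
    })

    # Categorical factor plus continuous covariate = ANCOVA.
    model = smf.ols('exam_score ~ C(study_method) + lectures', data=df).fit()
    print(anova_lm(model, typ=2))   # F test for study method, adjusted for lectures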


SPSS How-To

The following guide is specifically for a one-way ANCOVA. A factorial ANCOVA, with more than one independent variable, will need to be performed differently according to the number of independent variables in the model.

In SPSS, have your data laid out in a way that your groups of your independent variable lie in one column, and the measurements of your dependent variable and covariate(s) exist in other columns. Label your columns as nominal/ordinal and scale respectively. Make sure that your data is paired, so that each participant's results exist in the same row.

When you are ready to perform the test:

  • Go to Analyse at the top, then choose General Linear Model and Univariate
  • In the dialogue box, transfer your dependent variable into the Dependent Variables box, your independent variable into the Fixed Factor(s) box, and your covariate variable(s) into the Covariate(s) box. Do this either by using the arrow buttons or by clicking-and-dragging the variables over.
  • Click Options
  • In the new dialogue box, select the tick-boxes for Descriptive statistics, Estimates of effect size and Homogeneity tests.
  • Click Continue.
  • Click EM Means.
  • In this new dialogue box, transfer over the independent variable from the Factor(s) and Factor Interactions box into the Display Means for: box. Tick the tick-box for Compare main effects and select 'Bonferroni' from the drop-down named Confidence interval adjustment.
  • Click Continue.
  • Click Plots and transfer your independent variable into the box labelled Horizontal Axis, then click Add.
  • Click Continue to be brought back to the original dialogue box.
  • Click OK.

 

The output this generates will consist of the following boxes and plots:

  1. The first table is the 'Descriptive Statistics' table, containing the mean, standard deviation and sample size for each group in the model.
  2. The second table is 'Levene's Test of Equality of Error Variances', which should be checked to ensure that the variances between groups are approximately equal.
  3. The third table is named 'Tests of Between-Subjects Effects', and displays different statistical information about the ANCOVA, including the effect size (given by the Partial Eta Squared column).
  4. The fourth table is the 'Estimates' table, which provides the adjusted mean, standard error and 95% confidence interval for each group.
  5. The last table is named 'Pairwise Comparisons', which compare the different groups involved in the model.
  6. The plots are the 'Estimated Marginal Means', which show the adjusted mean response for each group, controlling for the covariate.

What is a MANOVA?

MANOVA stands for Multivariate ANalysis Of VAriance, and the only way it differs from a regular ANOVA is that it allows for more than one dependent variable to be included in the analysis. Like factorial ANOVAs, MANOVAs are also able to have more than one independent variable present in the model.

MANOVA tests will indicate the differences in group means in the independent variable while considering the relationships between multiple dependent variables. In this way, one MANOVA test will provide more detail and nuance compared to performing multiple separate ANOVAs, each with different dependent variables.


When to Perform a MANOVA

Performing multiple ANOVA tests, where you have the same independent variable(s) but different dependent variables each time, increases the family-wise error rate, which means that you are at greater risk of making Type I errors. In this situation, a MANOVA is more desirable. In addition to lowering the risk of Type I errors, a MANOVA also makes it possible to assess relationships between dependent variables in terms of the influence of the independent variable(s). A MANOVA is a good choice when your dependent variables are correlated.

MANOVA tests have the following assumptions:

  • Dependent variables are either interval or ratio data (i.e. are continuous)
  • Dependent variables have a linear relationship between each pair of dependent variables, within each group of the independent variable(s)
  • Independence of observations
  • Multivariate homogeneity of covariances
  • Multivariate normality of residuals
  • No significant outliers
  • No multicollinearity

You should check for these assumptions before/whilst you perform your MANOVA test.

 

Example

A psychologist may wish to investigate the effects of three intervention techniques on subjects' psychological, cognitive and emotional ratings according to standardised scores, compared to a control group. The independent variable is the intervention, with four groups (the three interventions plus one control), and the three dependent variables are the scores for the psychological, cognitive and emotional states of the subjects.
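A minimal Python sketch of a one-way MANOVA with statsmodels follows (the column names and group labels are hypothetical, and the data are randomly generated purely to make the example runnable):

    # Minimal sketch, assuming numpy, pandas and statsmodels; data are simulated.
    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    rng = np.random.default_rng(1)
    groups = np.repeat(['control', 'cbt', 'mindfulness', 'exercise'], 8)
    df = pd.DataFrame({
        'intervention': groups,                               # independent variable
        'psych':        rng.normal(60, 8, size=groups.size),  # three dependent
        'cognitive':    rng.normal(55, 8, size=groups.size),  # variables
        'emotional':    rng.normal(62, 8, size=groups.size),
    })

    manova = MANOVA.from_formula('psych + cognitive + emotional ~ C(intervention)',
                                 data=df)
    print(manova.mv_test())   # Pillai's trace, Wilks' lambda, etc.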


SPSS How-To

The following is to perform a one-way MANOVA.

In SPSS, have your data laid out in a way that your groups of your independent variable lie in one column, and the measurements of your dependent variables exist in other columns. Label your columns as nominal/ordinal and scale respectively. Make sure that your data is paired, so that each participant's results exist in the same row.

When you are ready to perform the test:

  • Go to Analyse at the top, then choose General Linear Model and Multivariate
  • In the dialogue box which appears, transfer your dependent variables into the Dependent Variables box, and your independent variable into the Fixed Factor(s) box. Do this either by using the arrow buttons or by clicking-and-dragging the variables over.
  • Click Plots and transfer your independent variable into the box labelled Horizontal Axis, then click Add.
  • Click Continue to be brought back to the original dialogue box.
  • Click Post Hoc.
  • In the corresponding dialogue box, transfer the independent variable into the box labelled Post Hoc Tests for, either by using the arrow button or clicking-and-dragging it across. 
  • Underneath these boxes, ensure that the tick-box for Tukey is checked in the section labelled Equal Variances Assumed.  Click Continue to be brought back to the first dialogue box again.
  • Click EM Means to bring up a new dialogue box, and transfer the independent variable from the Factor(s) and Factor Interactions box into the Display Means for: box. Tick the tick-box for Compare main effects and select 'Bonferroni' from the drop-down named Confidence interval adjustment.
  • Now, click Options, and in the corresponding dialogue box ensure the Descriptive statistics and Estimates of effect size tick-boxes are selected. Click Continue.
  • Click OK.

 

Your output will consist of four tables and a few plots:

  1. The first table is the 'Descriptive Statistics' table, containing the mean, standard deviation and sample size for each of your groups.
  2. The second table is the 'Multivariate Tests' table, which consists of the result of the MANOVA according to multiple different measures. You can ignore the top section 'Intercept' and only pay attention to the one below, named whatever your independent variable is named.
  3. The third table is the 'Tests of Between-Subjects Effects', which displays measurements according to the different variables used, and can be interpreted to see how the dependent variables differ according to the independent variable.
  4. The fourth table is named 'Multiple Comparisons' and is used to display the post-hoc results according to the post-hoc test selected. In the how-to guide above, we selected the Tukey test, so the results of that test are shown here.
  5. The plots are the 'Estimated Marginal Means', which show the mean response for each group.

What is a MANCOVA?

MANCOVA stands for Multivariate ANalysis Of COVAriance. A MANCOVA is a MANOVA test (one which contains multiple dependent variables) which also allows for the presence of covariate variables.

Like with MANOVA, MANCOVA tests indicate the differences in group means in the independent variable while considering the relationships between multiple dependent variables, this time while also taking into consideration the effect of a covariate.


When to Do a MANCOVA

MANCOVA tests have the following assumptions:

  • Dependent variables are interval or ratio (i.e. are continuous)
  • Covariate variables are continuous
  • Dependent variables have a linear relationship between each pair of dependent variables, within each group of the independent variable(s)
  • Dependent variables and covariates have a linear relationship between each pair of dependent and covariate, within each group of the independent variable(s)
  • Independence of observations
  • Homogeneity of regression slopes
  • Multivariate homogeneity of covariances
  • Multivariate normality
  • No significant outliers

 

Example

A nutritionist investigating the effect that the type of breakfast has on students' English, mathematics and science exam scores, whilst accounting for the students' attendance, can perform a MANCOVA.

The independent variable is the breakfast type, the covariate is the student attendance, and the three dependent variables are the students' exam scores in English, mathematics and science.
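A minimal Python sketch of this MANCOVA using statsmodels is shown below (the column names are hypothetical and the data are randomly generated just to make the example runnable; adding the continuous 'attendance' covariate to the MANOVA formula is what makes it a MANCOVA):

    # Minimal sketch, assuming numpy, pandas and statsmodels; data are simulated.
    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    rng = np.random.default_rng(2)
    n = 30
    df = pd.DataFrame({
        'breakfast_type': rng.choice(['cereal', 'cooked', 'none'], size=n),
        'attendance':     rng.integers(50, 100, size=n),   # covariate (%)
        'english':        rng.normal(65, 10, size=n),
        'maths':          rng.normal(60, 12, size=n),
        'science':        rng.normal(62, 11, size=n),
    })

    mancova = MANOVA.from_formula(
        'english + maths + science ~ C(breakfast_type) + attendance', data=df)
    print(mancova.mv_test())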


SPSS How-To

This guide is specifically for a one-way MANCOVA. A factorial MANCOVA, with more than one independent variable, will need to be adjusted accordingly.

In SPSS, have your data laid out in a way that your groups of your independent variable lie in one column, and the measurements of your dependent variables and covariate(s) exist in other columns. Label your columns as nominal/ordinal and scale respectively. Make sure that your data is paired, so that each participant's results exist in the same row.

When you are ready to perform the test:

  • Go to Analyse at the top, then choose General Linear Model and Multivariate
  • In the dialogue box, transfer your dependent variables into the Dependent Variables box, your independent variable into the Fixed Factor(s) box, and your covariate variables into the Covariate(s) box. Do this either by using the arrow buttons or by clicking-and-dragging the variables over.
  • Click Plots and transfer your independent variable into the box labelled Horizontal Axis, then click Add.
  • Click Continue to be brought back to the original dialogue box.
  • Click EM Means to bring up a new dialogue box.
  • Transfer over the independent variable from the Factor(s) and Factor Interactions box into the Display Means for: box. Tick the tick-box for Compare main effects and select 'Bonferroni' from the drop-down named Confidence interval adjustment.
  • Click Continue.
  • Click Options. In the new dialogue box, ensure the Descriptive statistics and Estimates of effect size tick-boxes are selected.
  • Click Continue.
  • Click OK.

 

The SPSS output will consist of a number of tables and a few plots:

  1. The first table is the 'Descriptive Statistics' table, containing the mean, standard deviation and sample size for each of your groups.
  2. The second table is 'Box's M Test for Homogeneity of Covariance Matrices', which should be checked to see if the homogeneity of variances assumption has been violated.
  3. The third table is the 'Multivariate Tests' table, which consists of the result of the MANCOVA model according to multiple different measures. You can ignore the top section 'Intercept' and only pay attention to the one below, named whatever your independent variable is named.
  4. The fourth table is the 'Tests of Homogeneity of Variances', which contains the results of Levene's test to ensure that the variances between groups are approximately equal.
  5. The fifth table is named 'Tests of Between-Subjects Effects', and displays different statistical information about the MANCOVA.
  6. The sixth table is named 'Post Hoc Tests' and is used to display the post-hoc results according to the post-hoc test decided, if one was selected.
  7. The plots are the 'Estimated Marginal Means', which show the mean response for each group.

What are Post-Hoc Tests?

ANOVA tests are able to identify if a difference between the means of the different categories exists or not, but not where this difference lies: in other words, ANOVAs cannot detect by themselves which categories of the independent variable(s) are statistically different to others. Post-hoc tests exist to investigate where the difference lies (if one has been detected!) while controlling for the family-wise error rate.

Therefore, if your ANOVA has indicated a significant difference between the group means, you will need to perform a post-hoc test to investigate which groups are statistically different to others. If your ANOVA has not produced a significant result, there is no need to perform a post-hoc test at all.

Post-hoc tests are good at controlling the family-wise error rate, but in doing so the power of the comparisons decreases: controlling the inflated chance of obtaining false-positive results comes at the cost of reduced power. A way to mitigate this is to reduce the number of comparisons being made, by choosing a post-hoc test which makes fewer group comparisons.
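As an illustration, Tukey's HSD (one of the tests listed below) can be run in Python with statsmodels; here is a minimal sketch reusing the hypothetical income-bracket example from the one-way ANOVA tab:

    # Minimal sketch, assuming pandas and statsmodels; the data are made up.
    import pandas as pd
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    df = pd.DataFrame({
        'income_bracket': ['low'] * 4 + ['middle'] * 4 + ['high'] * 4,
        'vehicle_price':  [6, 7, 5, 8, 12, 14, 11, 13, 25, 22, 27, 24],
    })

    tukey = pairwise_tukeyhsd(endog=df['vehicle_price'],
                              groups=df['income_bracket'], alpha=0.05)
    print(tukey.summary())   # which pairs of brackets differ, with adjusted p-values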


Different Types

There are many different types of post-hoc tests to choose from for your analysis. Listed below are only a few, and there is much debate over which tests are the most appropriate in certain situations, so it is recommended that analysts research which test is best for their data.

Bonferroni Procedure
  • What: best used for certain planned group comparisons, not all.
  • Advantages: does not assume test independence, and has no requirement of equal sample sizes between groups.
  • Disadvantages: a low power test, which means Type II errors are more likely to occur.
  • Distribution used: none specifically.

Dunn's Test
  • What: a non-parametric post-hoc test which compares groups by comparing the difference in the sum of ranks, as opposed to group means. Does not require normality in your groups, and is therefore most appropriate to use after a Kruskal-Wallis test, as opposed to an ANOVA.
  • Advantages: is appropriate for unequal sample sizes between groups.
  • Disadvantages: best used when working with a small subset of all possible pairwise comparisons, as opposed to all possible comparisons.
  • Distribution used: z distribution.

Dunnett's Correction
  • What: used to compare every group mean to a control mean, and not with each other.
  • Advantages: makes no assumption that the variances between groups are equal.
  • Disadvantages: assumes that all groups are sampled from populations with the same standard deviation.
  • Distribution used: t distribution.

Least Significant Difference (LSD) Test
  • What: compares differences between all pairwise groups by performing several t-tests.
  • Advantages: makes comparisons based on the pooled standard deviation from all groups, which gives it higher power than other post-hoc tests.
  • Disadvantages: assumes that all groups are sampled from populations which have the same standard deviation; does not correct for multiple comparisons, so analysts will need to do this themselves.
  • Distribution used: t distribution.

Scheffé's Test
  • What: analyses all possible linear contrasts, not just pairwise comparisons, i.e. can compare multiple groups at the same time, not just two at a time.
  • Advantages: flexible and trustworthy, due to its robustness. Can be used even when groups are of different sizes, and is less sensitive to deviations from normality or unequal population variances.
  • Disadvantages: lower power than other tests.
  • Distribution used: F distribution.

Šídák Correction
  • What: makes pairwise comparisons, making sure all comparisons do not exceed the significance level.
  • Advantages: lower risk of Type II errors compared to the Bonferroni Procedure.
  • Disadvantages: assumes comparisons are independent.
  • Distribution used: t distribution.

Tukey's Honest Significant Difference (HSD) Test
  • What: makes all possible pairwise comparisons.
  • Advantages: makes no assumption of equal sample sizes between groups.
  • Disadvantages: requires normally distributed groups, homogeneity of variance between groups and homoscedasticity.
  • Distribution used: studentised range distribution.