A t-test is a type of comparison test used in hypothesis testing to compare the means of two groups. In such a test, your independent variable is therefore the variable that defines these two groups. These can be:
If you have more than two groups, you cannot use a t-test! Instead, you should use an ANOVA.
t-tests are parametric tests, which means they are used for data which is normally distributed, has homogeneous variance and is independent. Make sure you check these assumptions before attempting to perform a t-test, because if your data is non-parametric, you will need to use a non-parametric test!
Note that t-tests are most effective when you have more than ten participants in each group; however, to ensure adequate statistical power, you should consider calculating the appropriate sample size for your study.
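If you are working outside SPSS, a sample size calculation can be sketched in Python using the statsmodels power module. This is a minimal sketch only: the effect size, significance level and power below are illustrative assumptions that you would replace with values appropriate to your own study.

```python
# A minimal sketch of an a priori sample size calculation for an
# independent-samples t-test. The effect size, alpha and power are
# illustrative assumptions, not recommendations.
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # assumed medium effect size (Cohen's d)
    alpha=0.05,        # significance level
    power=0.8,         # desired statistical power
)
print(f"Participants needed per group: {math.ceil(n_per_group)}")
```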
The tabs of this guide will support you in understanding t-tests. The sections are organised as follows:
t-tests are parametric tests, and although definitions of 'parametric' vary between sources, in general this means that your data should be:
You must check these assumptions before attempting to perform a t-test (or during, depending on the software you use). This is because, if any of these assumptions fail, you cannot continue with a t-test and must use a non-parametric equivalent.
Continuous data can be plotted in a histogram to display the shape the distribution takes. When this distribution is shown to be 'normal' we say that the data is 'normally distributed'.
A Q-Q plot can also be used to check the distribution of your data.
Alternatively, instead of visually inspecting your data's distribution using a graph, you can use a test:
Note that categorical data can never be normally distributed! This is because it is neither interval nor ratio data, so it does not make sense to check its distribution. Normality should be checked on your continuous data, such as measurements or discrete counts.
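If you are not using SPSS, the same normality checks (histogram, Q-Q plot and a formal test) can be sketched in Python with scipy. The `scores` values below are made up purely for illustration; the Shapiro-Wilk test is used here as one example of a formal normality test.

```python
# A minimal sketch of checking normality on a continuous variable.
# The 'scores' values are made up purely for illustration.
import matplotlib.pyplot as plt
from scipy import stats

scores = [12.1, 14.3, 13.8, 15.0, 12.9, 14.7, 13.2, 15.4, 14.1, 13.6]

# Visual checks: histogram and Q-Q plot
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.hist(scores)
ax1.set_title("Histogram")
stats.probplot(scores, dist="norm", plot=ax2)
plt.show()

# Formal check: Shapiro-Wilk test (p > 0.05 suggests normality is plausible)
statistic, p_value = stats.shapiro(scores)
print(f"Shapiro-Wilk: W = {statistic:.3f}, p = {p_value:.3f}")
```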
If your data does not take the shape of the normal distribution, you can do one of two things:
Homogeneous data means that the groups have roughly equal variance (homogeneity of variance). You can test for homogeneity using:
If your data fails the homogeneity assumption, you need to use a test which does not rely on equal variances, such as Welch's t-test (covered later in this guide) or a non-parametric equivalent; otherwise your results will be untrustworthy.
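As an illustration outside SPSS, Levene's test for homogeneity of variance can be sketched in Python with scipy. The two groups below are made-up values used only to show the call.

```python
# A minimal sketch of Levene's test for homogeneity of variance between
# two groups. The group values are made up purely for illustration.
from scipy import stats

group_a = [23.1, 25.4, 24.8, 26.0, 22.9, 25.2]
group_b = [30.5, 28.7, 31.2, 29.9, 32.0, 30.1]

statistic, p_value = stats.levene(group_a, group_b, center="median")
print(f"Levene's test: W = {statistic:.3f}, p = {p_value:.3f}")
# p > 0.05: no evidence against equal variances (homogeneity holds)
# p <= 0.05: variances differ; consider Welch's t-test or a non-parametric test
```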
Having independent data means that your observations do not influence one another: there is no relationship between observations. This is controlled via your study design, and you can check for independence using:
A Paired t-test is used to compare the results of an intervention/event/etc. after a period of time has passed. In other words, you use a Paired t-test to compare the same group of participants measured at two different time points.
When you have a single group of:
and you are observing the effect of an:
then you can use a Paired t-test.
Your independent variable needs to be your intervention/event/etc. This means that your two groups can be 'before' and 'after', for example.
Your dependent variable needs to be the thing you are measuring, and therefore needs to be continuous data, for example, interval or ratio data. If you have ordinal data, you should consider using a Wilcoxon Signed Rank Test instead.
Your data needs to be parametric (normally distributed, homogeneous and independent), with no significant outliers in the differences between the paired measurements.
In SPSS, lay out your data so that your two groups are two variables. Make sure that your data is paired, so that each participant's results are in the same row.
When you are ready to perform the test:
Your output will consist of two tables: the 'Paired Samples Statistics', which contains some descriptive statistics on your data, and the 'Paired Samples Test', which contains the results of your test.
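If you are working outside SPSS, a Paired t-test can be sketched in Python with scipy. The 'before' and 'after' values below are made up, and each position in the two lists corresponds to the same participant measured at two time points.

```python
# A minimal sketch of a Paired (dependent-samples) t-test.
# The before/after values are made up; index i in each list is the
# same participant measured at two time points.
from scipy import stats

before = [68.2, 70.1, 65.5, 72.3, 69.0, 71.4, 66.8, 73.0]
after  = [66.0, 68.5, 64.9, 70.2, 67.8, 69.1, 65.5, 71.6]

t_stat, p_value = stats.ttest_rel(before, after)
print(f"Paired t-test: t = {t_stat:.3f}, p = {p_value:.3f}")

# Non-parametric alternative if the differences are not normally distributed:
w_stat, w_p = stats.wilcoxon(before, after)
print(f"Wilcoxon signed-rank: W = {w_stat:.3f}, p = {w_p:.3f}")
```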
An Independent (or Unpaired) t-test is used to compare two population means. A study appropriate for an Independent t-test would involve two separate groups of people, where each participant belongs to exactly one group (not both, and not neither!).
As an example, an Independent t-test could be used to compare the percentage of students scoring grade 6 or above in GCSE Physics between schools in Yorkshire and Leicester: the measurements are the percentages of students, the participants are the schools, and the two groups are Yorkshire and Leicester.
You can perform an Independent Samples t-test when you have two distinct groups of
and you are observing the difference between them. This means that your independent variable needs to be your two separate groups.
Your dependent variable, like in other t-tests, needs to be the thing you are measuring, and therefore needs to be continuous data, for example, interval or ratio data. If you have ordinal data, you should consider using a Mann-Whitney U Test instead.
Your data needs to be parametric (normally distributed, homogeneous and independent), with no significant outliers in either group.
In SPSS, lay out your data so that your independent variable (your groups) is one variable, and your dependent variable (the thing you are measuring) is another variable.
When you are ready to perform the test:
Your output will consist of two tables: 'Group Statistics', which contains descriptive statistics about each group you have, and 'Independent Samples Test', which contains the output of the test itself.
The results for both the Independent t-test and Welch's t-test are laid out in this table; if you are assuming equal variances, you only need to read the top row.
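As an illustration outside SPSS, an Independent Samples (Student's) t-test can be sketched in Python with scipy. The two groups below echo the Yorkshire/Leicester example, but the numbers are made up purely to show the call.

```python
# A minimal sketch of an Independent Samples (Student's) t-test comparing
# two separate groups. The values are made up purely for illustration.
from scipy import stats

yorkshire = [54.2, 61.5, 58.3, 49.8, 63.0, 57.1, 60.4]
leicester = [48.9, 52.3, 50.1, 55.6, 47.2, 53.8, 51.0]

# equal_var=True gives the classic Student's t-test (equal variances assumed)
t_stat, p_value = stats.ttest_ind(yorkshire, leicester, equal_var=True)
print(f"Independent t-test: t = {t_stat:.3f}, p = {p_value:.3f}")

# Non-parametric alternative for ordinal or non-normal data:
u_stat, u_p = stats.mannwhitneyu(yorkshire, leicester)
print(f"Mann-Whitney U: U = {u_stat:.3f}, p = {u_p:.3f}")
```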
The t-tests discussed so far have been Student's t-tests, which assume equal variance (equal standard deviation) between groups. Welch's t-test is an equivalent to the Independent Samples t-test which does not make this assumption.
Welch's t-test is often argued to be a better choice for independent samples than Student's Independent Samples t-test, because it does not rely on this assumption, which more closely matches real-life data. Indeed, Welch's t-test is the default Independent t-test in R.
When you wish to use an Independent t-test but your sample sizes and variances are unequal between your groups, use a Welch's t-test instead.
In other words, use a Welch's t-test when your Levene's test comes back significant.
SPSS will compute a Welch's t-test at the same time as the Independent t-test, so the only difference is in which row of the 'Independent Samples Test' table you read: if you are assuming equal variances, read the top row and ignore the bottom row; if you are not assuming equal variances (because Levene's test is significant), you need Welch's t-test and should read the bottom row instead.
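Outside SPSS, Welch's t-test can be sketched in Python with the same scipy function as the Independent t-test, simply by not assuming equal variances. The values below are made up purely for illustration.

```python
# A minimal sketch of Welch's t-test: the same scipy function as the
# Independent t-test, but with equal variances NOT assumed.
# The group values are made up purely for illustration.
from scipy import stats

group_a = [12.4, 15.1, 13.8, 14.6, 12.9, 16.0]
group_b = [20.3, 25.7, 18.9, 27.4, 22.1, 30.5]   # larger spread than group_a

t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"Welch's t-test: t = {t_stat:.3f}, p = {p_value:.3f}")
```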
A One-Sample t-test is another example of a non-Student t-test, used to compare the mean of a single group to one specific value. This value may have come from prior research, or it may be a hypothesised value.
The groups used in this test therefore are:
You can use a One-Sample t-test when you only have the measurements of one group, and you wish to compare this group's mean to an established (or hypothesised) mean value.
In SPSS, lay out your data so that your group measurements fall under one variable.
When you are ready to perform the test:
Your output will consist of two tables: the 'One Sample Statistics', which contains some descriptive statistics on your group data, and the 'One Sample Test', which contains the results of your test.
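If you are working outside SPSS, a One-Sample t-test can be sketched in Python with scipy. The measurements and the reference value of 50 below are made up purely for illustration.

```python
# A minimal sketch of a One-Sample t-test comparing a group mean to a
# fixed reference value. The measurements and the reference value of 50
# are made up purely for illustration.
from scipy import stats

measurements = [51.2, 49.8, 52.6, 50.9, 48.7, 53.1, 50.4, 51.8]
reference_value = 50

t_stat, p_value = stats.ttest_1samp(measurements, popmean=reference_value)
print(f"One-sample t-test: t = {t_stat:.3f}, p = {p_value:.3f}")
```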