Independent Samples: This design compares groups that are entirely separate from one another, such as a control group versus an experimental group. Tests for this design (e.g., the Independent t-test) assume no relationship between the individuals in the two groups.
Related (Paired) Samples: This occurs when the same subjects are measured twice (e.g., pre-test and post-test) or when subjects are matched based on specific characteristics. Tests like the Paired t-test or Wilcoxon Signed-Rank test account for the correlation between these measurements.
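The difference between the two designs is easy to see numerically. The sketch below uses made-up pre/post scores for five subjects and computes both t statistics by hand: the independent version ignores the pairing, while the paired version works on the within-subject differences, removing between-subject variability.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical pre/post scores for the same 5 subjects (illustrative data)
pre  = [10, 12, 9, 11, 13]
post = [12, 14, 10, 13, 15]

# Independent t statistic (ignores pairing; pooled-variance form)
n1, n2 = len(pre), len(post)
sp = sqrt(((n1 - 1) * stdev(pre) ** 2 + (n2 - 1) * stdev(post) ** 2) / (n1 + n2 - 2))
t_ind = (mean(post) - mean(pre)) / (sp * sqrt(1 / n1 + 1 / n2))

# Paired t statistic (works on within-subject differences)
diffs = [b - a for a, b in zip(pre, post)]
t_paired = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Because each subject improves by roughly the same amount, the paired
# statistic (~9.0) is far larger than the independent one (~1.62)
print(round(t_ind, 3), round(t_paired, 3))
```

Treating paired data as independent here would waste the correlation between measurements and badly understate the evidence for a change.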
Number of Groups: The choice between a t-test and an ANOVA depends on whether you are comparing exactly two groups or more than two groups. ANOVA is used for three or more groups to prevent the inflation of the family-wise error rate.
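A minimal one-way ANOVA sketch (with illustrative data) shows what the single omnibus test actually computes: an F statistic comparing between-group variability to within-group variability.

```python
from statistics import mean

# Three hypothetical groups (illustrative data)
groups = [[4, 5, 6], [6, 7, 8], [9, 10, 11]]

grand = mean(x for g in groups for x in g)
k = len(groups)                      # number of groups
n = sum(len(g) for g in groups)      # total observations

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

# F = mean square between / mean square within
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f_stat, 3))
```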
| Feature | Parametric Tests | Non-Parametric Tests |
|---|---|---|
| Data Type | Interval or Ratio | Nominal or Ordinal |
| Distribution | Assumes Normality | Distribution-free |
| Variance | Assumes Homogeneity | No variance assumption |
| Central Tendency | Compares Means | Compares Medians/Ranks |
| Power | Higher (more sensitive) | Lower (less sensitive) |
Statistical Power: Parametric tests are generally more powerful, meaning they are more likely to detect a significant effect if one truly exists. However, using them when assumptions are violated can lead to misleading results.
Robustness: Non-parametric tests are more robust against outliers and extreme scores because they use the rank-order of data rather than the raw values.
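Why rank-based methods shrug off outliers is visible in a tiny example: replacing one score with an extreme value drags the mean far away, while the median and the rank ordering are unchanged (so any test built on ranks sees identical data).

```python
from statistics import mean, median

scores = [10, 11, 12, 13, 14]
with_outlier = [10, 11, 12, 13, 140]  # one extreme score

def ranks(data):
    # Rank of each value within its sample (1 = smallest); no ties here
    order = sorted(data)
    return [order.index(x) + 1 for x in data]

print(mean(scores), mean(with_outlier))      # 12 vs 37.2 -- mean is dragged up
print(median(scores), median(with_outlier))  # 12 vs 12  -- median unmoved
print(ranks(scores), ranks(with_outlier))    # identical rank patterns
```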
Identify the Variable Type: Always start by determining if the dependent variable is categorical (nominal/ordinal) or continuous (interval/ratio). This immediately narrows the field of possible tests.
Check for 'Relatedness': Look for keywords like 'repeated measures', 'matched pairs', or 'before and after' to identify related samples. If these are absent, assume independent samples.
Verify Assumptions: If the problem mentions 'skewed data' or 'small sample size', lean toward non-parametric tests. If it mentions 'normally distributed' or 'equal variances', use parametric tests.
Count the Groups: Distinguish between comparing two groups (t-tests) and comparing three or more groups (ANOVA or Kruskal-Wallis).
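The four steps above can be condensed into a decision helper. This is a hypothetical sketch (the function name and boolean flags are illustrative, and it adds Friedman's test for the related non-parametric multi-group case, which the notes above do not cover):

```python
def choose_test(dv_continuous: bool, related: bool, n_groups: int, normalish: bool) -> str:
    """Illustrative mapping from the decision steps to a test name."""
    # Step 1 + 3: ordinal/categorical DV or violated assumptions -> non-parametric
    if not dv_continuous or not normalish:
        if n_groups > 2:
            return "Friedman" if related else "Kruskal-Wallis"
        return "Wilcoxon Signed-Rank" if related else "Mann-Whitney U"
    # Step 4: three or more groups -> ANOVA family
    if n_groups > 2:
        return "Repeated-Measures ANOVA" if related else "One-Way ANOVA"
    # Step 2: two groups, parametric
    return "Paired t-test" if related else "Independent t-test"

print(choose_test(dv_continuous=True, related=False, n_groups=2, normalish=True))
```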
Ignoring the Level of Measurement: A common error is performing a t-test on ordinal data (such as a 1-5 rating scale). While common in some fields, this technically violates the assumption that the intervals between adjacent values are equal.
Misinterpreting the Central Limit Theorem: Students often assume that any large sample is 'normal'. The theorem only guarantees that the sampling distribution of the mean becomes approximately normal as sample size grows; the underlying data may remain skewed, so analyses that depend on the shape of the raw distribution may still call for non-parametric methods.
Multiple T-tests: Running multiple t-tests to compare three groups instead of using one ANOVA increases the probability of a Type I Error (false positive).
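The inflation is easy to quantify: with k independent comparisons each run at alpha = .05, the probability of at least one false positive is 1 - (1 - alpha)^k. Three groups already require 3 pairwise t-tests, and four groups require 6.

```python
# Family-wise error rate across k independent tests at alpha = .05
alpha = 0.05
for k in (1, 3, 6):  # 3 groups -> 3 pairwise comparisons; 4 groups -> 6
    fwer = 1 - (1 - alpha) ** k
    print(k, round(fwer, 3))  # 1 -> 0.05, 3 -> 0.143, 6 -> 0.265
```

A single ANOVA holds the overall error rate at alpha, which is exactly why it is preferred over repeated t-tests.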