| Feature | Extraneous Variable | Confounding Variable |
|---|---|---|
| Definition | Any variable other than the IV that could affect the DV. | An uncontrolled variable that varies systematically with the IV. |
| Impact | Adds 'noise' to the data but may not ruin the experiment if controlled. | Provides an alternative explanation for the results, ruining internal validity. |
| Example | A slight noise outside the room (if it affects everyone). | One group being tested in the morning and the other at night. |
- **Identify the Design First:** Before discussing procedures, determine whether the study uses independent groups or repeated measures. If it is repeated measures, you must mention counterbalancing to address order effects.
- **Operationalisation is Key:** When describing procedures, always specify exactly how the IV was manipulated and how the DV was measured (e.g., 'number of words recalled' rather than just 'memory').
- **Check for Standardisation:** If an exam question asks how to improve a study's reliability, suggest standardising the instructions or the environment so the study can be replicated.
- **The Control-Validity Trade-off:** Students often assume more control is always better. However, extremely high control can lead to low ecological validity, meaning the results might not apply to real-world settings.
- **Demand Characteristics:** If a procedure is too obvious or artificial, participants may guess the aim of the study and change their behaviour, creating a bias that standardisation alone cannot fix.
- **Confusing Reliability with Validity:** A procedure can be perfectly reliable (giving the same result every time) yet completely invalid (measuring the wrong thing).
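The counterbalancing idea mentioned above can be sketched in code. This is a minimal illustration (the function name and structure are hypothetical, not from any standard library): alternating an AB / BA condition order across participants so that order effects are spread evenly across conditions rather than varying systematically with the IV.

```python
# Hypothetical sketch: ABBA-style counterbalancing for a
# repeated-measures design with two conditions.
def counterbalance(participant_ids, conditions=("A", "B")):
    """Alternate the condition order across participants so that
    practice and fatigue effects cancel out across the sample
    instead of confounding the IV."""
    orders = [list(conditions), list(reversed(conditions))]
    # Even-indexed participants get A-then-B, odd-indexed get B-then-A.
    return {pid: orders[i % 2] for i, pid in enumerate(participant_ids)}

assignments = counterbalance([1, 2, 3, 4])
# Participant 1 completes condition A first; participant 2 completes B first.
```

Half the sample experiences each order, so any order effect contributes equally to both conditions and cannot masquerade as an effect of the IV.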