Random sampling is used to select participants so that all individuals in a population have an equal chance of selection. This technique minimizes bias and produces a sample closer to the overall population distribution.
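The idea of equal selection chance can be sketched with Python's standard library; the population and sample size here are illustrative, not from any real study.

```python
import random

# Hypothetical population of 1,000 individuals, identified by number.
population = list(range(1000))

# random.sample draws without replacement, so every individual has an
# equal chance of selection and no one is picked twice.
sample = random.sample(population, k=50)

print(len(sample))       # 50 participants drawn
print(len(set(sample)))  # 50 unique individuals: no duplicates
```

Drawing without replacement matches how participants are usually recruited: each person can appear in the sample at most once.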
Use of control groups allows researchers to compare outcomes between exposed and unexposed groups, making it more likely that observed differences can be attributed to the risk factor rather than to unrelated variables.
Standardisation of procedures means each participant experiences the same conditions except for the independent variable, so differences in outcomes cannot be blamed on inconsistent procedures.
Repetition within studies involves collecting multiple data points from each condition to increase reliability. Replicating entire studies extends this reliability across research groups and contexts.
Random vs biased sampling: random sampling gives every individual an equal chance of selection, whereas biased sampling systematically over-represents certain groups; knowing the difference helps you judge whether findings generalise to the wider population.
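A small simulation shows why the distinction matters; the two age groups and their values are invented for illustration. A biased sample drawn from only one group misestimates the population mean, while a random sample tracks it closely.

```python
import random
import statistics

random.seed(2)

# Hypothetical population: half younger individuals (lower values),
# half older individuals (higher values).
young = [random.gauss(100, 10) for _ in range(5000)]
old = [random.gauss(140, 10) for _ in range(5000)]
population = young + old

true_mean = statistics.mean(population)

# Random sample: every individual has an equal chance of selection.
random_est = statistics.mean(random.sample(population, 200))

# Biased sample: only younger individuals can be selected.
biased_est = statistics.mean(random.sample(young, 200))

# The random sample's estimate sits much closer to the true mean.
print(abs(random_est - true_mean) < abs(biased_est - true_mean))
```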
Control variables vs controlled conditions: Control variables are factors actively kept constant, while controlled conditions refer to the overall environment kept consistent; distinguishing these helps ensure both environmental and physiological influences are addressed.
Reliability vs validity: Reliability concerns whether results are repeatable, while validity concerns whether the study measures what it intends to measure; recognising this distinction is crucial for interpreting study strength.
Internal vs external validity: Internal validity refers to whether results are trustworthy within the study, while external validity relates to how far results can be generalised; separating these concepts clarifies how conclusions should be applied.
Identify sample size and representativeness, as exam questions often test whether students recognise when a sample is too small or biased. Always check whether the sample reflects the intended population.
Look for controlled variables, since exams frequently include studies with missing or poorly controlled factors. Highlight any uncontrolled variable that may affect the dependent variable.
Avoid assuming causation, because exam questions regularly test misunderstanding of correlation. Use cautious language such as ‘associated with’ or ‘correlated with’, not ‘caused by’.
Check for replication or repeats, as reliability questions often require students to note whether the study was repeated enough times to confirm consistency.
Confusing correlation with causation leads students to incorrectly claim that a risk factor directly causes an outcome. This misconception ignores possible confounders or coincidental associations.
Assuming small samples can represent populations is a frequent error because small samples are easily skewed. Larger samples reduce random variability and produce more stable estimates.
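The effect of sample size on stability can be demonstrated directly; the "blood pressure" population below is a made-up example. Repeatedly drawing samples of different sizes shows that means from small samples scatter far more widely than means from large ones.

```python
import random
import statistics

random.seed(0)

# Hypothetical population: blood pressure readings, true mean about 120.
population = [random.gauss(120, 15) for _ in range(10_000)]

def spread_of_sample_means(n, trials=200):
    """Standard deviation of the sample mean across repeated draws of size n."""
    means = [statistics.mean(random.sample(population, n)) for _ in range(trials)]
    return statistics.stdev(means)

small = spread_of_sample_means(5)
large = spread_of_sample_means(500)

# Small samples produce far more variable estimates of the population mean.
print(small > large)
```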
Ignoring confounding variables results in overconfidence in the main relationship being studied. Without controlling for these variables, the study cannot isolate the effect of the risk factor.
Believing repetition and reproducibility are the same is another misconception; repetition means repeating measures within one study, while reproducibility requires independent researchers achieving similar results.
Links to epidemiology highlight that risk factor studies underpin public health recommendations by uncovering associations between behaviours and disease prevalence.
Connections to statistics are essential because hypotheses, significance tests, and confidence intervals determine whether apparent differences reflect true effects.
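A minimal sketch of a confidence-interval comparison, using invented measurements for an exposed and an unexposed group and a normal approximation (a real analysis would typically use a t-distribution and a formal test).

```python
import math
import statistics

# Hypothetical measurements from exposed and unexposed groups.
exposed = [5.1, 5.4, 5.8, 5.2, 5.9, 5.6, 5.3, 5.7]
unexposed = [4.6, 4.9, 4.4, 4.8, 4.5, 4.7, 4.3, 5.0]

def ci_95(data):
    """Approximate 95% confidence interval for the mean (normal approximation)."""
    m = statistics.mean(data)
    se = statistics.stdev(data) / math.sqrt(len(data))
    return (m - 1.96 * se, m + 1.96 * se)

lo_e, hi_e = ci_95(exposed)
lo_u, hi_u = ci_95(unexposed)

# Non-overlapping intervals suggest the difference reflects a true effect
# rather than chance variation.
print(lo_e > hi_u)  # True for these data
```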
Ethical considerations arise when studying harmful risk factors, requiring careful design to protect participants from unnecessary exposure.
Applications in evidence-based medicine rely on high-quality study designs to evaluate factors such as diet, lifestyle, or environmental exposures in guiding clinical advice.