Relative Risk is a measure used to compare the risk of an event occurring in one group (Group A) relative to another group (Group B).
Unlike absolute risk, which is a probability confined to the range 0 to 1, relative risk is a ratio of two probabilities and is not restricted to that range; it can be any non-negative value.
A relative risk of 1 suggests the risk is identical in both groups, while a value greater than 1 indicates a higher risk in the primary group being studied.
Formula: RR = P(event in Group A) / P(event in Group B)
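The formula above can be sketched as a small helper function; the counts used here (30 events in 200 trials for Group A, 10 in 200 for Group B) are hypothetical numbers chosen for illustration.

```python
# Sketch of a relative risk calculation from hypothetical group counts.
def relative_risk(events_a, total_a, events_b, total_b):
    """Ratio of the event probability in Group A to that in Group B."""
    risk_a = events_a / total_a  # absolute risk in Group A
    risk_b = events_b / total_b  # absolute risk in Group B
    return risk_a / risk_b

# Hypothetical data: 30 events in 200 trials (A) vs 10 events in 200 trials (B)
rr = relative_risk(30, 200, 10, 200)
print(rr)  # 3.0 -> the event is 3 times as likely in Group A
```

Note that the two denominators are the total trials in each group, not the event counts, which ties in with the pitfalls discussed below.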
Bias in a statistical context refers to a systematic error that results in an unfair or unrepresentative estimate of a population parameter.
An experiment or a tool (like a die or a coin) is considered biased if certain outcomes are systematically favored over others, deviating from theoretical probability.
Bias often arises from sampling errors, where the group studied does not accurately reflect the diversity or characteristics of the larger population.
Identifying bias requires comparing the experimental probability (relative frequency) against the theoretical probability expected in a fair scenario.
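The comparison described above can be sketched with a simulated die: we estimate the relative frequency of one face over many rolls and compare it against the theoretical probability of 1/6. The seed and trial count are arbitrary choices for the sketch.

```python
import random

random.seed(42)

# Simulate a fair six-sided die and compare the experimental
# probability (relative frequency) of rolling a 6 against the
# theoretical probability of 1/6.
rolls = [random.randint(1, 6) for _ in range(10_000)]
experimental = rolls.count(6) / len(rolls)
theoretical = 1 / 6

print(f"experimental: {experimental:.4f}, theoretical: {theoretical:.4f}")
# With a fair die and many trials, the two values should be close;
# a large, persistent gap would suggest the die is biased.
```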
| Feature | Absolute Risk | Relative Risk |
|---|---|---|
| Definition | Probability of an event | Ratio of two probabilities |
| Range | 0 to 1 | 0 to ∞ |
| Units | Probability/Percentage | Dimensionless Ratio |
| Purpose | Measures individual likelihood | Measures comparative likelihood |
Check the Denominator: When calculating absolute risk, ensure the denominator is the total number of trials, not just the number of successful outcomes.
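The denominator check can be made concrete with hypothetical counts: dividing by only the non-event trials gives the odds, not the risk.

```python
# Sketch of the denominator check: absolute risk divides by total trials,
# not by the count of trials without the event.
events = 12        # hypothetical number of trials where the event occurred
non_events = 88    # hypothetical number of trials without the event
total_trials = events + non_events

correct_risk = events / total_trials  # 12 / 100 = 0.12 (absolute risk)
wrong_ratio = events / non_events     # 12 / 88 ≈ 0.136 (this is the odds)
print(correct_risk, round(wrong_ratio, 3))
```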
Direction of Comparison: In relative risk, always identify which group is the 'numerator' (the group being compared) and which is the 'base' (the reference group).
Interpreting 'Times More Likely': If a relative risk is 3, the correct interpretation is that the event is 3 times as likely to happen in Group A as in Group B.
Sample Size Awareness: Small sample sizes can make results look biased purely through random chance rather than a systematic flaw, so apparent bias should always be judged against the number of trials.
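The sample-size point above can be sketched with a simulated fair coin: small samples often stray far from 50% heads by chance alone, while large samples settle close to 0.5. The seed and sample sizes are arbitrary choices for the sketch.

```python
import random

random.seed(1)

# Fraction of heads in n flips of a simulated fair coin.
def heads_fraction(n):
    return sum(random.random() < 0.5 for _ in range(n)) / n

small = heads_fraction(10)       # can easily be 0.2 or 0.8 by chance
large = heads_fraction(100_000)  # settles very close to 0.5
print(small, round(large, 3))
# A lopsided result from 10 flips is not evidence of bias; the same
# deviation sustained over 100,000 flips would be.
```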