Core idea: A Poisson random variable can be approximated by a normal random variable when λ is sufficiently large. This replaces many separate discrete probability terms with one continuous probability calculation. It is most useful when direct Poisson computation is slow or when inverse normal tools are needed.
Parameter mapping: For a Poisson model, the mean and variance are both λ, so the approximating normal uses μ = λ and σ² = λ (so σ = √λ). This mapping preserves the central location and spread of the original count process. It is the mathematical reason the approximation stays close for large means.
Continuity correction: Because Poisson outcomes are integers but normal outcomes are continuous, event boundaries must be shifted by 0.5. Without this shift, the approximated region does not match the integer counts being requested. The correction aligns each integer with its rounding interval on the continuous scale.
Model to memorize: if X ~ Po(λ), then X ≈ N(λ, λ) for large λ.
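The parameter mapping above can be checked numerically. The sketch below (illustrative numbers λ = 30 and k = 25; standard library only) compares an exact Poisson CDF, computed by summing pmf terms, against the continuity-corrected normal approximation N(λ, λ):

```python
import math

def poisson_cdf(k, lam):
    # Sum of exact Poisson pmf terms P(X = i) for i = 0..k,
    # built iteratively: P(X = i + 1) = P(X = i) * lam / (i + 1)
    total, term = 0.0, math.exp(-lam)
    for i in range(k + 1):
        total += term
        term *= lam / (i + 1)
    return total

def normal_cdf(x, mu, sigma):
    # Normal CDF via the error function
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

lam = 30
exact = poisson_cdf(25, lam)
# Continuity correction: P(X <= 25) maps to the continuous region Y < 25.5
approx = normal_cdf(25.5, lam, math.sqrt(lam))
print(exact, approx)
```

For λ = 30 the two values agree to within about 0.01, illustrating why the approximation is considered usable at large means.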
Key conversions (with Y ~ N(λ, λ)):
- P(X ≤ k) ≈ P(Y < k + 0.5)
- P(X < k) ≈ P(Y < k − 0.5)
- P(X ≥ k) ≈ P(Y > k − 0.5)
- P(X > k) ≈ P(Y > k + 0.5)
- P(X = k) ≈ P(k − 0.5 < Y < k + 0.5)
[Figure: Discrete Poisson bars overlaid with a normal curve and a shaded continuity-corrected interval from k − 0.5 to k + 0.5.]
| Feature | Exact Poisson | Normal Approximation |
|---|---|---|
| Data type | Discrete counts | Continuous model |
| Accuracy | Exact | Approximate |
| Speed for ranges | Can be slower | Usually faster |
| Inverse use | Often limited | Widely available |
With correction vs without correction: A normal approximation without continuity correction often misaligns boundary inclusion and can bias probabilities. Adding or subtracting 0.5 maps integer events to the correct continuous region. This distinction is especially important for one-sided inequalities and single-value events.
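The single-value case makes the contrast starkest. With illustrative numbers λ = 20 and k = 20 (assumptions for this sketch), the corrected approximation tracks the exact pmf closely, while the uncorrected version assigns a zero-width interval and therefore probability zero:

```python
import math

def normal_cdf(x, mu, sigma):
    # Normal CDF via the error function
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

lam, k = 20, 20
sigma = math.sqrt(lam)

# Exact Poisson pmf P(X = k)
exact_pmf = math.exp(-lam) * lam**k / math.factorial(k)

# With correction: the integer k maps to the interval (k - 0.5, k + 0.5)
with_cc = normal_cdf(k + 0.5, lam, sigma) - normal_cdf(k - 0.5, lam, sigma)

# Without correction, a single integer is a zero-width interval, so the
# continuous model assigns it probability zero
without_cc = normal_cdf(k, lam, sigma) - normal_cdf(k, lam, sigma)
```

Here `exact_pmf` and `with_cc` differ by only a few ten-thousandths, while `without_cc` is exactly 0, which is why skipping the correction loses accuracy marks on single-value events.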
Poisson-to-normal vs binomial-to-normal: Both are discrete-to-continuous approximations that require continuity correction, but the condition signals differ. For Poisson, focus on whether λ is large; for binomial, suitability depends on shape conditions tied to n and p. Choosing the wrong approximation can produce systematic error even when arithmetic is correct.
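The two condition checks can be written as simple predicates. The thresholds below are common textbook rules of thumb, not universal constants; conventions vary, so treat the cutoffs as assumptions:

```python
def poisson_normal_ok(lam):
    # Common rule of thumb: the mean should be large (threshold assumed here)
    return lam > 10

def binomial_normal_ok(n, p):
    # Common shape conditions on n and p: both expected counts comfortably large
    return n * p > 5 and n * (1 - p) > 5
```

For example, Po(30) passes the Poisson check, but B(10, 0.01) fails the binomial check because the expected success count is far too small for a symmetric normal shape.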
Start with model declaration: State the original and approximating distributions explicitly, including parameters and what the variable counts. This prevents hidden parameter errors and clarifies method choice. Examiners reward clear setup because it shows conceptual control, not just calculator use.
Convert inequality before standardizing: Apply continuity correction to the raw count boundary first, then convert to a z-score. If you standardize first and then adjust, the transformed boundary is wrong. This order is a frequent discriminator between full and partial credit.
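The order of operations matters because the 0.5 shift lives on the count scale, not the z scale. A short sketch with illustrative numbers (λ = 25, boundary k = 30 for P(X ≤ 30)):

```python
import math

lam, k = 25, 30
sigma = math.sqrt(lam)  # sqrt(25) = 5

# Correct: shift the count boundary first, then standardize
z_correct = (k + 0.5 - lam) / sigma   # (30.5 - 25) / 5 = 1.1

# Wrong: standardize first, then bolt 0.5 onto the z-score
z_wrong = (k - lam) / sigma + 0.5     # (30 - 25) / 5 + 0.5 = 1.5
```

The wrong order effectively adds 0.5σ instead of 0.5 on the count scale, shifting the boundary by 2.5 counts here rather than 0.5.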
Use a reasonableness check: Verify that probabilities lie in [0, 1], tails decrease for distant thresholds, and complementary events sum correctly. Quick checks like P(X ≤ a) + P(X > a) = 1 can reveal sign or bound mistakes. In exam conditions, this catches many avoidable slips in under a minute.
Misconception: continuity correction is optional: Some learners skip correction because calculators can evaluate normal probabilities directly. The issue is not computation but event mismatch between discrete and continuous models. Skipping correction can shift answers enough to lose accuracy marks.
Pitfall: wrong inequality direction after correction: Students often add 0.5 when they should subtract, or vice versa. The reliable rule is to decide whether the endpoint integer is included and shift accordingly. Thinking in terms of the nearest included integer boundary avoids memorization errors.
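The "included endpoint" rule can be captured as a small lookup, which is easier to check than a memorized sign convention. This helper is hypothetical, written for this note, and follows the standard convention that the shift moves the boundary away from the included integers:

```python
def corrected_boundary(op, k):
    # Continuity-corrected boundary for the event (X op k):
    #   X <= k  includes k        -> boundary k + 0.5
    #   X <  k  means X <= k - 1  -> boundary k - 0.5
    #   X >= k  includes k        -> boundary k - 0.5
    #   X >  k  means X >= k + 1  -> boundary k + 0.5
    return {"<=": k + 0.5, "<": k - 0.5, ">=": k - 0.5, ">": k + 0.5}[op]
```

For example, P(X ≤ 10) uses boundary 10.5 while P(X < 10) uses 9.5, because the latter excludes 10 itself.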
Pitfall: treating approximation as exact equality: The normal result should be interpreted as close, not identical, to the Poisson probability. Precision expectations depend on how large λ is and whether the event lies in the center or tail. Strong answers acknowledge approximation status when reporting conclusions.