Module testing (also known as unit testing) involves examining each functional component independently to confirm that its internal logic operates correctly. This technique isolates faults so they do not propagate into larger system interactions.
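A minimal sketch of module testing in Python's `unittest` style. The module under test, `percentage_to_grade`, is a hypothetical example invented for illustration, not taken from the notes above:

```python
import unittest

def percentage_to_grade(score):
    """Module under test (hypothetical): map a 0-100 score to pass/fail."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

class TestGradeModule(unittest.TestCase):
    """Tests the module in isolation, so faults cannot hide in wider interactions."""

    def test_typical_pass(self):
        self.assertEqual(percentage_to_grade(75), "pass")

    def test_typical_fail(self):
        self.assertEqual(percentage_to_grade(30), "fail")

    def test_out_of_range_rejected(self):
        with self.assertRaises(ValueError):
            percentage_to_grade(101)

# Run the suite programmatically so the script continues afterwards.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestGradeModule)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the module is exercised on its own, any failure here points directly at its internal logic rather than at another part of the system.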
Integration testing evaluates how modules operate when combined, ensuring data flows smoothly from one part of the system to another. This step detects interface errors that do not appear during isolated testing.
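The idea can be sketched with two hypothetical modules whose output feeds into each other; the function names (`parse_amount`, `apply_discount`) are invented for this example:

```python
def parse_amount(text):
    """Module A (hypothetical): convert "pounds.pence" text to whole pence."""
    pounds, pence = text.split(".")
    return int(pounds) * 100 + int(pence)

def apply_discount(amount_pence, percent):
    """Module B (hypothetical): apply a percentage discount, rounding down."""
    return amount_pence - (amount_pence * percent) // 100

def discounted_total(text, percent):
    """Integration point: output of module A flows into module B."""
    return apply_discount(parse_amount(text), percent)

# Each module passes its own unit tests...
assert parse_amount("10.50") == 1050
assert apply_discount(1050, 10) == 945

# ...but only the integration test checks the data flow across the interface.
assert discounted_total("10.50", 10) == 945
```

An interface error (say, module A returning pounds while module B expects pence) would pass both unit tests yet fail the final integration check, which is exactly the class of fault integration testing targets.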
System testing examines the complete system in an environment simulating real operational conditions. It validates that full workflows, user interactions, and data outputs behave correctly.
A test plan provides a structured, repeatable way to test, specifying test data, expected results, actual results, and actions taken. This increases consistency and makes fixes traceable.
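A test plan can be expressed as data and executed row by row. The scenario below (validating exam marks in the range 0–100) and all field names are illustrative assumptions:

```python
def is_valid_mark(mark):
    """System rule assumed for illustration: marks must be 0-100 inclusive."""
    return 0 <= mark <= 100

test_plan = [
    # (description,      test data, expected result)
    ("normal value",     55,        True),
    ("lower boundary",   0,         True),
    ("upper boundary",   100,       True),
    ("just below range", -1,        False),
    ("just above range", 101,       False),
]

results = []
for description, data, expected in test_plan:
    actual = is_valid_mark(data)                  # record the actual result
    action = "none" if actual == expected else "log defect and retest"
    results.append((description, data, expected, actual, action))
    print(f"{description:18} data={data:>4} expected={expected} actual={actual} action={action}")
```

Recording expected and actual results side by side is what makes the plan auditable: anyone re-running it can see which rows passed and what action each failure triggered.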
| Type | Description | Purpose |
|---|---|---|
| Normal | Works within expected ranges | Confirms routine behaviour |
| Extreme | Sits at input limits | Ensures boundary correctness |
| Abnormal | Falls outside allowed ranges | Confirms error handling |
| Live | Real operational data | Confirms real‑world suitability |
Normal vs. extreme data differ in that normal data represents typical, valid cases, whereas extreme data is still valid but sits at the boundaries of the acceptable range. This distinction ensures the system handles both routine and edge conditions.
Abnormal vs. live data differ because abnormal data intentionally breaks the rules to test rejection, whereas live data confirms reliability using authentic operational records.
Always identify the purpose of each test data type, since questions often ask when each should be applied. Examiners typically check whether students understand the difference between valid, boundary, and invalid inputs.
Use precise terminology, such as module testing, integration testing, normal data, and abnormal data, because vague descriptions lose marks. Examiners reward clarity and correct use of technical vocabulary.
Relate expected outcomes to system rules, especially when constructing test tables. Students should explain why each input leads to an expected output rather than simply listing values.
Justify test strategies by linking them to risk reduction, reliability, or requirement verification. Strong exam answers show understanding of why a method is chosen, not just what it is.
Confusing extreme and abnormal data is a common mistake, as extreme data is still valid but at the limits, while abnormal data must be rejected. Mislabeling these types often leads to incorrect test plans.
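The distinction can be made concrete with a hypothetical validation rule (an age field accepting integers 0–120, chosen purely for illustration):

```python
def validate_age(age):
    """Assumed rule for this sketch: accept integers from 0 to 120 only."""
    if not isinstance(age, int) or not 0 <= age <= 120:
        raise ValueError("age must be an integer from 0 to 120")
    return age

# Extreme data: valid values at the limits -- the system must ACCEPT them.
assert validate_age(0) == 0
assert validate_age(120) == 120

# Abnormal data: breaks the rules -- the system must REJECT it.
for bad in (-1, 121, "twelve"):
    try:
        validate_age(bad)
        raise AssertionError(f"{bad!r} should have been rejected")
    except ValueError:
        pass  # rejection is the expected outcome here
```

A test plan that expects 120 to be rejected, or expects 121 to be accepted, has mislabelled its data types and will report false failures or miss real defects.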
Assuming module testing ensures whole-system correctness overlooks integration issues. Even if modules work individually, interactions may still fail, so full system testing remains essential.
Failing to record expected outcomes reduces the usefulness of a test plan because results cannot be compared meaningfully. Clear expected results ensure that testers can evaluate correctness reliably.
Believing live data is always safe to use is incorrect; live data can introduce privacy or compatibility concerns. It must be handled carefully to avoid accidental data leakage.
Testing relates to system design because strong design decisions, like modularity, simplify testability. Good structure reduces complexity in both writing and executing tests.
Testing supports implementation by ensuring the final system is robust before deployment. Implementation failures often trace back to insufficient testing rather than design flaws.
Test documentation connects to evaluation, as recorded test results provide measurable performance evidence. This helps stakeholders assess appropriateness and reliability.
Testing extends into maintenance, since updates or patches require re‑testing to ensure no new issues have been introduced. Testing therefore continues beyond initial deployment.
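Re-testing after a change is often automated as a regression suite: the same recorded checks are rerun against the patched version. The functions and values below are a hypothetical sketch, not a prescribed method:

```python
def discount_v1(price, percent):
    """Original (hypothetical) implementation."""
    return price - price * percent / 100

def discount_v2(price, percent):
    """Patched version: rounds to 2 decimal places for currency display."""
    return round(price - price * percent / 100, 2)

# Checks recorded while v1 was in service; rerun them against the patch.
regression_checks = [
    # (price, percent, expected)
    (100.0, 10, 90.0),
    (80.0,  25, 60.0),
    (50.0,   0, 50.0),
]

for price, percent, expected in regression_checks:
    assert discount_v2(price, percent) == expected, f"regression at {price}, {percent}"
```

If a check that passed before the patch now fails, the update has introduced a regression, which is precisely what re-testing during maintenance is meant to catch.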