Determinism and reproducibility: Good algorithmic steps are specific enough that execution does not depend on personal intuition. Given the same input, correctly following the rules must therefore yield the same output. Reproducibility is the foundation of automation and of formal marking in exams.
Branching logic: Many algorithms rely on conditional tests such as comparisons to choose different paths. A condition partitions the process into mutually exclusive outcomes, which controls both correctness and efficiency. Designing the condition clearly prevents ambiguous execution.
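A minimal sketch of this idea: one named condition partitions every input into exactly two mutually exclusive paths. The function and its parity test are illustrative, not taken from the source.

```python
def classify(n):
    """Branch on a single condition: the test splits all inputs
    into exactly two mutually exclusive paths."""
    if n % 2 == 0:       # condition true -> "even" path
        return "even"
    else:                # condition false -> "odd" path
        return "odd"

print(classify(10))  # even
print(classify(7))   # odd
```

Because the two branches are exhaustive and mutually exclusive, every input takes exactly one path, so the execution is never ambiguous.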
Efficiency-accuracy trade-off: Some algorithms are exact, while others are heuristic and prioritize speed over guaranteed optimality. A common decision lens is to compare utility rather than chasing perfection in every context.
Decision idea: choose a method that maximizes U = αQ − βC, where Q is answer quality, C is computational cost, and the weights α and β reflect priorities.
Procedure vs representation: The algorithm is the logical procedure; text and flowcharts are two representations of the same logic. Confusing representation with method can cause students to think different diagrams imply different algorithms. In reality, correctness depends on control flow and variable updates, not drawing style.
Exact vs heuristic algorithms: Exact methods guarantee an optimal or fully correct result under their assumptions, while heuristic methods target good-enough results faster. Heuristics are justified when time, scale, or uncertainty makes exact search impractical. Method choice should match stakes and resource limits.
Comparison table: The table summarizes when each approach is preferable and what risks to monitor.
| Distinction | Exact algorithm | Heuristic algorithm |
|---|---|---|
| Solution goal | Exact correctness | Satisficing quality |
| Typical cost | Higher time or computation | Lower time, lower guarantee |
| Best context | Small or high-stakes tasks | Large-scale or time-critical tasks |
| Main risk | Slow execution | Suboptimal answer |
Use this comparison before solving to justify algorithm selection explicitly.
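The exact-versus-heuristic contrast can be made concrete with a small coin-change problem. The coin set (4, 3, 1) is a hypothetical example chosen because the greedy heuristic is suboptimal on it, while exhaustive search finds the true minimum.

```python
from functools import lru_cache

COINS = (4, 3, 1)  # hypothetical coin set where greedy is suboptimal

def greedy_coins(amount):
    """Heuristic: always take the largest coin that fits.
    Fast, but carries no optimality guarantee."""
    count = 0
    for c in COINS:
        count += amount // c
        amount %= c
    return count

@lru_cache(maxsize=None)
def exact_coins(amount):
    """Exact: exhaustive recursive search; guaranteed minimal count,
    at a higher computational cost."""
    if amount == 0:
        return 0
    return 1 + min(exact_coins(amount - c) for c in COINS if c <= amount)

print(greedy_coins(6))  # 3 coins (4 + 1 + 1)
print(exact_coins(6))   # 2 coins (3 + 3)
```

For amount 6, the heuristic settles for a satisficing answer (3 coins) while the exact search pays more computation for the optimal one (2 coins), matching the table's trade-off row by row.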
Run in robot mode: Treat every instruction as mandatory, even if the final result seems obvious by inspection. Examiners award method marks for process fidelity, not intuition alone. This strategy is especially important in multi-step loops and branching tasks.
Track state systematically: Use a clean table for variables that change and leave unchanged entries intentionally blank or repeated as needed. This reveals whether your updates follow the algorithm's control flow and prevents accidental carry-over errors. Clear state tracking also makes self-checking faster at the end.
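A trace table can be built programmatically to check a hand trace against the algorithm's actual control flow. The accumulation loop below is a generic illustration, not an algorithm from the source.

```python
def trace_sum(n):
    """Sum 1..n while recording every variable's value at each step,
    mirroring a hand-written trace table."""
    total, i = 0, 1
    table = [("step", "i", "total")]   # header row
    while i <= n:
        total += i
        table.append((len(table), i, total))  # one row per iteration
        i += 1
    return total, table

total, table = trace_sum(4)
for row in table:
    print(row)
print("output:", total)  # output: 10
```

Comparing such a printed table with your own hand trace, row by row, exposes exactly where a carried-over or skipped update occurred.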
Always close the algorithm: State explicitly why execution stops, then present the output line as requested. This proves both termination and interpretation of results, which are separate skills in assessment. A complete ending statement often protects marks even if one intermediate value is wrong.
Mistaking intermediate values for final output: Students often stop at the last computed number without checking whether an explicit output instruction exists. This fails because many algorithms separate computation from reporting. Always distinguish the working state from the declared answer.
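The separation of working state from declared answer can be shown directly. The averaging routine is a hypothetical example: the loop total is intermediate state, and only the explicit output instruction produces the reported answer.

```python
def average_algorithm(values):
    """Computation and reporting are separate steps: the last computed
    value is working state, not automatically the declared answer."""
    total = 0
    for v in values:
        total = total + v          # intermediate working state
    average = total / len(values)  # last computed value
    return f"OUTPUT: {average}"    # explicit output instruction

print(average_algorithm([2, 4, 6]))  # OUTPUT: 4.0
```

Stopping at `total` (12) instead of executing the output instruction would be exactly the error this paragraph describes.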
Branch inversion errors: A frequent mistake is following the wrong branch after a decision test, especially when conditions are phrased with inequalities. One branch error propagates through all later steps and can invalidate the whole trace. Prevent this by marking each decision outcome before moving forward.
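One way to mark each decision outcome before moving forward is to evaluate and name the inequality first, then act on the named result. The threshold rule below is hypothetical and only illustrates the habit.

```python
def decide(x):
    """Evaluate the inequality once and record its outcome explicitly,
    so the trace shows which branch was taken before any update happens."""
    took_upper = x >= 10                 # name the test result first
    outcome = "x >= 10" if took_upper else "x < 10"
    result = x - 10 if took_upper else x + 3
    return outcome, result

print(decide(12))  # ('x >= 10', 2)
print(decide(7))   # ('x < 10', 10)
```

Writing the recorded outcome into the trace before computing the update makes an inverted branch immediately visible instead of silently corrupting every later step.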
Premature stopping: Seeing a plausible pattern can tempt early termination before the formal completion condition is met. Algorithms are validated by their stopping rule, not by visual guesswork. Continue until the defined criterion confirms completion.
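The stopping-rule discipline can be sketched with a Newton-style square-root iteration (a standard illustration, not an algorithm from the source): the loop continues until the formal tolerance criterion holds, not until the digits merely look settled.

```python
def newton_sqrt(n, tol=1e-9):
    """Iterate x <- (x + n/x)/2, stopping only when the formal
    criterion |x*x - n| < tol is met, not on visual guesswork."""
    x = n
    while abs(x * x - n) >= tol:   # explicit, defined stopping rule
        x = (x + n / x) / 2
    return x

print(round(newton_sqrt(2.0), 6))  # 1.414214
```

After two or three iterations the value already looks plausible, but only the tolerance test certifies that the algorithm has actually completed.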