Efficiency vs Ease of Use: Efficiency focuses on performance metrics such as execution time or cost, while ease of use centres on how comfortably users interact with the system; both are vital but measure different aspects of success.
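To make the distinction concrete, efficiency can be quantified directly. Below is a minimal Python sketch of measuring average execution time with `time.perf_counter`; the workload (`sorted` on a list) is only a hypothetical stand-in for a real system task, and the helper name `measure_runtime` is invented for illustration.

```python
import time

def measure_runtime(func, *args, repeats=5):
    """Return the average wall-clock time in seconds over several runs."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        func(*args)
        timings.append(time.perf_counter() - start)
    return sum(timings) / repeats

# Hypothetical workload standing in for a system task being evaluated.
avg = measure_runtime(sorted, list(range(10_000)))
print(f"Average runtime: {avg:.6f} s")
```

Note that ease of use cannot be measured this way; it needs qualitative methods such as questionnaires or observation, which is exactly why the two criteria are assessed separately.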
Appropriateness vs Requirement Matching: Appropriateness evaluates suitability within context (e.g., suitability for different departments), whereas requirement matching checks whether functional objectives were fully fulfilled.
User Feedback vs Test Results: User feedback captures subjective experiences and qualitative insights, while test results provide objective performance outcomes; effective evaluation combines both data sources.
Link evaluation points to the system’s goals because exam questions often reward explicit connections to requirements rather than generic observations. Always relate your answer back to performance, usability, or appropriateness.
Provide both strengths and limitations to show balanced critical analysis. Examiners look for evaluations that recognise achievements while acknowledging realistic constraints.
Cite both user feedback and test results in responses, because referencing objective and subjective indicators together demonstrates thorough understanding of what evaluation involves.
Confusing efficiency with accuracy leads students to discuss correctness instead of resource performance; accuracy belongs to testing, whereas evaluation examines long‑term operational performance.
Ignoring user perspectives results in incomplete evaluations because a system may be technically sound yet difficult to use. User feedback is essential for full assessment.
Assuming requirements are automatically met without evidence leads to vague or unsupported claims; explicit comparison against each requirement prevents this mistake.
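The explicit comparison described above can be sketched as a simple checklist. This is a minimal Python illustration, assuming a hypothetical set of requirements; the requirement texts and the `requirement_matching` helper are invented for the example, not taken from any real specification.

```python
# Hypothetical requirements paired with evidence gathered during evaluation:
# True means test results or feedback confirm the requirement was met.
requirements = {
    "Generate the monthly report in under 60 seconds": True,
    "Support 50 concurrent users": True,
    "Export records to CSV": False,  # evidence shows this was not delivered
}

def requirement_matching(results):
    """Split requirements into met and unmet, based on recorded evidence."""
    met = [r for r, ok in results.items() if ok]
    unmet = [r for r, ok in results.items() if not ok]
    return met, unmet

met, unmet = requirement_matching(requirements)
print(f"Met {len(met)} of {len(requirements)} requirements")
for r in unmet:
    print(f"Not yet met: {r}")
```

Walking through each requirement like this forces every claim of success to be backed by evidence, which is the habit the exam rewards.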
Evaluation links back to analysis because the original user requirements serve as the benchmark for judging success. This cyclical relationship ensures systems are developed and assessed coherently.
Evaluation informs future system versions by identifying change requests and improvement opportunities. This transforms evaluation into the first step of the next development cycle.
Evaluation supports organisational decision‑making by determining whether additional training, hardware upgrades, or workflow changes are necessary to maximise system value.