💪 Chapter 7.6: Evaluation – Did We Build the Right Thing?
Hey there, IGCSE stars! You’ve done all the hard work: figuring out what the client needs (Analysis), planning the system (Design), building it (Development), testing it, and putting it into action (Implementation).
Now comes the final, crucial stage: Evaluation. This is where you judge the success of your ICT solution. Think of it like a final school report for your new system—you check what it does well and where it needs improvement.
Why is Evaluation Important?
Even if the system works (it passed the tests!), it might not be perfect for the users or the business. Evaluation ensures the system meets the original goals and provides value.
📌 The Three Pillars of Solution Evaluation
When you evaluate an ICT solution, you focus on three key areas defined by the original requirements. Remember these three terms!
1. Efficiency of the Solution
Efficiency refers to how well the system performs its tasks, especially in terms of speed and resource usage. An efficient system gets the job done quickly without wasting time or computer power.
Think of it like this: If you write a database query that takes 30 seconds to run, that’s inefficient. An efficient system would run the same query in 3 seconds.
- Processing Speed: How fast does the system process data? (E.g., generating reports, updating records).
- Resource Usage: Does the system place too much strain on the network, CPU, or memory?
- Minimising Errors: An efficient system should minimise the chance of errors, saving time on correcting mistakes.
Key Takeaway: Efficiency is about speed and using resources smartly.
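To make "processing speed" concrete, you can actually measure how long a task takes. Here is a minimal Python sketch (the `generate_report` function and the sample data are made up for illustration) that times a report-generating task, just as an evaluator might when checking efficiency:

```python
import time

def generate_report(records):
    """Hypothetical report task: total up the sales figures in a batch of records."""
    return sum(record["sales"] for record in records)

# Sample data standing in for a real database table
records = [{"sales": n} for n in range(100_000)]

start = time.perf_counter()          # start the stopwatch
total = generate_report(records)
elapsed = time.perf_counter() - start  # stop the stopwatch

print(f"Report generated in {elapsed:.3f} seconds (total sales: {total})")
```

If the measured time is well above the target set in the requirements (say, five seconds), that is evidence of an inefficiency to record in the evaluation.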
2. Ease of Use (Usability) of the Solution
Ease of Use (often called usability) judges how simple and intuitive the system is for the end-user. If the system is technically perfect but nobody can figure out how to use it, it’s a failure!
- Interface Design: Is the layout clear, consistent, and logical?
- Input Forms: Are data entry forms easy to navigate? Are instructions clear?
- Error Handling: Does the system provide helpful, non-jargon error messages?
- Training Required: If the system is very easy to use, less training is needed, saving the company money and time.
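The error-handling point above can be illustrated with a small sketch. This hypothetical data-entry check (the function name and messages are invented for this example) returns a plain-English message instead of crashing or showing technical jargon:

```python
def validate_age(text):
    """Hypothetical data-entry check that returns a friendly, jargon-free message."""
    try:
        age = int(text)
    except ValueError:
        # A helpful message, not something like "ValueError: invalid literal"
        return "Please type the age using numbers only, e.g. 34."
    if not 0 <= age <= 120:
        return "That age looks unusual - please check and try again."
    return "OK"

print(validate_age("abc"))  # friendly message instead of a crash
print(validate_age("34"))   # OK
```

An evaluator would judge messages like these far more usable than raw error codes, because the user can fix the problem without asking for help.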
Analogy: Imagine setting up a complicated TV remote (hard to use) versus using a simple mobile app (easy to use). Usability is key to user adoption!
Key Takeaway: Ease of use is about the user experience—how quickly and accurately users can achieve their goals.
3. Appropriateness of the Solution
Appropriateness means checking if the system actually does what it was designed to do and solves the original business problem.
- Meets Requirements: Does the system fulfil all the inputs, outputs, and processing requirements identified during the Analysis stage?
- Hardware/Software: Is the selected hardware and software suitable for the job and the environment? (E.g., using robust industrial hardware in a factory setting).
- Scale: Can the system handle the volume of data and number of users required?
Example: If a company needed a system to manage large stock levels globally, but you delivered a simple spreadsheet, the solution is not appropriate, even if the spreadsheet is easy to use.
Key Takeaway: Appropriateness means the solution is the right tool for the job.
✎ Quick Review Check
To remember the three main criteria, think of the acronym E-E-A:
Efficiency, Ease of Use, Appropriateness.
📜 Step 1: Comparing with Original Requirements
The most important part of evaluation is comparing the final system against the original task requirements set out in the Analysis stage.
You need to go through the System Specification checklist and mark off which requirements have been met and which haven't. This should be a structured, point-by-point comparison, not a vague overall impression.
For example:
- Requirement: "The system must allow input of customer details including date of birth."
- Evaluation Result: Yes, it meets this, and the date format validation works correctly.
- Requirement: "Output reports must be generated in under five seconds."
- Evaluation Result: No, the complex monthly report takes 15 seconds. (This reveals an inefficiency/limitation!)
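One way to picture this structured comparison is as a simple checklist in code. The sketch below uses made-up requirement entries (based on the examples above) to show how each requirement is marked as met or not met, and how the unmet ones feed straight into the list of limitations:

```python
# Hypothetical requirements checklist: each entry pairs a requirement
# from the Analysis stage with the evaluation result.
requirements = [
    {"requirement": "Allow input of customer details including date of birth",
     "met": True,
     "note": "Date format validation works correctly"},
    {"requirement": "Output reports generated in under five seconds",
     "met": False,
     "note": "Complex monthly report takes 15 seconds"},
]

# Print a structured, point-by-point comparison
for item in requirements:
    status = "MET" if item["met"] else "NOT MET"
    print(f"{status}: {item['requirement']} ({item['note']})")

# Unmet requirements become the limitations to address in Step 2
unmet = [item for item in requirements if not item["met"]]
print(f"{len(unmet)} limitation(s) to address in the improvements list")
```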
🤔 Step 2: Identifying Limitations and Improvements
No system is perfect, especially when first built. After comparing the system to the requirements, you must formally record its shortcomings and suggest ways to fix them.
Limitations
Limitations are the features, functions, or performance aspects where the system fails to meet the required standard.
- Example 1 (Usability): The menu structure is too deep, requiring six clicks to access the payroll feature.
- Example 2 (Efficiency): The database response time slows down significantly when more than 50 users are logged in simultaneously.
- Example 3 (Appropriateness): The system was designed for text input, but the user now needs to upload images, a feature that was missed in the initial analysis.
Necessary Improvements
Based on the limitations, you must provide necessary improvements. These are specific recommendations on how to upgrade or change the system to overcome the limitations and better meet the user needs.
- For Example 1: Suggest creating a quick-access shortcut for the payroll feature or redesigning the main menu.
- For Example 2: Recommend upgrading the server hardware (CPU/RAM) or optimising the database query structure.
- For Example 3: Add a file upload module and ensure the database can store the file path correctly.
Remember: Improvements should be realistic and directly address the recorded limitations.
👥 Step 3: Evaluating Users' Responses to Testing
The users are the people who will work with the system every day. Their opinion is essential! This part of the evaluation involves collecting and assessing feedback from the people who tested or piloted the new system.
Methods for Gathering User Feedback
- Questionnaires: Using surveys to gather structured feedback (e.g., rating the interface simplicity from 1 to 5).
- Interviews: Having face-to-face discussions to understand complex issues or get detailed suggestions.
- Observation: Watching users interact with the system to see where they struggle or make mistakes (revealing usability issues).
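Questionnaire results are easy to summarise with a few lines of code. This sketch uses invented ratings (interface simplicity scored 1 to 5, as in the example above) to show how an evaluator might turn raw feedback into evidence:

```python
# Hypothetical questionnaire results: interface simplicity rated 1 (poor) to 5 (excellent)
ratings = [4, 5, 3, 4, 2, 5, 4, 3]

average = sum(ratings) / len(ratings)
low_scores = [r for r in ratings if r <= 2]  # users who found it hard to use

print(f"Average simplicity rating: {average:.1f} out of 5")
print(f"{len(low_scores)} of {len(ratings)} users rated it 2 or below")
```

A low average, or a cluster of low scores, points to an ease-of-use limitation that should appear in the evaluation report.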
What to Evaluate in User Responses
You are looking for feedback on the practical aspects of the system:
- Training Needs: Did the users find the training sufficient? Did they need a lot of help? (Relates to Ease of Use).
- Perceived Speed: Did the users feel the system was fast enough during their daily tasks? (Relates to Efficiency).
- Accuracy/Reliability: Did they encounter frequent bugs or data entry difficulties?
- Suggestions: What features did users wish they had, or what could make their job easier? (These lead directly to Improvements).
❌ Common Mistake to Avoid
Students often confuse Testing and Evaluation.
Testing proves the system works (i.e., 'The save button saves the file').
Evaluation judges if the system is good enough (i.e., 'The save process is too slow' or 'The save button is in a difficult place').
Evaluation uses the results of testing (and user feedback) to make a judgement.
📖 Summary and Key Takeaways
Evaluation is the final stage of the Systems Life Cycle. It is a critical review that checks whether the system delivers real value to the client.
A complete evaluation report must cover these five points:
- Judgement on Efficiency (speed and resource use).
- Judgement on Ease of Use (usability).
- Judgement on Appropriateness (solving the original problem).
- A detailed comparison showing how the system meets (or fails to meet) the original requirements.
- A list of Limitations and corresponding Necessary Improvements based on technical findings and user feedback.