Designing Success Criteria
Students, imagine building an app, website, or program and then hearing a user say, “It works, but how do I know if it is actually successful?” 🤔 That is exactly why designing success criteria matters in IB Computer Science HL. Success criteria are the measurable signs that show whether a solution meets the needs of the client and solves the problem effectively.
In the Computational Solution topic, success criteria connect the whole development process: they guide design, help during testing, and support evaluation at the end. In this lesson, you will learn how to define strong success criteria, why they must be measurable, and how they fit into the Internal Assessment process.
What success criteria are and why they matter
Success criteria are specific statements that describe what a finished solution must do to be considered successful. They are created from the client’s requirements and the problem definition. A good criterion is not vague. It should be clear enough that someone can test it and decide whether it has been met.
For example, if a school client wants a library booking system, a weak criterion might be: “The system should be easy to use.” That sounds nice, but it is hard to measure. A stronger version would be: “A new user can complete a book reservation in fewer than three steps.” This is much better because it can be tested directly.
Success criteria are important because they turn general needs into practical goals. They help the developer stay focused and avoid building extra features that do not solve the real problem. They also create a fair basis for evaluation later on. If a criterion says the system must generate a report in under 2 seconds, then the developer can measure performance and compare the result to that target.
In IB Computer Science HL, this matters especially for the Internal Assessment. The solution is not judged only by whether it runs. It is judged by whether it meets the needs of the client and whether there is evidence for that claim. Success criteria provide that evidence framework.
How to write strong success criteria
Students, strong success criteria usually have three qualities: they are specific, measurable, and relevant.
Specific
A criterion must clearly state what feature or behavior is expected. If a client asks for a school attendance tracker, “The system should manage attendance” is too broad. A specific version could be: “The system records each student as present, absent, or late for each lesson.”
Measurable
You must be able to test the criterion using observation, data, or a user test. Measurable does not always mean numbers only, but numbers help a lot. For instance, “The system loads the dashboard in under 3 seconds on a school laptop” can be checked easily.
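A timing criterion like this can be checked with a few lines of code. The sketch below is illustrative: `load_dashboard` is a hypothetical stand-in for whatever actually starts the app, and the 3-second target comes from the criterion itself.

```python
import time

def load_dashboard():
    """Hypothetical placeholder for the app's real dashboard startup code."""
    time.sleep(0.5)  # simulate loading work

# Measure elapsed wall-clock time around the operation under test.
start = time.perf_counter()
load_dashboard()
elapsed = time.perf_counter() - start

print(f"Dashboard loaded in {elapsed:.2f} s")
print("Criterion met" if elapsed < 3.0 else "Criterion NOT met")
```

Running the measurement several times and recording each result gives stronger evidence than a single trial, since load times vary between runs.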
Relevant
The criterion should relate directly to the client’s real needs. If the client only needs a booking system, a built-in chat feature may not be relevant. A good criterion always connects to the problem being solved.
A useful way to write criteria is to start with phrases like:
- “The system will…”
- “The user can…”
- “The solution must…”
- “The output will…”
These phrases help keep the criterion focused on an observable result.
Here is an example set for a revision planner app:
- “The user can create, edit, and delete study tasks.”
- “The app sends a reminder at a time chosen by the user.”
- “The weekly overview displays all tasks with their due dates.”
- “A task can be completed and moved to a completed list.”
These are much stronger than “The app should be useful” because they can be tested one by one.
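To see why these criteria are testable one by one, consider a minimal sketch of the planner. All class and method names here are illustrative, not taken from any real app; each criterion above maps to one observable behavior.

```python
class Planner:
    """Toy revision planner: tasks with due dates plus a completed list."""

    def __init__(self):
        self.tasks = {}       # task name -> due date string
        self.completed = []   # names of finished tasks

    def create(self, name, due):
        self.tasks[name] = due

    def edit(self, name, new_due):
        self.tasks[name] = new_due

    def delete(self, name):
        del self.tasks[name]

    def complete(self, name):
        # Moving a task satisfies "completed and moved to a completed list".
        self.completed.append(name)
        del self.tasks[name]

    def weekly_overview(self):
        # All open tasks with their due dates, earliest first.
        return sorted(self.tasks.items(), key=lambda item: item[1])

planner = Planner()
planner.create("Revise trees", "2026-03-02")
planner.create("Past paper 1", "2026-03-01")
planner.complete("Past paper 1")
print(planner.weekly_overview())  # [('Revise trees', '2026-03-02')]
print(planner.completed)          # ['Past paper 1']
```

Each method corresponds directly to a criterion, so a test plan can call one method, observe the result, and record pass or fail.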
Linking success criteria to client needs and design
Success criteria do not come from nowhere. They are built from the client brief, interviews, observations, and the problem analysis stage. In the IB process, the developer first identifies the problem, then gathers user requirements, and then turns those requirements into success criteria.
This is important because the criteria should reflect the client’s actual priorities. For example, a sports coach may care most about fast data entry and accurate statistics. A teacher may care about accessibility and simple navigation. A good design does not guess; it responds to evidence from the client.
Let’s say the client wants a system for tracking club members. From the interview, the developer learns that the client needs:
- fast signup of new members
- automatic age verification
- monthly attendance reports
- protection of personal data
These needs can become success criteria such as:
- “The system allows a new member to be added in fewer than 30 seconds.”
- “The system checks that the member’s age is at least the required minimum before saving.”
- “The system generates a monthly attendance report in PDF format.”
- “Access to personal data is restricted to users with administrator privileges.”
Notice how each one is linked to a real requirement. That connection is what makes the criteria useful in both development and evaluation.
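The age-verification criterion, for instance, can be expressed as a single check that runs before a record is saved. This is a sketch under assumptions: the minimum age of 12 and the function name are invented for illustration, and the real value would come from the client.

```python
from datetime import date

MINIMUM_AGE = 12  # assumed club rule; the real minimum comes from the client

def is_old_enough(birth_date, today=None):
    """Return True if the member meets the minimum age on the given day."""
    today = today or date.today()
    # Subtract one year if this year's birthday has not happened yet.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age >= MINIMUM_AGE

# A signup routine would call this check before saving the record.
print(is_old_enough(date(2010, 5, 1), today=date(2026, 1, 15)))  # False: age 15? No — 15 >= 12, True
```

A wrapper that refuses to save when the check fails would give direct evidence that the criterion is met.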
Common types of success criteria in IB Computer Science HL
Success criteria can focus on different parts of a solution. In Computational Solution, it is helpful to think about several categories.
Functional criteria
These describe what the system does. For example:
- “The system stores usernames and passwords securely.”
- “The program sorts student results from highest to lowest.”
- “The app calculates the total cost of a shopping basket.”
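A functional criterion like the sorting example can be verified with a one-line check. The student names and scores below are made up for illustration.

```python
# Illustrative data: student name -> exam score.
results = {"Ana": 68, "Ben": 74, "Chen": 59}

# Sort from highest to lowest score, as the criterion requires.
ranked = sorted(results.items(), key=lambda item: item[1], reverse=True)
print(ranked)  # [('Ben', 74), ('Ana', 68), ('Chen', 59)]
```

Because the expected order is known in advance, the output can be compared against it directly in a test table.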
Usability criteria
These describe how easy the solution is to use. For example:
- “A first-time user can complete the main task without external help.”
- “Buttons are clearly labeled and visible on a phone screen.”
Performance criteria
These describe speed or efficiency. For example:
- “Search results appear in under 2 seconds for a database of 1,000 records.”
- “The system saves a new record without noticeable delay.”
Data and security criteria
These describe accuracy, privacy, or safe handling of information. For example:
- “The system validates all email addresses before submission.”
- “Only authorized users can edit confidential records.”
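The email-validation criterion can also be made concrete. The pattern below is a deliberately simple sketch, not a full standards-compliant check; a real system would use a well-tested library, and the function name is invented for illustration.

```python
import re

# Simple rule of thumb: exactly one "@" and at least one "." in the domain.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address):
    """Return True if the address matches the simple pattern above."""
    return bool(EMAIL_PATTERN.match(address))

print(is_valid_email("student@school.org"))  # True
print(is_valid_email("not-an-email"))        # False
```

Testing this criterion means trying both valid and invalid addresses and recording the result each time.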
Output criteria
These describe the form or quality of the final output. For example:
- “The report includes totals, averages, and a chart.”
- “The invoice is printable and formatted on one page.”
Using different types helps ensure the solution is balanced. A system that is fast but confusing is not truly successful. A system that looks good but gives wrong data is also not successful.
Testing success criteria during development
Success criteria are not only written for the final report. They also guide testing during development. Each criterion should be matched with a test method and evidence.
For example:
- Criterion: “The user can reset their password using a valid email address.”
- Test: Enter a registered email address and check that a reset link is sent.
- Evidence: Screenshot of the successful reset message.
- Criterion: “The dashboard opens in under 3 seconds.”
- Test: Measure the loading time on the target device.
- Evidence: Timing results from several trials.
- Criterion: “The system rejects invalid dates.”
- Test: Enter 31/02/2026 and observe the validation message.
- Evidence: Screenshot of the error message.
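The invalid-date test above can be automated. A short sketch using Python's `datetime` module, which raises `ValueError` for dates that do not exist in the calendar (the function name is invented for illustration):

```python
from datetime import date

def is_valid_date(day, month, year):
    """Return True only for dates that actually exist in the calendar."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

print(is_valid_date(31, 2, 2026))  # False: February has no 31st
print(is_valid_date(28, 2, 2026))  # True
```

Automated checks like this complement screenshots: the screenshot shows the user-facing error message, while the code confirms the rule behind it.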
Testing works best when each criterion can be checked directly. If a criterion is vague, testing becomes difficult. That is why strong wording matters from the start.
When writing your IA, you should make sure each success criterion can be linked to a test table or testing section. This shows the examiner that your design choices are evidence-based, not random.
Evaluating the solution using success criteria
At the end of development, success criteria become the basis for evaluation. This means you compare the finished solution to each criterion and decide whether it was fully met, partly met, or not met.
For example, suppose the criterion is: “The system generates a weekly attendance summary.” If the final product does generate the summary correctly, then that criterion is met. If it generates the summary but only for one class instead of all classes, then it may be partly met. The evaluation should explain this clearly and use evidence such as screenshots, output files, or user feedback.
A strong evaluation does more than say “yes” or “no.” It explains the result and, if needed, suggests improvements. For example: “The system met the criterion for report generation, but the report only exports as CSV. To better satisfy the client, a PDF export option should be added.”
This is a key part of IB Computer Science HL because it shows reflection and problem-solving. The student must prove that the solution was judged against the original requirements, not just by personal opinion.
Conclusion
Designing success criteria is a central skill in Computational Solution, students. It helps transform client needs into clear targets for development, testing, and evaluation. Good criteria are specific, measurable, and relevant. They should be based on the client’s real needs and should be written in a way that allows direct testing.
In the Internal Assessment, success criteria give structure to your project. They connect the problem analysis to the final evaluation and show whether the solution truly works for the client. When you write them well, you make the whole development process more focused, fair, and effective. ✅
Study Notes
- Success criteria are clear, testable statements that describe what a successful solution must achieve.
- They are created from client requirements and problem analysis.
- Strong criteria are specific, measurable, and relevant.
- Weak criteria are vague, such as “easy to use” or “good design.”
- Better criteria include observable features, numbers, or testable outcomes.
- Success criteria can cover function, usability, performance, security, and output quality.
- Each criterion should be linked to testing evidence during development.
- In evaluation, the final solution is compared against each criterion.
- Success criteria are essential in the IB Computer Science HL Internal Assessment because they show whether the solution meets the client’s needs.
- Good criteria help the developer stay focused, test effectively, and evaluate fairly.
