February 2026 update

Student looking at a tablet computer with an AI robot checking their work.

We’ve released a set of updates designed to give teachers more flexibility, improve assessment accuracy, and help students learn more effectively. This update includes a brand‑new marking mode for Tasks, improvements to how returned tasks behave, and the first stage of our upgraded AI marking system.

New Task marking mode

Tasks were always intended for assessment. They were designed to mimic a test or exam scenario where students can see all the questions, tackle them in any order, and go back and change their answers before finally submitting their responses for marking.

Over time, thanks to the sandbox nature of Smart Revise, it became apparent that there are many more ways to use tasks: not just for summative assessment, but in formative ways too. One typical approach is to set a task containing a number of questions and tackle them one by one, with the student self-assessing their answer before moving on to the next question.

Tasks now support this with a new marking mode: Self assessment (after question). Teachers can select this when setting a new task.

Option button for self assessment (after question).

Changes to returning a task to the student

Previously, when you returned a task to a student they lost all their answers and marking data, essentially requiring them to retake the task. This has been changed so that their answers, marking and feedback are retained, allowing a student to reflect on and improve their answers before resubmitting the task.

If you want one or more students to retake the task from scratch, you should copy the task instead. Doing this will create two entries in your mark book, one for each task. If you don't want to see both, use the tagging feature to give each task a tag; you can then use these tags to filter which tasks appear in your mark book.

Improvements to AI marking

We recently commissioned some research into how we can make AI marking more accurate. This update includes stage 1 of a 3-part plan.

We have upgraded to the latest OpenAI model, which is optimised for structured, rules-based tasks such as extracting key points from answers, checking correctness step by step and assigning more consistent marks. We have also significantly expanded the prompt to explain the marking principles and how to justify the marks awarded. Marking should begin to be more accurate because the AI is now instructed to:

  • Ignore minor spelling errors.
  • Accept all valid answers that demonstrate understanding, even if they are not strictly industry-standard approaches, to allow for the level of study.
  • Adopt a more structured approach, distinguishing between completely incorrect and partially correct answers.
  • Use a structured output schema so the feedback can be parsed more accurately (illustrated in the sketch below).
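For anyone curious about that last point, the sketch below shows roughly what a structured marking result could look like, written in Python. The field names and example values are purely illustrative assumptions, not Smart Revise's actual schema or the exact output the AI returns.

    # A minimal, hypothetical sketch of a structured marking result.
    # Field names are illustrative only; they are not Smart Revise's real schema.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class MarkingResult:
        question_id: str              # which question was marked
        marks_awarded: int            # marks the AI decided to give
        marks_available: int          # maximum marks for the question
        key_points_found: list[str]   # mark-scheme points identified in the answer
        justification: str            # short explanation of the marks awarded

    # Parsing a structured response like this is far more reliable than
    # pulling marks and feedback out of free-form text.
    raw = '''{
      "question_id": "q3",
      "marks_awarded": 2,
      "marks_available": 3,
      "key_points_found": ["defines a variable", "gives an example"],
      "justification": "Two of the three mark-scheme points are present."
    }'''

    result = MarkingResult(**json.loads(raw))
    print(asdict(result))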

This is just the start. We anticipate even greater improvements when stages 2 and 3 are implemented. Please remember that this is an improvement, but it is not, and probably never will be, perfect.

What’s coming next?

We are continuing to analyse the live data from goals. A second iteration of the feature will remove the "days" requirement from the Terms goal so that it behaves in the same way as Quiz and Advance. Although linking the Terms goal to the Leitner system felt like a good idea when we initially analysed the requirement, feedback has told us maybe not!

Part of the work on improving goals has also included investigating how to link them more strongly to the flight path. For example, instead of setting a minimum or maximum number of questions, you might be able to choose whether a whole class or individual students should follow the minimum expectation, target or aspirational trajectory on the flight path. We are still exploring that possibility.