
Huge improvement to AI marking

The large language model has been upgraded to improve the range and accuracy of AI marking.

Although it costs ten times more to operate, we have upgraded the model used by AI marking for Tasks. Marking is now considerably more accurate, and more questions can be marked by AI, including high-mark questions that require lines of reasoning. Where a question is based on a case study, AI marking now also has that context. Even better, we are not passing the cost of this upgrade on to users.

No personal data is ever passed to the external OpenAI large language model (LLM). We attach a token to each student answer before it is sent to the LLM; when the response comes back, the token is matched to the original question and answer, so the whole process remains anonymous.
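As a rough sketch of how that token matching works in principle (the names and structure below are illustrative, not Smart Revise's actual code):

```python
import uuid

# Illustrative sketch only; not Smart Revise's actual implementation.
pending = {}  # token -> original question/answer record, held locally

def prepare_for_llm(question_text, student_answer):
    """Attach a random token; no student identity is included in the payload."""
    token = uuid.uuid4().hex
    pending[token] = {"question": question_text, "answer": student_answer}
    # Only the token, the question and the answer text leave the platform.
    return {"token": token, "question": question_text, "answer": student_answer}

def handle_llm_response(response):
    """Match the returned token back to the locally held question and answer."""
    record = pending.pop(response["token"])
    record["marking"] = response["marking"]
    return record
```

The student's identity never forms part of the payload; the token alone is enough to reattach the marking when it returns.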

You may be surprised to learn that Smart Revise does not pass the mark scheme to the LLM. We found that doing so often restricted the responses, making it harder to credit answers that are correct but not listed in the mark scheme. Marking is more accurate when the LLM draws on its vast training data to judge whether an answer is correct, rather than having its scope narrowed.
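To illustrate that design choice, a marking prompt assembled along these lines would include the question, the maximum marks and any case study, but no mark scheme (the function name and wording are hypothetical, not the prompts Smart Revise actually sends):

```python
def build_marking_prompt(question_text, max_marks, case_study=None):
    """Assemble a prompt with the question, marks and any case study, but no mark scheme."""
    parts = []
    if case_study:
        parts.append("Case study:\n" + case_study)
    parts.append(f"Question ({max_marks} marks):\n{question_text}")
    parts.append(
        f"Mark the student's answer out of {max_marks}, crediting any point "
        "that is factually correct, even if it is not a textbook answer."
    )
    return "\n\n".join(parts)
```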

After extensive testing we have been unable to mark questions requiring a calculation reliably, so for now these questions cannot be marked by AI.