// MARKING · AI
Marking accelerator.
Paste a student's writing — get categorised error highlights plus three ready-to-paste feedback variants. The model speeds the work; you stay in charge of the call.
// HOW IT WORKS
What this tool does.
Marking is the part of tutoring that scales worst. A 200-word essay takes three minutes to read and twenty minutes to write feedback for, especially if you're switching between three students at three levels. Most of the twenty minutes is mechanical work: spotting which errors are systemic vs. one-off, sorting them into categories, picking which to flag and which to ignore, then writing the same kind of feedback in the same kind of voice you wrote last week.
Paste a sample (or upload a file or photo — we extract the text via OCR, you review it, the marking model only ever sees text). The output is categorised error highlights plus three drop-in feedback variants: warm (encouraging, suitable for a teen), direct (faster, suitable for an adult who's paying for honesty), and rubric-mapped (band descriptors if you've named an exam scheme). Pick one, edit a sentence, send.
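For the technically curious, the output is structured roughly like this. A sketch only: the type and field names below are illustrative, not a published API.

```ts
// Illustrative sketch of the marking output.
// Type and field names are hypothetical, not a published API.
type ErrorCategory =
  | "agreement"   // e.g. subject-verb agreement
  | "article"
  | "register"
  | "spelling"
  | "word-order";

interface Highlight {
  span: [start: number, end: number]; // character offsets into the pasted text
  category: ErrorCategory;
  systemic: boolean;                  // recurring pattern vs. one-off slip
  note: string;                       // one-line explanation
}

interface MarkingResult {
  highlights: Highlight[];
  variants: {
    warm: string;         // encouraging, teen-friendly
    direct: string;       // for the adult who's paying for honesty
    rubricMapped: string; // band descriptors, if you named an exam scheme
  };
}
```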
What it isn't. Authoritative grading. The model categorises and proposes; you make the call. It's also not a moderated rubric — if you mark for a real exam board, treat its feedback as draft language, not as grade-bearing. Reading a long sample takes the model 5-15 seconds; cursive handwriting OCR isn't perfect; and long C1-C2 samples sometimes contain an obscure-but-valid construction the model under-credits.
Use it when: you have five essays to mark before tomorrow morning and want to spend the time on the one student who needs the actual conversation, not on writing the same paragraph about subject-verb agreement four times.
// FAQ
Honest answers.
How accurate is the marking?
Suggestion-quality, not authoritative. The model is good at categorising visible errors (subject-verb agreement, articles, register slips) and proposing feedback in three voices. It is not a moderated rubric and not exam-board accurate.
The output does the mechanical sorting for you so the call — what to flag, what to ignore, how to phrase it — stays yours.
Does my student's writing leave my browser?
Yes. Uploaded photos are sent to Google Cloud Vision for OCR; text (pasted or extracted) is sent to Anthropic for the marking. Both are processing-only: we don't store the text, log the body, share it, or sell it.
The Vision call happens from your browser, before marking; the marking model only ever sees text, never the image. Remove names and identifying details before pasting.
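If you want the mechanics, the browser-side OCR step looks roughly like this. A sketch assuming Cloud Vision's standard text-detection request; credential handling is simplified for illustration.

```ts
// Sketch of the browser-side OCR step, assuming Cloud Vision's standard
// images:annotate endpoint. Credential handling is simplified for illustration.
async function extractText(imageBase64: string, apiKey: string): Promise<string> {
  const res = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${apiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        requests: [{
          image: { content: imageBase64 },                  // the photo, base64-encoded
          features: [{ type: "DOCUMENT_TEXT_DETECTION" }],  // dense/handwritten text
        }],
      }),
    },
  );
  const data = await res.json();
  // fullTextAnnotation carries the full OCR'd text; empty if nothing was detected
  return data.responses?.[0]?.fullTextAnnotation?.text ?? "";
}
```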
What if OCR misreads cursive?
It happens, especially on long handwritten samples. The extracted text drops into the textarea below the upload zone — review it, fix any misreads, then click Mark.
The marking model only ever sees the text you've reviewed, so a clean OCR pass means clean feedback.
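Concretely, the Mark button reads the textarea at click time and sends that, nothing else. A sketch (the element IDs and the "/api/mark" endpoint are hypothetical):

```ts
// Sketch: marking sends only the reviewed textarea contents.
// Element IDs and the "/api/mark" endpoint are hypothetical.
const textarea = document.querySelector<HTMLTextAreaElement>("#extracted-text")!;

document.querySelector("#mark-button")?.addEventListener("click", async () => {
  const reviewed = textarea.value; // your corrections included; the image is never sent
  const res = await fetch("/api/mark", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: reviewed }),
  });
  const result = await res.json(); // highlights + three variants, as sketched above
  // ...render highlights and the warm / direct / rubric-mapped variants
});
```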
Can I use it for marking schemes I already use?
Add a rubric tag (e.g., "GCSE Spanish writing 90-word essay") in the optional field and the model maps feedback to that scheme. Leave it blank if you mark freely.
The three feedback variants (warm / direct / rubric-mapped) give you something to copy-paste into your normal feedback channel without rewriting from scratch.
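Under the hood, the tag is just an extra instruction folded into the prompt. A sketch; the wording below is illustrative, not our production prompt.

```ts
// Sketch of how an optional rubric tag shapes the marking prompt.
// Prompt wording is illustrative, not the production prompt.
function buildPrompt(studentText: string, rubricTag?: string): string {
  const rubricLine = rubricTag
    ? `For the rubric-mapped variant, map feedback to the band descriptors of: ${rubricTag}.`
    : "No exam scheme named; keep the third variant scheme-neutral.";
  return [
    "Categorise the errors in the student writing below.",
    "Draft three feedback variants: warm, direct, rubric-mapped.",
    rubricLine,
    "--- STUDENT WRITING ---",
    studentText,
  ].join("\n");
}
```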