// CEFR · PLACEMENT
CEFR proficiency mapper.
Place a student at A1, A2, B1, B2, C1, or C2 using the standard CEFR Can-Do statements. Pick the mode that matches the moment.
// HOW IT WORKS
What this tool does.
The CEFR mapper places a language student at A1, A2, B1, B2, C1, or C2 using the standard Council of Europe Can-Do framework. Two modes, depending on what you have to work with.
Rule-based mode runs entirely in your browser. Click through Can-Do statements ("can introduce themselves and answer simple personal questions", "can write a short connected text on familiar topics") and answer Yes / Partial / No. The level falls out the bottom. No data leaves your device. Use it when you're with the student and want to walk through the rubric together.
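The rule-based scoring described above can be sketched as a small function. Everything here is a hypothetical illustration, not the tool's actual rubric: the Yes/Partial/No weights (1, 0.5, 0) and the two-thirds pass threshold are assumptions.

```typescript
// Hypothetical sketch of a Can-Do scorer: each answer scores
// Yes = 1, Partial = 0.5, No = 0, and the student is placed at the
// highest level where they clear an (assumed) 2/3 pass threshold.
type Level = "A1" | "A2" | "B1" | "B2" | "C1" | "C2";
type Answer = "yes" | "partial" | "no";

const LEVELS: Level[] = ["A1", "A2", "B1", "B2", "C1", "C2"];
const SCORE: Record<Answer, number> = { yes: 1, partial: 0.5, no: 0 };

function placeStudent(answers: Record<Level, Answer[]>): Level {
  let placed: Level = "A1";
  for (const level of LEVELS) {
    const responses = answers[level] ?? [];
    if (responses.length === 0) break; // no statements answered at this level
    const total = responses.reduce((sum, a) => sum + SCORE[a], 0);
    // Clear the level if the student scores at least 2/3 of the
    // available points; otherwise stop climbing.
    if (total / responses.length >= 2 / 3) placed = level;
    else break;
  }
  return placed;
}
```

Under this assumed threshold, a student who answers Yes to everything at A1 and A2 but only Partial at B1 places at A2.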
AI mode reads a writing sample (50–500 words) and places the student based on what they actually produced. Better for new enquiries where the self-report is vague ("I think she's intermediate") and you want a sanity check before lesson one.
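A minimal sketch of how a client might build the AI-mode request against Anthropic's Messages API. The model ID, prompt wording, and token budget are illustrative assumptions, not the tool's actual code.

```typescript
// Hypothetical request builder for AI-mode placement. Model ID,
// prompt wording, and max_tokens are assumptions for illustration.
const MODEL = "claude-3-5-sonnet-20241022"; // assumed model ID

interface PlacementRequest {
  model: string;
  max_tokens: number;
  messages: { role: "user"; content: string }[];
}

function buildPlacementRequest(sample: string): PlacementRequest {
  // Anonymise the sample before calling this -- the student's name
  // has no reason to be in it.
  const words = sample.trim().split(/\s+/).filter(Boolean);
  if (words.length < 50 || words.length > 500) {
    throw new Error(`sample must be 50-500 words, got ${words.length}`);
  }
  return {
    model: MODEL,
    max_tokens: 512,
    messages: [
      {
        role: "user",
        content:
          "Place this writing sample on the CEFR scale (A1-C2). " +
          "Reply with the level, a one-line confidence note, and the " +
          "strongest signals behind the placement.\n\n" + sample,
      },
    ],
  };
}
```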
What it isn't. A placement test for institutional reporting. If a school, language centre, or exam-prep cohort needs a moderated placement, use a four-skills test that includes reading, listening, and speaking. This is for the private-tutor decision: do I start at A2 or B1? The output is a level + a short confidence note. Use it to set the first lesson, then adjust after one or two real sessions.
Use it when: a parent enquiry arrives describing a student as "intermediate" and you have a 10-minute writing sample to look at, or when you're walking a returning student through where they sit on the framework so they can see their own progress.
// FAQ
Honest answers.
Will this replace a real placement test?
No. It replaces the moment in lesson one where you're guessing whether to start at A2 or B1 because the parent enquiry email said "intermediate".
For institutional placement (school, language centre, exam-prep cohort), use a moderated test with reading, listening, speaking, and writing components. This is for the private-tutor decision before the first lesson.
Why two modes?
Rule-based runs entirely in your browser — pick Yes / Partial / No against CEFR Can-Do statements, get a level, no data leaves your device.
AI mode reads a writing sample (50–500 words) and places the student based on what they actually produced — better for new students who give vague self-reports.
Is the writing sample private?
AI mode sends the sample to Anthropic's API for placement. We don't store it, log it, or share it. Rule-based mode never leaves your browser at all.
Whichever you pick, anonymise the sample first — there's no reason for the student's name to be in it.
What does the placement actually return?
A level (A1–C2), a confidence note, and the strongest signals the model used to place there. Use it to set the level for the first lesson plan and the first worksheet — and adjust after one or two real sessions.
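The return shape described above might look like this in code. Field names and the example values are illustrative assumptions, not the tool's actual schema.

```typescript
// Assumed shape of the placement output -- field names are
// illustrative, not the tool's actual schema.
type Level = "A1" | "A2" | "B1" | "B2" | "C1" | "C2";

interface PlacementResult {
  level: Level;
  confidenceNote: string; // e.g. "solid B1; complex clauses still break down"
  signals: string[];      // strongest evidence behind the placement
}

// Illustrative example of what a result might contain:
const example: PlacementResult = {
  level: "B1",
  confidenceNote: "confident B1; complex clauses still break down",
  signals: ["connected text on familiar topics", "limited linking devices"],
};
```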