AI Questions 1.3: Auto-Grade Handwritten Papers, Essays & Bubble Sheets — With Per-Answer Analysis
Grading exams used to take faculty days. Now it takes seconds.
For the past year Intrazero's AI Questions platform has been generating curriculum-aligned exam questions for universities, with over 10,000 questions delivered and Ajman University running the platform in production through their Moodle plugin.
Today we're shipping the other half: AI Questions can now grade the exams it helps create. Three new services join question generation in a single API:
- Auto-Grading — upload scanned answer sheets (PDF or image); the AI OCRs the handwriting, compares it against your answer key, and returns per-question scores with feedback.
- Essay Marking — paste an essay or upload a scan, define a rubric (or let the AI derive one from a model exemplar), and get per-criterion scores with evidence quotes pulled from the student's text.
- Bubble Sheet OMR — drop in a stack of multiple-choice bubble sheets; the AI reads the marks, decodes the QR student ID, and produces a fully scored gradebook in seconds.
The headline feature: structured per-answer analysis
Every graded answer now comes with an analysis JSON object that goes far beyond a simple correct/incorrect verdict. For each question, the AI returns:
- Correctness: fully_correct, partially_correct, incorrect, or blank
- Key concepts covered — what the student got right
- Key concepts missing — important ideas they didn't address
- Errors — typed (factual / conceptual / computational / argumentation / grammar) with severity
- Strengths — specific things the student did well
- Improvement suggestions — actionable, question-specific tips
- Bloom's taxonomy level — recall, understanding, application, analysis, evaluation, or creation
- Confidence — the AI's certainty in its analysis (0–1), so you can flag low-confidence cases for human review
For paper exams, a handwriting sub-object adds: legibility, neatness score (0–10), detected language and script, whether the student crossed out marks or made visible corrections, plus free-form notes. For essays, you also get a language quality breakdown (grammar / clarity / structure / vocabulary, each scored 0–10), argument structure, evidence use, and an originality assessment.
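As a sketch of how a client might consume this analysis object, here's a small triage routine that flags answers for human review when confidence is low or a high-severity error is present. The field names mirror the list above; the exact response envelope, the sample values, and the review threshold are illustrative assumptions, not the platform's documented schema.

```python
# Sketch: triage graded answers by the AI's confidence score.
# Field names follow the analysis object described above; the sample
# values and the threshold are assumptions for illustration only.

REVIEW_THRESHOLD = 0.75  # hypothetical cutoff; tune per institution


def needs_human_review(analysis: dict) -> bool:
    """Flag answers the AI is unsure about, or that carry high-severity errors."""
    if analysis["confidence"] < REVIEW_THRESHOLD:
        return True
    return any(err["severity"] == "high" for err in analysis.get("errors", []))


# Example analysis object shaped like the fields listed above.
sample = {
    "correctness": "partially_correct",
    "key_concepts_covered": ["photosynthesis inputs"],
    "key_concepts_missing": ["role of chlorophyll"],
    "errors": [{"type": "conceptual", "severity": "medium",
                "description": "Confuses respiration with photosynthesis"}],
    "strengths": ["Correctly lists CO2 and water as inputs"],
    "improvement_suggestions": ["Revisit the light-dependent reactions"],
    "blooms_level": "understanding",
    "confidence": 0.62,
}

print(needs_human_review(sample))  # True: confidence below threshold
```

Routing low-confidence results to a human reviewer is exactly what the 0–1 confidence field is designed for; the severity check is one plausible extra guard a consumer might add.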
Localised to the exam language
If you submit an Arabic exam with language=ar, every analysis field comes back in Arabic — error descriptions, strengths, improvement tips, even handwriting notes. Same for English, French, German, Spanish, Turkish, and 13 other languages. Teachers see feedback their students can actually read.
What it looks like in practice
Take a real Arabic placement-test paper graded today: the student wrote "٣" when the expected answer was "٩". The platform returns:
- Score: 0 / 20
- Correctness: incorrect
- Errors: { type: factual, severity: high, description: "الإجابة على عدد الفصول خاطئة حيث أجاب الطالب 3 بدلاً من 9" ("The answer on the number of chapters is wrong: the student answered 3 instead of 9") }
- Improvement: "مراجعة المادة المتعلقة بعدد الفصول في كتاب 'الأيام' لطه حسين قبل الإجابة" ("Review the material on the number of chapters in Taha Hussein's 'The Days' before answering")
- Handwriting: legibility clear, neatness 7/10, script Latin, no crossings-out
That's the kind of feedback that takes a faculty member five minutes per student to write — generated in two seconds, in the language of the exam, ready to ship straight back to the learner.
Built for any LMS
The whole platform is one REST API. Each customer gets an entity API key and a quota; calls are authenticated via Authorization: Bearer. Everything is documented at the interactive API reference — fully Swagger-tested with cURL snippets, schemas, and live "Try it out" support.
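To show the authentication scheme concretely, here's a minimal request sketch. The endpoint path and payload fields are hypothetical placeholders (the real schema lives in the Swagger reference); only the `Authorization: Bearer` header matches the text above, and nothing is actually sent over the network.

```python
import json
import urllib.request

# Sketch of building an authenticated call with an entity API key.
# The path "/api/v1/auto-grading" and the payload fields are hypothetical;
# consult the interactive API reference for the real schema.
API_KEY = "YOUR_ENTITY_API_KEY"

req = urllib.request.Request(
    "https://aiquestions.intrazero.com/api/v1/auto-grading",  # hypothetical path
    data=json.dumps({"language": "ar", "exam_id": "exam-123"}).encode(),  # hypothetical fields
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# The request object is fully built but not sent (no urlopen call).
print(req.get_header("Authorization"))  # Bearer YOUR_ENTITY_API_KEY
```

Any HTTP client in any language works the same way: one base URL, one bearer token, JSON in and out.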
Ajman University is already running the platform through their Moodle plugin, deployed in collaboration with PanWorld Education. The same API powers integrations with iTest, Canvas-style LMSes, and any custom backend that can speak HTTP.
Per-feature quotas, never overspend
Quotas are tracked in four independent dimensions: question generation, auto-grading questions, essay marking, and bubble-sheet marking. Buy what you need; existing customers using only question generation see zero change to their existing flows. New grading features are gated per package — opt in when you're ready.
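A client-side mirror of this model might look like the sketch below. The four dimension names follow the text; the accounting logic itself is an assumption for illustration, since the platform tracks authoritative quotas server-side.

```python
# Sketch: four independent quota dimensions, as described above.
# The bookkeeping here is illustrative; the platform's server-side
# accounting is authoritative.
from dataclasses import dataclass, field

DIMENSIONS = (
    "question_generation",
    "auto_grading",
    "essay_marking",
    "bubble_sheet_marking",
)


@dataclass
class QuotaTracker:
    remaining: dict = field(default_factory=lambda: {d: 0 for d in DIMENSIONS})

    def grant(self, dimension: str, units: int) -> None:
        """Top up one dimension, e.g. after a package purchase."""
        self.remaining[dimension] += units

    def consume(self, dimension: str, units: int = 1) -> bool:
        """Spend units from one dimension only; the others are unaffected."""
        if self.remaining[dimension] < units:
            return False  # out of quota for this feature
        self.remaining[dimension] -= units
        return True


tracker = QuotaTracker()
tracker.grant("essay_marking", 100)
print(tracker.consume("essay_marking"))        # True: quota available
print(tracker.consume("bubble_sheet_marking"))  # False: never purchased
```

Because each dimension is an independent counter, exhausting essay-marking quota can never block question generation, which is what lets existing question-generation customers keep their flows unchanged.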
Available today
API and docs live at aiquestions.intrazero.com. Existing customers can request grading access from their account manager; new institutions can contact our team to enable it on their package.
