Marking Assist generates rubric-aligned feedback drafts in seconds. You review, edit, and sign off every decision. Your expertise ensures quality and compliance.
Designed around the five Ofqual principles for AI in marking — safety, transparency, fairness, accountability, and contestability. Built and tested by practising UK lecturers.
Free starter credit on signup · No card required
Brief overview — what it is and why it matters
60%
Average time saved per submission
10+
Years academic expertise behind the design
5
Ofqual principles built into every workflow
100%
Human sign-off — always
Designed around the five Ofqual principles for AI in marking
Safety & Robustness
Deterministic prompt pipeline — the same rubric always produces consistent, fair output.
Transparency
Every AI draft is labelled. Grader notes are kept separate from student-facing feedback.
Fairness
Rubric-anchored scoring removes subjective drift and marking fatigue across large cohorts.
Accountability
Human marker reviews, edits, and signs off every piece of feedback before it reaches a student.
Contestability
Full feedback versioning and audit trail supports re-marking requests and academic appeals.
The compliance story, visible in the UI
Every piece of feedback moves through three visually distinct zones. Your private notes are always amber. AI output is always Oxford blue. Human-approved content is always emerald. The colour system encodes the compliance story into the interface itself.
Amber — Your Notes
Private grader observations, never shown to students
Blue — AI Draft
Always labelled as AI output, always awaiting your review
Green — Approved
Human sign-off confirmed — Ofqual compliant
Amber zone — your private grader notes
Strong methodology section — good grasp of mixed-methods rationale.
Literature review lacks recent sources (post-2020).
Grade: 62 — upper second.
1. You type your observations
Live demonstration — click zone tabs to explore
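The three-zone rule above boils down to a small visibility mapping. The sketch below is purely illustrative: the zone labels, colours, and visibility rules come from this page, but the code structure is hypothetical, not Marking Assist's actual implementation.

```python
# Illustrative sketch of the three-zone colour system described above.
# The labels, colours, and visibility rules come from this page; the code
# itself is hypothetical, not Marking Assist's actual implementation.
from dataclasses import dataclass


@dataclass(frozen=True)
class Zone:
    label: str
    colour: str
    student_visible: bool  # may this content ever reach a student?


ZONES = {
    "grader_note": Zone("Your Notes", "amber", student_visible=False),
    "ai_draft": Zone("AI Draft", "oxford blue", student_visible=False),
    "approved": Zone("Approved", "emerald", student_visible=True),
}


def can_release(zone_key: str) -> bool:
    """Only human-approved (emerald) content may be shared with a student."""
    return ZONES[zone_key].student_visible
```

The invariant is the point: nothing in the amber or blue zones can ever be released, so the only path to a student runs through explicit human approval.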
Why most AI marking tools get it wrong: automation bias
Research shows educators who see an AI grade first tend to accept it uncritically — even when it is wrong. Marking Assist shows you the AI's reasoning before its conclusion, in a private Grader Note. Scepticism is a feature, not a bug.
The workflow
Six steps that protect academic judgment at every point.
Step 1 — Create the assignment and add context

What happens
Name your assignment, add a description, and upload supporting context files — rubric, marking guide, assignment brief, or sample submissions at different grade levels.
Why it matters
Setting context once anchors every submission in the batch. The AI reads all your context materials before touching a single piece of student work.
Marker thinking
"I shouldn't have to paste the rubric into every feedback prompt manually."
Step 2 — Upload student submissions

What happens
Drag-and-drop PDF, Word, PowerPoint, or plain text files. Optionally add a pre-assigned grade, Turnitin similarity score, and marker notes per submission.
Why it matters
All submission metadata is fed directly into the AI prompt. The more context you provide, the more calibrated the feedback.
Marker thinking
"The AI should know if I've already decided the grade — then it should justify that grade, not second-guess me."
Step 3 — Generate the grader note and feedback draft

What happens
The AI reads the submission against your full context and produces two outputs: a private Grader Note (with grade recommendation + justification) and a student-facing Feedback Draft.
Why it matters
The grader note is the safeguard against automation bias — you see the AI's reasoning before you see its conclusion, keeping academic judgment in your hands.
Marker thinking
"I want a thinking partner, not a rubber stamp. Show me why you recommend that grade."
Step 4 — Review the grader note first

What happens
Read the Grader Note, critically assess the AI recommendation, then move to the student feedback. The system never finalises anything without your explicit action.
Why it matters
This is the human-in-the-loop step that Ofqual requires. Research shows sceptical review of AI output — not blind acceptance — produces the best grading outcomes.
Marker thinking
"I should not simply accept what the AI says. My academic judgment is what matters."
Step 5 — Edit and personalise the feedback

What happens
Edit the feedback inline. Insert standardised phrases from your Comment Bank. Add personal observations. Regenerate sections if needed.
Why it matters
The AI removes the blank-page burden. You add the human sensitivity, disciplinary nuance, and relational context that algorithms cannot replicate.
Marker thinking
"The draft is good but I want to reference something specific the student wrote — and add an encouraging note."
Step 6 — Export with a full audit trail

What happens
Copy to clipboard, download as plain text, or use the student reference link. A full version history and edit audit trail is stored automatically.
Why it matters
If a student queries their feedback or requests a re-mark, you have a complete record of every AI draft and every human edit — protecting both you and the student.
Marker thinking
"I need to be able to defend this grade if challenged. I need a paper trail."
Evidence from peer-reviewed research
Marking Assist is designed around published findings on effective, compliant AI integration in higher education assessment.
“AI can provide high-quality, real-time, and personalized feedback, fostering positive emotions and enhancing student motivation.”
Systematic Review of AI in Higher Education Assessment, 2026
“A hybrid approach — where AI provides initial drafts while human instructors review and refine — optimises the learning experience by combining technological efficiency with human sensitivity.”
AI-Assisted Marking in Higher Education, 2026
“Skepticism toward AI is a protective factor; participants skeptical of AI detect errors more reliably and achieve higher accuracy.”
Automation Bias Research in AI-Assisted Assessment, 2025
Your time is valuable
Based on published UK HE marking time benchmarks for undergraduate and postgraduate work.
Reading time is the same — savings come from AI-assisted feedback writing
Reading benchmark: 5–10 min per submission
90 total submissions per term
Manual Marking

| Step | Time |
|---|---|
| Read submission | 5–10 min |
| Plan feedback structure | 1–2 min |
| Write feedback (200–250 words) | 7–11 min |
| Assign grade & enter into VLE | 2–3 min |
| Total | 15–25 min/submission |

30.0 hrs/term
With Marking Assist

| Step | Time |
|---|---|
| Read submission | 5–10 min |
| Review AI draft & rubric check | 1–2 min |
| Edit, personalise & enrich | 0.5–1 min |
| Assign grade & copy to VLE | 1 min |
| Total | 7–13 min/submission |

15.0 hrs/term
50%
Time saved
15.0h
Hours per term
45.0h
Hours per year
5.6d
Work days / year
Reading time is preserved — every marker still reads every submission. Time reclaimed comes from the feedback drafting and writing phase.
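The headline figures above can be checked with a short back-of-envelope calculation. The sketch assumes the midpoint of each published range, 90 submissions per term, three marking terms per year, and an 8-hour working day (the figures implied by the summary stats).

```python
# Back-of-envelope check of the time-savings figures quoted above.
# Assumptions: midpoint of each published range, 90 submissions per term,
# three marking terms per year, 8-hour working day.
SUBMISSIONS_PER_TERM = 90
TERMS_PER_YEAR = 3
HOURS_PER_DAY = 8

manual_min = (15 + 25) / 2        # 20 min/submission, manual
assisted_min = (7 + 13) / 2       # 10 min/submission, with Marking Assist

manual_hrs = manual_min * SUBMISSIONS_PER_TERM / 60       # hrs/term, manual
assisted_hrs = assisted_min * SUBMISSIONS_PER_TERM / 60   # hrs/term, assisted

saved_per_term = manual_hrs - assisted_hrs
saved_per_year = saved_per_term * TERMS_PER_YEAR
saved_days = saved_per_year / HOURS_PER_DAY
pct_saved = 100 * saved_per_term / manual_hrs

print(f"{saved_per_term:.1f} hrs/term, {saved_per_year:.1f} hrs/year, "
      f"{saved_days:.1f} days/year, {pct_saved:.0f}% saved")
# 15.0 hrs/term, 45.0 hrs/year, 5.6 days/year, 50% saved
```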
Start saving time — it's free

Full walkthrough
A narrated 5-minute tour covering the complete workflow — from creating an assignment to approving and sharing feedback.
Watch Full Walkthrough
Honest comparison
AI alone fails Ofqual. Manual alone fails scale. The hybrid model delivers both.
| Feature | Manual Marking | AI-Only | Marking Assist (Human + AI Hybrid) |
|---|---|---|---|
| Feedback speed | 15–35 min/submission | Seconds (unreviewed) | Seconds + 3–5 min review |
| Rubric alignment | Relies on marker memory | Automated but opaque | Automated + human-verified |
| Consistency across cohort | Variable (fatigue, drift) | High but unchecked | High + audited |
| Academic judgment | Full human judgment | None — AI decides | Full human judgment preserved |
| Ofqual compliant | ✓ (slow) | ✗ (AI cannot be sole marker) | ✓ Human signs off all decisions |
| Automation bias risk | None | Very high | Mitigated by grader note first |
| Audit trail | Only if logged manually | None typically | Full version history built in |
| Student appeal support | Possible | Difficult to defend | Complete evidence trail |
Platform capabilities
The AI reads your rubric, assignment brief, marking guide, and sample submissions — not just the student work — before generating a single word of feedback.
- PDF, Word (DOCX/DOC), PowerPoint (PPTX/PPT), and plain text — processed faithfully with full content extraction.
- Upload a rubric, assignment brief, marking guide, and up to three graded sample submissions. The AI learns your standard before it grades.
- Grader notes are always shown before AI conclusions. You see the reasoning, not just the output — keeping critical judgment in human hands.
- Set High School, Undergraduate, or Postgraduate level. Choose a disciplinary mode — STEM, Humanities, Creative Arts, or General.
- Per-assignment dashboards showing grade distribution, Turnitin risk segments, quality band breakdown, and at-risk student flags.
- Upload an entire cohort's submissions at once. Each file is processed as an individual submission, with AI warnings when student identifiers are missing.
- 150+ system defaults across six quality-level categories. Add your own phrases and insert them into Original Notes with one click.
- All files stored in private Supabase Storage with per-user Row Level Security. No student data is used to train any AI model.
- No subscriptions. Buy exactly what you need. 1 credit = 1 AI generation. Unused credits never expire.
Simple pricing
Buy credits when you need them. 1 credit = 1 AI feedback generation. Credits never expire.
| Tier | Price | Credits |
|---|---|---|
| Starter | £5 | 10 |
| Standard | £20 | 50 |
| Professional | £60 | 200 |
| Institution | £250 | 1,000 |
All plans include a free starter credit on account creation. No card required to sign up.
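For a quick sense of the volume discount, the effective cost per credit for each tier works out as follows (prices and credit counts as listed above; the code itself is just a worked calculation, not part of the product).

```python
# Effective cost per credit for each tier listed above: (price_gbp, credits).
tiers = {
    "Starter": (5.00, 10),
    "Standard": (20.00, 50),
    "Professional": (60.00, 200),
    "Institution": (250.00, 1_000),
}

per_credit = {name: price / credits for name, (price, credits) in tiers.items()}

for name, cost in per_credit.items():
    print(f"{name}: £{cost:.2f} per credit")
# Starter £0.50, Standard £0.40, Professional £0.30, Institution £0.25
```

Larger packs halve the per-generation cost relative to the Starter tier, which is the shape of discount the "buy exactly what you need" model implies.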
Built by academics. Tested by lecturers. Designed around the 2026 systematic review of AI in higher education assessment.
AI never makes a final summative decision. Every grade and feedback document requires human sign-off.
Student files are stored in private, per-user storage. No data is shared with third-party AI trainers.
10–15 hours per staff member per week recovered in published UK college pilots using equivalent AI-marking workflows.