Built for Higher Education — Developed by Academics

AI-assisted marking that preserves your academic judgment

Marking Assist generates rubric-aligned feedback drafts in seconds. You review, edit, and sign off every decision. Your expertise ensures quality and compliance.

Designed around the five Ofqual principles for AI in marking — safety, transparency, fairness, accountability, and contestability. Built and tested by practising UK lecturers.

Free starter credit on signup · No card required

[Overview video, 1:15 — a brief overview of what it is and why it matters, showing the three zones: Your Notes, AI Draft, Approved]

60% — Average time saved per submission

10+ — Years of academic expertise behind the design

5 — Ofqual principles built into every workflow

100% — Human sign-off, always

Designed around the five Ofqual principles for AI in marking

Compliance is not an afterthought — it is the architecture

Safety & Robustness

Deterministic prompt pipeline — the same rubric always produces consistent, fair output.

Transparency

Every AI draft is labelled. Grader notes are kept separate from student-facing feedback.

Fairness

Rubric-anchored scoring reduces subjective drift and the effects of marking fatigue across large cohorts.

Accountability

Human marker reviews, edits, and signs off every piece of feedback before it reaches a student.

Contestability

Full feedback versioning and audit trail supports re-marking requests and academic appeals.
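The "deterministic prompt pipeline" idea can be illustrated in a few lines: if the prompt is assembled from fixed parts in a fixed order, identical inputs always produce byte-identical prompts, which can be fingerprinted for the audit trail. A minimal sketch; the function names (`build_prompt`, `prompt_fingerprint`) are illustrative, not the product's actual API:

```python
import hashlib

def build_prompt(rubric: str, submission: str) -> str:
    """Assemble the marking prompt from fixed, ordered parts.

    Because the parts and their order never vary, the same rubric and
    submission always yield identical prompt text.
    """
    parts = [
        "== RUBRIC ==", rubric.strip(),
        "== SUBMISSION ==", submission.strip(),
        "== TASK ==", "Draft rubric-aligned feedback.",
    ]
    return "\n".join(parts)

def prompt_fingerprint(prompt: str) -> str:
    """Stable hash of the prompt, useful in an audit log."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:12]

p1 = build_prompt("Criteria: argument, evidence", "Essay text...")
p2 = build_prompt("Criteria: argument, evidence", "Essay text...")
assert prompt_fingerprint(p1) == prompt_fingerprint(p2)
```

The fingerprint gives the contestability principle something concrete to hold on to: a re-mark can confirm the same rubric and submission were in play.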

The compliance story, visible in the UI

Three zones. Three colours. Amber, blue, green.

Every piece of feedback moves through three visually distinct zones. Your private notes are always amber. AI output is always Oxford blue. Human-approved content is always emerald. The colour system encodes the compliance story into the interface itself.

Amber — Your Notes

Private grader observations, never shown to students

Blue — AI Draft

Always labelled as AI output, always awaiting your review

Green — Approved

Human sign-off confirmed — Ofqual compliant
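The three-zone model behaves like a small forward-only state machine: feedback can only move from notes to draft to approved, and the last transition requires explicit human sign-off. A sketch under assumed names (`Zone`, `advance`), not the product's actual code:

```python
from enum import Enum

class Zone(Enum):
    NOTES = "amber"      # private grader observations
    AI_DRAFT = "blue"    # labelled AI output, awaiting review
    APPROVED = "green"   # human sign-off confirmed

# Feedback only ever moves forward: notes -> draft -> approved.
NEXT = {Zone.NOTES: Zone.AI_DRAFT, Zone.AI_DRAFT: Zone.APPROVED}

def advance(zone: Zone, human_signed_off: bool = False) -> Zone:
    """Move feedback to the next zone; approval needs explicit sign-off."""
    if zone is Zone.AI_DRAFT and not human_signed_off:
        raise PermissionError("AI output cannot be approved without a human")
    return NEXT.get(zone, zone)

assert advance(Zone.NOTES) is Zone.AI_DRAFT
assert advance(Zone.AI_DRAFT, human_signed_off=True) is Zone.APPROVED
```

Encoding the sign-off requirement in the transition itself, rather than in the UI, is what makes "100% human sign-off" a structural guarantee rather than a convention.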

Your Notes

Amber zone — your private grader notes. For example:

  • Strong methodology section — good grasp of mixed-methods rationale.
  • Literature review lacks recent sources (post-2020).
  • Grade: 62 — upper second.

Step 1 — you type your observations.


Why most AI marking tools get it wrong: automation bias

Research shows educators who see an AI grade first tend to accept it uncritically — even when it is wrong. Marking Assist shows you the AI's reasoning before its conclusion, in a private Grader Note. Scepticism is a feature, not a bug.

See it in action

The workflow

How Marking Assist works

Six steps that protect academic judgment at every point.

01

Create an Assignment Batch

What happens

Name your assignment, add a description, and upload supporting context files — rubric, marking guide, assignment brief, or sample submissions at different grade levels.

Why it matters

Setting context once anchors every submission in the batch. The AI reads all your context materials before touching a single piece of student work.

Marker thinking

"I shouldn't have to paste the rubric into every feedback prompt manually."
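The "set context once" idea from this step can be sketched as a simple batch object whose context files are shared by every submission. The class and field names here are hypothetical, used only to illustrate the shape of the workflow:

```python
from dataclasses import dataclass, field

@dataclass
class AssignmentBatch:
    """Context is attached once per batch and reused for every submission."""
    name: str
    description: str
    context_files: list[str] = field(default_factory=list)  # rubric, brief, samples

batch = AssignmentBatch(
    name="Research Methods Essay",
    description="2,000-word mixed-methods critique",
    context_files=["rubric.pdf", "marking_guide.docx", "sample_upper_second.pdf"],
)

# Every submission in the batch is marked against the same context,
# so the rubric never has to be pasted into an individual prompt.
assert "rubric.pdf" in batch.context_files
```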

02

Upload Student Submissions

What happens

Drag-and-drop PDF, Word, PowerPoint, or plain text files. Optionally add a pre-assigned grade, Turnitin similarity score, and marker notes per submission.

Why it matters

All submission metadata is fed directly into the AI prompt. The more context you provide, the more calibrated the feedback.

Marker thinking

"The AI should know if I've already decided the grade — then it should justify that grade, not second-guess me."
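The optional per-submission metadata described above can be modelled as a record with optional fields, where a pre-assigned grade flips the AI from recommending a grade to justifying yours. A sketch with hypothetical names, not the product's schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Submission:
    """Per-submission metadata fed into the AI prompt (illustrative)."""
    filename: str
    pre_assigned_grade: Optional[int] = None    # if set, AI justifies it
    turnitin_similarity: Optional[float] = None  # 0.0 to 1.0
    marker_notes: str = ""

s = Submission("essay_1234.pdf", pre_assigned_grade=62, turnitin_similarity=0.08)

# A pre-assigned grade switches the AI from recommending to justifying:
mode = "justify" if s.pre_assigned_grade is not None else "recommend"
assert mode == "justify"
```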

03

AI Generates a First Draft

What happens

The AI reads the submission against your full context and produces two outputs: a private Grader Note (with grade recommendation + justification) and a student-facing Feedback Draft.

Why it matters

The grader note is the safeguard against automation bias — you see the AI's reasoning before you see its conclusion, keeping academic judgment in your hands.

Marker thinking

"I want a thinking partner, not a rubber stamp. Show me why you recommend that grade."
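The two-output design, and the ordering that guards against automation bias, can be sketched as follows. The names (`GraderNote`, `FeedbackDraft`, `present_to_marker`) are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class GraderNote:
    reasoning: str             # shown to the marker FIRST
    grade_recommendation: int  # shown only after the reasoning

@dataclass
class FeedbackDraft:
    student_facing_text: str   # kept separate from the private grader note

def present_to_marker(note: GraderNote) -> list[str]:
    """Order matters: reasoning before conclusion mitigates automation bias."""
    return [note.reasoning, f"Recommended grade: {note.grade_recommendation}"]

panels = present_to_marker(
    GraderNote("Strong methodology; literature review lacks recent sources.", 62)
)
assert panels[0].startswith("Strong methodology")  # reasoning comes first
```

Putting the reasoning in panel one means the marker has already engaged critically before the number appears.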

04

You Review with Full Oversight

What happens

Read the Grader Note, critically assess the AI recommendation, then move to the student feedback. The system never finalises anything without your explicit action.

Why it matters

This is the human-in-the-loop step that Ofqual requires. Research shows sceptical review of AI output — not blind acceptance — produces the best grading outcomes.

Marker thinking

"I should not simply accept what the AI says. My academic judgment is what matters."

05

Edit, Enrich, and Personalise

What happens

Edit the feedback inline. Insert standardised phrases from your Comment Bank. Add personal observations. Regenerate sections if needed.

Why it matters

The AI removes the blank-page burden. You add the human sensitivity, disciplinary nuance, and relational context that algorithms cannot replicate.

Marker thinking

"The draft is good but I want to reference something specific the student wrote — and add an encouraging note."

06

Share with Confidence

What happens

Copy to clipboard, download as plain text, or use the student reference link. A full version history and edit audit trail is stored automatically.

Why it matters

If a student queries their feedback or requests a re-mark, you have a complete record of every AI draft and every human edit — protecting both you and the student.

Marker thinking

"I need to be able to defend this grade if challenged. I need a paper trail."
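The audit trail from this step is, at heart, an append-only version log: every AI draft and every human edit is recorded with an author and a timestamp. A minimal sketch under assumed names, not the stored schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeedbackHistory:
    """Append-only record of every AI draft and human edit (illustrative)."""
    versions: list[tuple[str, str, str]] = field(default_factory=list)

    def record(self, author: str, text: str) -> None:
        stamp = datetime.now(timezone.utc).isoformat()
        self.versions.append((stamp, author, text))

history = FeedbackHistory()
history.record("ai", "Draft: solid methodology, dated sources.")
history.record("marker", "Edited: solid methodology; please cite post-2020 work.")

# If the grade is challenged, the full chain of drafts and edits is available:
assert [author for _, author, _ in history.versions] == ["ai", "marker"]
```

Because entries are only ever appended, the record shows not just the final feedback but how it got there, which is what a re-mark or appeal panel needs.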

Evidence from peer-reviewed research

What the research says

Marking Assist is designed around published findings on effective, compliant AI integration in higher education assessment.

AI can provide high-quality, real-time, and personalized feedback, fostering positive emotions and enhancing student motivation.

Systematic Review of AI in Higher Education Assessment, 2026

A hybrid approach — where AI provides initial drafts while human instructors review and refine — optimises the learning experience by combining technological efficiency with human sensitivity.

AI-Assisted Marking in Higher Education, 2026

Skepticism toward AI is a protective factor; participants skeptical of AI detect errors more reliably and achieve higher accuracy.

Automation Bias Research in AI-Assisted Assessment, 2025

Your time is valuable

Calculate your time savings

Based on published UK HE marking time benchmarks for undergraduate and postgraduate work.

ROI Calculator — Your Time Savings

Reading time is the same — savings come from AI-assisted feedback writing

Your Scenario

Reading benchmark: 5–10 min per submission

90 total submissions per term (30 × 3)

Per-Submission Breakdown

Manual Marking

Read submission

5–10 min

Plan feedback structure

1–2 min

Write feedback (200–250 words)

7–11 min

Assign grade & enter into VLE

2–3 min

15–25 min/submission

30.0 hrs/term

With Marking Assist

Read submission

5–10 min

Review AI draft & rubric check

1–2 min

Edit, personalise & enrich

0.5–1 min

Assign grade & copy to VLE

1 min

7–13 min/submission

15.0 hrs/term

Your Savings

50% — Time saved

15.0h — Hours per term

45.0h — Hours per year

5.6d — Work days per year

Reading time is preserved — every marker still reads every submission. Time reclaimed comes from the feedback drafting and writing phase.
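The calculator's headline figures follow from simple arithmetic on the totals above. A sketch reproducing the numbers, where three terms per year and an eight-hour working day are assumptions inferred from the displayed results:

```python
# Effective per-submission averages implied by the totals above:
# 30.0 hrs over 90 submissions -> 20 min manual; 15.0 hrs -> 10 min assisted.
submissions = 90
manual_min, assisted_min = 20, 10

manual_hours = manual_min * submissions / 60       # 30.0 hrs/term
assisted_hours = assisted_min * submissions / 60   # 15.0 hrs/term
saved_per_term = manual_hours - assisted_hours     # 15.0 hrs
pct_saved = saved_per_term / manual_hours * 100    # 50.0%
saved_per_year = saved_per_term * 3                # 45.0 hrs (3 terms assumed)
work_days = saved_per_year / 8                     # 5.625 -> 5.6 days (8h day)

assert (manual_hours, assisted_hours) == (30.0, 15.0)
assert pct_saved == 50.0
assert round(work_days, 1) == 5.6
```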

Start saving time — it's free

Full walkthrough

See It In Action

A narrated 5-minute tour covering the complete workflow — from creating an assignment to approving and sharing feedback.

Watch Full Walkthrough

5 min · 0:00 Creating an Assignment Batch · 0:52 Uploading Student Submissions · 1:38 AI Feedback Generation · + 3 more

Honest comparison

Manual vs AI-only vs the hybrid approach

AI alone fails Ofqual. Manual alone fails scale. The hybrid model delivers both.

Feature | Manual Marking | AI-Only | Marking Assist (Human + AI Hybrid)
Feedback speed | 15–35 min/submission | Seconds (unreviewed) | Seconds + 3–5 min review
Rubric alignment | Relies on marker memory | Automated but opaque | Automated + human-verified
Consistency across cohort | Variable (fatigue, drift) | High but unchecked | High + audited
Academic judgment | Full human judgment | None — AI decides | Full human judgment preserved
Ofqual compliant | ✓ (slow) | ✗ (AI cannot be sole marker) | ✓ Human signs off all decisions
Automation bias risk | None | Very high | Mitigated by grader-note-first design
Audit trail | Only if logged manually | None typically | Full version history built in
Student appeal support | Possible | Difficult to defend | Complete evidence trail

Platform capabilities

Everything you need for responsible AI marking

Core AI

Context-Aware AI Engine

The AI reads your rubric, assignment brief, marking guide, and sample submissions — not just the student work — before generating a single word of feedback.

Versatility

All Submission Formats

PDF, Word (DOCX/DOC), PowerPoint (PPTX/PPT), and plain text — processed faithfully with full content extraction.

Calibration

Multi-Document Context

Upload a rubric, assignment brief, marking guide, and up to three graded sample submissions. The AI learns your standard before it grades.

Best Practice

Automation Bias Safeguard

Grader notes are always shown before AI conclusions. You see the reasoning, not just the output — keeping critical judgment in human hands.

Contextual AI

Education Level & Discipline

Set High School, Undergraduate, or Postgraduate level. Choose a disciplinary mode — STEM, Humanities, Creative Arts, or General.

Insights

Cohort Analytics

Per-assignment dashboards showing grade distribution, Turnitin risk segments, quality band breakdown, and at-risk student flags.

Efficiency

Batch Upload

Upload an entire cohort's submissions at once. Each file is processed as an individual submission, with AI warnings when student identifiers are missing.

Efficiency

Comment Bank

150+ system defaults across six quality-level categories. Add your own phrases and insert them into Original Notes with one click.

GDPR

Privacy-by-Design

All files stored in private Supabase Storage with per-user Row Level Security. No student data is used to train any AI model.

Pricing

Pay-As-You-Go Credits

No subscriptions. Buy exactly what you need. 1 credit = 1 AI generation. Unused credits never expire.

Simple pricing

No subscriptions. No surprises.

Buy credits when you need them. 1 credit = 1 AI feedback generation. Credits never expire.

Starter

£5

10 credits

  • 10 AI feedback generations
  • All file formats
  • Comment bank
  • Version history
  • Email support
Get started

Standard

£20

50 credits

  • 50 AI feedback generations
  • All file formats
  • Multi-document context
  • Full audit trail
  • Priority support
Get started
Most Popular

Professional

£60

200 credits

  • 200 AI feedback generations
  • Sample submission calibration
  • Grade recommendation notes
  • Batch processing
  • Priority support
Get started

Institution

£250

1,000 credits

  • 1,000 AI feedback generations
  • Department-wide comment banks
  • Admin analytics dashboard
  • Bulk upload tools
  • Dedicated support
Get started

All plans include a free starter credit on account creation. No card required to sign up.
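Since 1 credit equals 1 AI generation, the effective price per generation is easy to check from the tiers above, and falls with pack size:

```python
# (price in GBP, credits) for each tier listed above
tiers = {
    "Starter": (5, 10),
    "Standard": (20, 50),
    "Professional": (60, 200),
    "Institution": (250, 1000),
}

# Pence per credit, i.e. pence per AI feedback generation
per_credit_pence = {
    name: price * 100 / credits for name, (price, credits) in tiers.items()
}

assert per_credit_pence == {
    "Starter": 50.0, "Standard": 40.0,
    "Professional": 30.0, "Institution": 25.0,
}
```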

AI that earns institutional trust

Built by academics. Tested by lecturers. Designed around the 2026 systematic review of AI in higher education assessment.

Human-in-the-loop

AI never makes a final summative decision. Every grade and feedback document requires human sign-off.

GDPR compliant

Student files are stored in private, per-user storage. No data is shared with third-party AI trainers.

Proven time savings

10–15 hours per staff member per week recovered in published UK college pilots using equivalent AI-marking workflows.

Create your free account