Human + AI, better together.

From rubric to report. Conversational, written and Excel-based assessment. AI grading on a transparent rules engine. Multi-modal feedback for every learner. Governance you select per assessment.

Read the deep dives
[Screenshot: Assess for Learning platform showing AI Copilot scoring, with the learner submission and AI grading review side by side]
[Screenshot: Train the Grader Mode interface]

Pre-launch

Train the Grader Mode

Before the first script is marked, build alignment across your grading team. Generate synthetic submissions for each grader, practise applying the criteria, and review readiness with alignment reports.

  • Create a bank of synthetic attempts that reflect your rubric
  • Run calibration rounds; capture rationales and edge cases
  • Sign off when variance is within tolerance
What's included:
  • Synthetic submission generator
  • Calibration rounds & variance tracking
  • Readiness & alignment report export
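The sign-off step above can be sketched as a simple variance check across graders. The score structure and the `TOLERANCE` value here are illustrative assumptions, not the platform's actual API:

```python
from statistics import pstdev

TOLERANCE = 0.5  # illustrative: maximum allowed spread, in rubric points


def calibration_report(scores_by_submission):
    """scores_by_submission: {submission_id: [one score per grader]}.
    Returns the per-submission spread and an overall readiness flag."""
    spreads = {sub_id: pstdev(scores)
               for sub_id, scores in scores_by_submission.items()}
    ready = all(spread <= TOLERANCE for spread in spreads.values())
    return spreads, ready


# Three graders scoring two synthetic submissions in a calibration round
spreads, ready = calibration_report({
    "synthetic-01": [7.0, 7.5, 7.0],   # graders agree closely
    "synthetic-02": [6.0, 8.0, 5.0],   # graders diverge: not ready to sign off
})
```

In this sketch the cohort is not signed off until every synthetic submission's spread falls inside the tolerance, mirroring the "variance inside tolerances" gate described above.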
[Screenshot: AI Copilot grading comparison view]

During grading

AI Copilot

Choose the flow that fits your governance. Let AI grade first for speed, then review; or grade internally first, then compare with AI. Mix and match outcomes and comments.

  • Side-by-side comparison of human and AI outcomes
  • Selective adoption of scores and feedback
  • Audit trail with who/what/when for each decision
Controls:
  • Policy-based thresholds for auto-accept vs human review
  • Role-based permissions and lock-step workflows
  • Exportable evidence for QA and appeals
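The policy-based threshold control can be sketched as a small routing function. The threshold values, parameter names and confidence scale are assumptions for illustration, not the platform's configuration schema:

```python
# Hypothetical acceptance policy: route each AI-graded item either to
# auto-accept or to human review, based on AI confidence and on how far
# the AI outcome diverges from a human grader's (when one exists).

AUTO_ACCEPT_CONFIDENCE = 0.90   # illustrative threshold
MAX_SCORE_DIVERGENCE = 1.0      # illustrative tolerance, in rubric points


def route(ai_score, ai_confidence, human_score=None):
    """Return 'auto-accept' or 'human-review' for one grading decision."""
    if human_score is not None and abs(ai_score - human_score) > MAX_SCORE_DIVERGENCE:
        return "human-review"   # human and AI outcomes disagree beyond tolerance
    if ai_confidence < AUTO_ACCEPT_CONFIDENCE:
        return "human-review"   # AI not confident enough to stand alone
    return "auto-accept"


route(8.5, 0.95)                    # confident, no conflict: auto-accept
route(8.5, 0.95, human_score=6.0)   # scores diverge: escalate to human review
```

Either flow described above fits this shape: AI-first grading feeds `human_score=None`, while grade-internally-first supplies both scores and lets divergence drive the comparison.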
[Screenshot: Formula Validation checking spreadsheet formulas]

Spreadsheet assessments

Formula Validation

Automatically check spreadsheet formulas alongside results to provide richer, fairer feedback.

  • Rules > Actions toggle to enable checks
  • Support for common Excel functions and ranges
  • Feedback that differentiates method vs outcome
Why it matters:
  • Credit methodology as well as final answers
  • Reduce manual rework and back-and-forth
  • Improve consistency across graders
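Separating method from outcome can be sketched as two independent checks on a cell: one on the formula text, one on the computed value. The regex-based matching and feedback strings are assumptions for the sketch, not the product's rules engine:

```python
import re


def check_cell(formula, value, expected_formula_pattern, expected_value, tol=1e-9):
    """Judge one spreadsheet cell on method and outcome separately.

    formula: the learner's formula string, e.g. "=SUM(B2:B13)"
    value: the value the formula produced
    expected_formula_pattern: regex describing an acceptable method
    """
    method_ok = bool(re.fullmatch(expected_formula_pattern,
                                  formula.replace(" ", ""), re.IGNORECASE))
    outcome_ok = abs(value - expected_value) <= tol
    if method_ok and not outcome_ok:
        feedback = "Right approach; check the inputs feeding this formula."
    elif outcome_ok and not method_ok:
        feedback = "Correct result, but not via the expected method (e.g. a hard-coded value)."
    elif method_ok:
        feedback = "Method and result both correct."
    else:
        feedback = "Revisit both the formula and the result."
    return method_ok, outcome_ok, feedback


# A learner who used the right SUM but whose inputs were stale:
method_ok, outcome_ok, feedback = check_cell(
    "=SUM(B2:B13)", 4120.0, r"=SUM\(B2:B13\)", 4300.0)
```

This is what lets feedback credit methodology even when the final answer is off, and flag a correct answer reached by typing the number in by hand.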
[Screenshot: competency framework diagnostics heat-map]

Competency frameworks

Diagnostics Copilot

Connect grading outcomes directly to your competency framework. Model it once and the platform auto-tags each new assessment so results roll up cleanly to competencies.

  • Heat-map each learner against domains and levels
  • Highlight strengths and focus areas with targeted feedback
  • Deliver personalised outcomes in the candidate's report
What's included:
  • Framework modeller for domains, levels and descriptors
  • Automatic tagging of items & outcomes across assessments
  • Cohort and item heat-maps with report integration
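The roll-up from tagged items to a competency heat-map can be sketched as a simple aggregation. The tag names, item IDs and 0-to-1 scoring are assumptions for illustration, not the platform's framework schema:

```python
from collections import defaultdict

# Output of auto-tagging: each assessment item mapped to a competency domain
ITEM_TAGS = {
    "q1": "data-analysis",
    "q2": "data-analysis",
    "q3": "communication",
}


def heatmap_row(item_scores):
    """item_scores: {item_id: score in [0, 1]} for one learner.
    Returns {domain: mean score}, i.e. one row of the cohort heat-map."""
    totals, counts = defaultdict(float), defaultdict(int)
    for item, score in item_scores.items():
        domain = ITEM_TAGS[item]
        totals[domain] += score
        counts[domain] += 1
    return {domain: totals[domain] / counts[domain] for domain in totals}


row = heatmap_row({"q1": 1.0, "q2": 0.5, "q3": 0.8})
```

Because the framework is modelled once, every new assessment's items feed the same mapping, which is what lets results roll up cleanly across assessments.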
[Screenshot: Examiner's Report showing a cohort summary]

Post-session

Examiner's Report

Turn results into meaningful insight in minutes. Generate an overall performance summary, highlight strengths and weaknesses, and drill into every evaluation.

  • One-click cohort summary with visuals
  • Exportable to PDF/CSV
  • Share with educators and learners
What you get:
  • Summary trends & variance
  • Item-level analysis and exemplars
  • Recommendations for improvement

More platform capabilities

Capabilities that go beyond the headline features, each with a Principal Consultant deep-dive behind it.

Security & Governance

  • Three-tier governance selected per assessment
  • Transparent rules engine — every grading decision auditable
  • AI optional, configured per assessment
  • ISO 42001 / 23894 aligned, EU AI Act ready
  • AERA, APA, NCME and NCCA mapped
  • SSO (SAML/OIDC), RBAC, regional residency

Deployment & Integration

  • SCORM export to Moodle, Cornerstone, Docebo, Workday, Canvas, Blackboard
  • HTML deployment for non-SCORM environments
  • Claude on Amazon Bedrock — prompts stay in your AWS perimeter
  • API and Salesforce connector for custom workflows
  • Cloud with regional data residency, private tenant options

Insights from our Principal Consultants

Beyond the platform itself: the thinking behind the work, spanning learner experience, governance authority, and credentialing strategy.

Assess for Learning FAQs

Does the AI replace human graders?
No. The AI Copilot augments graders and keeps educators in control. You decide the flow and acceptance policy. Read how the grading copilot works →

How transparent is the AI grading?
AI runs inside a layered rules engine where every evaluation criterion is explicit and editable. The grading is auditable rule by rule. Why transparency is the architecture →

Do we need a specific LMS?
No. Assess for Learning exports as SCORM for Moodle, Cornerstone, Docebo, Workday, Canvas and Blackboard, and as HTML for everything else. How LMS integration works →

Which standards does the governance model align with?
The three-tier governance model maps to AERA, APA, NCME, ISO 17024, ISO 42001 and the EU AI Act. EU AI Act for credentialing → · ISO 42001 explained →

Can different tasks go to different graders?
Yes. Task-level grader routing sends each task to the right subject matter expert, the way real exam boards already operate. Read more →

Can one platform cover both low-stakes and high-stakes assessment?
Yes. Three-tier governance is selected per assessment, so the same platform runs everything from CPD practice to high-stakes summative under the right level of rigour. Three-tier governance →

Can candidates answer by speaking?
Yes. Candidates speak their answers, the platform produces an intelligent summary, and the candidate reviews and confirms it. Conversational assessments →

What feedback do learners receive?
A multi-modal bundle: detailed PDF, audio reflection podcast, avatar video, competency heat-map and next-step recommendations — all from one grading process. The feedback bundle →

What happens to our data?
We don't train foundation models on your data. Claude runs on Amazon Bedrock, so prompts and context stay inside your AWS perimeter. SSO and RBAC enforce access; you control retention and residency.

How do we get started?
Book a demo. We'll map a pilot to your rubric and show an end-to-end flow, including the Examiner's Report and Precision Report.

See Assess for Learning in your context

Book a 30-minute demo. We'll map a pilot to your rubric, walk through the rules engine, AI Copilot, multi-modal feedback bundle and the Examiner's Report with your data shape.

Read all insights