Most credentialing programmes still operate a one-tier misconduct model. A candidate either passed an assessment honestly or they did not, and if they did not, the consequences are the same regardless of what happened. That model never worked particularly well, and under AI conditions it has stopped working at all. A candidate who forgot to disclose a permitted spell checker and a candidate who paid an impersonator to take their exam are both flagged as “misconduct” and both face the same opaque process. The first appeals successfully and damages confidence in the system. The second never gets caught because the system is too crude to focus on them.
The fix is a proportionate, three-tier misconduct framework that distinguishes what kind of breach occurred, treats each kind appropriately, and gives both candidates and the credential owner a defensible structure to operate inside. This article describes that framework, walks through worked examples at each tier, and shows how it connects to the candidate AI policy, the appeals process, and the evidence standards your credentialing programme needs to maintain.
Why one tier is not enough
A single misconduct category creates three problems that show up immediately under AI conditions.
The first problem is proportionality. AI has expanded the range of things candidates can do that count as “not entirely their own work”, from checking spelling all the way to submitting a fully fabricated portfolio. Treating every point on that range as the same offence is unfair, indefensible, and inconsistent with how human decision-makers actually think about culpability. Appeals panels can see the difference. If your framework cannot, you lose at appeal.
The second problem is investigative focus. When everything is misconduct, every flag has to be investigated to the same depth, which means real cases get diluted in the noise of minor breaches. A focused framework lets the organisation invest its investigation effort where the harm is greatest. That is both more effective at protecting the credential and fairer to candidates whose minor errors do not deserve forensic scrutiny.
The third problem is candidate behaviour. Clear rules with proportionate consequences encourage compliance. Vague rules with severe consequences encourage candidates to hide what they have done, because the downside of disclosure looks the same as the downside of being caught. A tiered framework that rewards honesty and reserves the harshest outcomes for genuine fraud aligns the incentives correctly.
Both Ofqual and the Joint Council for Qualifications expect a proportionate, evidence-based approach to handling misconduct. The three-tier framework is the simplest structure that meets that expectation while remaining operationally workable for credentialing organisations of any size.
The three tiers at a glance
- Tier one — administrative breach. Procedural error, work still demonstrably the candidate’s own. Corrective response: warning, correction, candidate education.
- Tier two — substantive misrepresentation. AI or unauthorised assistance materially affects the evidence of competence. Component fail or attempt invalidation, formal record, appealable.
- Tier three — fraud or security breach. Impersonation, organised cheating, secure content theft. Disqualification, ban period, regulatory or law-enforcement reporting where appropriate.
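For programmes that track cases in software, the at-a-glance summary translates directly into configuration. A minimal Python sketch, where the structure and all field wording are illustrative rather than prescribed:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TierDefinition:
    name: str       # short label for the tier
    breach: str     # the kind of breach the tier covers
    response: str   # the proportionate response

# Illustrative encoding of the three tiers; wording follows the summary above.
FRAMEWORK = {
    1: TierDefinition(
        name="administrative breach",
        breach="procedural error, work still demonstrably the candidate's own",
        response="warning, correction, candidate education",
    ),
    2: TierDefinition(
        name="substantive misrepresentation",
        breach="AI or unauthorised assistance materially affects the evidence of competence",
        response="component fail or attempt invalidation, formal record, appealable",
    ),
    3: TierDefinition(
        name="fraud or security breach",
        breach="impersonation, organised cheating, secure content theft",
        response="disqualification, ban period, reporting where appropriate",
    ),
}
```

Encoding the tiers as data rather than scattering them through case-handling logic keeps the framework in one place, which makes it easier to keep candidate-facing materials and internal tooling consistent.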
Tier one: administrative breach
Tier one covers cases where the candidate has fallen short of a procedural requirement, the work is still demonstrably their own, and no credential decision has been materially affected. These are the cases that, under a one-tier model, generate appeals and waste investigative effort.
Typical tier one cases include:
- the candidate used a permitted AI tool but did not complete the required disclosure statement
- the candidate disclosed AI use but the disclosure was incomplete or inaccurate in minor ways
- the candidate followed an out-of-date version of the assessment instructions
- the candidate did not save a draft history file that the assessment policy required them to retain
In each case, the underlying work is the candidate’s own, the breach is procedural rather than substantive, and a reasonable observer would conclude that no advantage was gained over candidates who followed the rules. The proportionate response is corrective: a warning, a correction request, a requirement to complete additional candidate education, or a note on the file. The credential decision itself is unaffected.
Two principles make tier one work in practice. The first is that the candidate should be told clearly why the action was tier one and what the consequence is, so that the same breach is not repeated. The second is that tier one cases should be tracked, because a pattern of repeated tier one breaches by the same candidate or in the same component is a signal worth investigating, even when no individual breach merits escalation.
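The pattern-tracking principle can be sketched in a few lines. This is an illustrative helper, not a prescribed implementation; the threshold of three is an assumption each programme would set for itself:

```python
from collections import Counter

def tier_one_patterns(breaches, threshold=3):
    """breaches: iterable of (candidate_id, component_id) pairs, one per
    logged tier one case. Returns the candidates and the components whose
    repeat count has reached the review threshold.

    A returned item is a signal worth investigating, not a finding."""
    by_candidate = Counter(candidate for candidate, _ in breaches)
    by_component = Counter(component for _, component in breaches)
    return (
        {c for c, n in by_candidate.items() if n >= threshold},
        {c for c, n in by_component.items() if n >= threshold},
    )

# Hypothetical log: cand-17 has three tier one breaches, and the "report"
# component has accumulated three breaches across candidates.
log = [
    ("cand-17", "report"),
    ("cand-17", "report"),
    ("cand-17", "exam"),
    ("cand-02", "report"),
]
candidates, components = tier_one_patterns(log)
```

In this hypothetical log, both `cand-17` and the `report` component reach the threshold, so both would be queued for a closer look even though no single breach merited escalation.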
Tier two: substantive misrepresentation
Tier two covers cases where AI or other unauthorised assistance has materially affected the evidence of competence. The candidate has presented work as their own that is not, the work was used in a credential decision, and the integrity of that decision is in question.
Typical tier two cases include:
- the candidate used generative AI to produce substantive content for a take-home component without disclosure, where the construct required independent reasoning
- the candidate fabricated references, sources, or case data with the assistance of AI tools
- the candidate copied AI output as their final answer with minimal modification, in a component where the construct was unaided analysis
- the candidate submitted a portfolio entry that misrepresented their personal involvement in workplace evidence
The key word is “substantive”. The breach has to materially affect what the credential is supposed to certify. If the construct statement says the assessment measures unaided analytical reasoning, and the candidate had AI do the analytical reasoning, that is tier two. If the construct allowed AI assistance with disclosure and the candidate failed to disclose, that is also tier two if the failure to disclose itself materially affected how the work would be evaluated.
The proportionate response at tier two is component fail or attempt invalidation, with a formal record on the candidate’s file. The candidate may be permitted to retake the component or the attempt under conditions appropriate to the credential. The decision is appealable, and the appeal is heard against the documented evidence and the construct statement for the component.
Tier two is the category that requires the most discipline in practice, because it is where most of the genuinely contested cases sit. The framework only holds up if the construct statements are clear, the disclosure rules were communicated to the candidate before the assessment, and the evidence supporting the tier two decision is documented in a form an appeals panel can review.
Tier three: fraud or security breach
Tier three covers cases where the integrity of the credential itself has been attacked. Impersonation, organised cheating, secure content theft, and deliberate bypass of the assessment controls all fall here.
Typical tier three cases include:
- the candidate paid another person to take the exam on their behalf
- the candidate participated in an organised arrangement to share live exam content
- the candidate gained unauthorised access to secure assessment materials before the exam
- the candidate used technical means to bypass identity verification or proctoring controls
- the candidate falsified credentials, transcripts, or eligibility documentation to gain access to the assessment
These are not procedural breaches and they are not misjudged AI use. They are deliberate attempts to defraud the credentialing system. The proportionate response is disqualification, a defined ban period from re-taking the credential, and reporting where the credential is regulated or where law enforcement involvement is appropriate.
Tier three cases are the rarest of the three but carry the highest stakes. They are also the cases where the evidence standards have to be the highest, because the consequences are severe and the risk of legal challenge is real. The framework should be explicit that tier three decisions require corroborating evidence beyond a single AI flag or anomaly signal, and that the investigation includes multiple sources of evidence reviewed by experienced staff.
How to assign a case to a tier
The honest answer is that tier assignment is a judgement call, and the framework only works if the people making the calls have shared standards. Three questions help structure the judgement.
Three questions for tier assignment
- Does the work submitted represent the candidate’s own competence? If yes, even with procedural errors, the case is tier one. If no, the case is at least tier two.
- Was the breach deliberate? If the breach was a procedural error or a misunderstanding, it is more likely to sit in tier one. If the breach was a deliberate attempt to gain advantage, it sits in tier two or tier three depending on what was attempted.
- Was the credential itself attacked? If the breach affected one candidate’s submission, it is tier two. If the breach involved impersonation, organised cheating, or secure content theft, it is tier three.
These questions do not produce automatic answers in every case. They produce a structured conversation that the investigation team can have with the same starting framework every time. Consistency across cases is the goal, not certainty in advance.
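The three questions can be sketched as an ordered check. This is a discussion aid in code form, assuming the answers to the questions have already been established by the investigation; it gives the team a consistent starting point, not an automatic verdict:

```python
from enum import Enum

class Tier(Enum):
    ADMINISTRATIVE_BREACH = 1
    SUBSTANTIVE_MISREPRESENTATION = 2
    FRAUD_OR_SECURITY_BREACH = 3

def provisional_tier(own_competence: bool,
                     deliberate: bool,
                     credential_attacked: bool) -> Tier:
    """Apply the three assignment questions in order.

    The result is the starting point for the investigation team's
    structured conversation, never the final decision."""
    # Question three: impersonation, organised cheating, or secure
    # content theft attacks the credential itself -- tier three.
    if credential_attacked:
        return Tier.FRAUD_OR_SECURITY_BREACH
    # Question one and two: work that still represents the candidate's
    # own competence, breached only by a non-deliberate procedural
    # error, is tier one.
    if own_competence and not deliberate:
        return Tier.ADMINISTRATIVE_BREACH
    # Everything else is at least tier two: the evidence of competence
    # is in question, or the breach was a deliberate attempt at advantage.
    return Tier.SUBSTANTIVE_MISREPRESENTATION
```

The worked examples later in the article map cleanly onto this sketch: an undisclosed permitted tool is `(True, False, False)`, an AI-generated analysis is `(False, True, False)`, and impersonation is `(False, True, True)`.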
What the framework needs to be operational
A tiered framework only works if the supporting infrastructure is in place. Five things have to be true.
The first is that the candidate AI policy clearly states what is allowed, what is prohibited, and what disclosure is required, broken down by assessment type. Without this, candidates cannot be held to standards they were not told about. Our companion article on the Ofqual and JCQ guidance covers the policy structure.
The second is that the construct statements for each assessment component are documented and current. Without these, the question “did the breach materially affect the evidence of competence” has no anchor. Our companion article on the construct decision covers the format.
The third is that the investigation process itself is documented, with defined evidence standards for each tier, defined decision-makers, and defined timelines. Tier one decisions can be made quickly by assessment operations staff. Tier two decisions need a senior reviewer and a documented case file. Tier three decisions need a panel, legal review, and the highest evidence threshold.
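The differentiated process can be written down as configuration. The decision-makers below follow the paragraph above; the evidence labels and the timelines are hypothetical placeholders a programme would replace with its own published standards:

```python
# Illustrative per-tier process configuration. The target timelines are
# assumptions for the sketch, not published standards.
INVESTIGATION_PROCESS = {
    1: {
        "decision_maker": "assessment operations staff",
        "evidence_standard": "documented breach",
        "target_working_days": 5,   # assumption
    },
    2: {
        "decision_maker": "senior reviewer",
        "evidence_standard": "documented case file",
        "target_working_days": 20,  # assumption
    },
    3: {
        "decision_maker": "panel with legal review",
        "evidence_standard": "corroborated evidence from multiple sources",
        "target_working_days": 40,  # assumption
    },
}
```

The point of writing it down, in whatever format, is that the depth of process scales with the tier: quick and light at tier one, slower and heavier at tier three.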
The fourth is that the appeals process is explicit and visible to candidates. Every adverse decision should come with a clear explanation of which tier was assigned, what the evidence base was, and how to appeal. The appeals process should be timely, structured, and independent of the original investigation.
The fifth is that detection signals are treated as triggers for investigation, not as decisions. AI writing detector outputs, proctoring flags, and similarity scores all belong at the start of the investigation, not the end. A misconduct decision that rests solely on a detection tool’s output is the kind of decision that fails at appeal and damages institutional credibility.
“A misconduct decision that rests solely on a detection tool’s output is the kind of decision that fails at appeal.”
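The fifth point has a direct structural consequence for any case-management tooling: a detection signal can open a case, but it can never close one. A minimal sketch, with all field names assumed:

```python
from dataclasses import dataclass, field

@dataclass
class MisconductCase:
    candidate_id: str
    # Detection outputs: AI writing detector scores, proctoring flags,
    # similarity scores. These start investigations.
    signals: list = field(default_factory=list)
    # Evidence gathered by human investigation: interviews, draft
    # history, access logs, construct-statement comparison.
    corroborating_evidence: list = field(default_factory=list)

    def add_signal(self, signal: str) -> None:
        """Signals open or extend an investigation; they never decide it."""
        self.signals.append(signal)

    def ready_for_decision(self) -> bool:
        # A decision is only supportable once human-gathered evidence
        # exists alongside the triggering signal.
        return bool(self.signals) and bool(self.corroborating_evidence)
```

A case opened by a detector flag alone reports itself as not ready for decision, which is exactly the discipline the fifth requirement demands.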
Worked examples
Three brief examples make the framework concrete.
A candidate submits a take-home professional report. The assessment policy permits AI use for grammar and clarity with disclosure required. The candidate used Grammarly extensively to improve the writing, then forgot to fill in the disclosure section. The work itself reflects their own analysis and recommendations. This is tier one. The proportionate response is a corrective requirement to complete the disclosure and confirmation that the candidate understands the policy. The credential decision stands.
A candidate submits a take-home case analysis. The construct statement specifies that the assessment measures the candidate’s own analytical reasoning, and the policy prohibits substantive AI generation. The investigation finds that the analysis section was generated by an AI tool from a single prompt and submitted with minimal editing. The candidate did not disclose the AI use. This is tier two. The proportionate response is component fail and a formal record. The candidate may retake under specified conditions. The decision is documented and appealable.
A candidate sits a secure proctored licensure exam at a test centre. The proctor identifies that the person sitting the exam does not match the photo identification provided at registration. Investigation confirms that the registered candidate paid a third party to take the exam in their place. This is tier three. The proportionate response is disqualification, a ban period, and reporting to the relevant regulatory body. Both the registered candidate and the substitute may face additional consequences depending on jurisdiction.
In each example, the tier reflects what actually happened. The same overall framework supports very different outcomes for very different circumstances, which is what proportionality is supposed to deliver.
Why this is fairer and more defensible than the alternatives
“A tiered framework is not softer on misconduct. It is more focused on the misconduct that matters.”
A tiered framework is not softer on misconduct. It is more focused on the misconduct that matters. Tier one cases that previously consumed investigative time are handled efficiently, freeing capacity for the tier two and tier three cases that genuinely need it. Tier two cases get the structured investigation and evidence standard they need to survive appeal. Tier three cases get the highest scrutiny and the firmest consequences because they deserve them.
For candidates, the framework is fairer because it distinguishes between procedural slips and deliberate fraud, and reserves the harshest outcomes for behaviour that actually warrants them. For credential owners, it is more defensible because the structure shows that decisions are made on principled grounds rather than reflex.
For the credential itself, the long-term effect is the most important. Public confidence in the credential depends on the perception that misconduct is handled seriously when it matters and proportionately when it does not. A one-tier model fails both halves of that test. A three-tier model passes both, if it is operated consistently.
“Public confidence in the credential depends on the perception that misconduct is handled seriously when it matters and proportionately when it does not.”
Implementation
Building a tiered framework into an existing credentialing programme is mostly policy work. The framework itself can be drafted and approved in a few weeks. The investigation procedures, the evidence standards, the appeals route, and the candidate-facing communication take longer, but none of it is technically difficult. The harder work is the cultural shift from a single-category mindset to a structured one, and the discipline of applying the framework consistently across cases.
Three steps to a working three-tier model
- Draft the framework with the tier definitions, the proportionate responses, and the assignment questions. Get sign-off from assessment operations, legal, and the credential owner.
- Update the candidate AI policy and the candidate-facing materials so that candidates know what each tier means and what the consequences are.
- Train the investigation team on the new framework, including walkthroughs of worked examples like the ones in this article, so that consistency is built in from the start.
After that, the framework needs to be exercised on real cases. The first few will surface edge cases the framework does not perfectly handle. Those become refinement opportunities, not failures. Within a year, a programme operating the framework consistently will have a more defensible misconduct posture than it had under any one-tier alternative, and it will be doing less unnecessary investigative work in the process.
That is the test. Fairer to candidates, more focused for the organisation, more defensible at appeal. The structure delivers all three, and the work to put it in place is small relative to the protection it provides.
Ready to replace a one-tier misconduct model with a proportionate framework that survives appeals?
Talk to our team about how Globebyte can help you draft the tiers, the procedures, and the candidate-facing materials.