NIST AI RMF for Actuaries

NIST AI RMF for Actuarial Boards: Why a Voluntary Framework Became Your Next Governance Priority

Article 1 of 4 in the series NIST AI RMF for Actuaries

For most of its life, the NIST AI Risk Management Framework has carried a small, important word on its cover: voluntary. Released in January 2023 by the US National Institute of Standards and Technology, it was written to be sector-agnostic, rights-preserving, and useful to any organisation that wanted to adopt it, from a two-person analytics team to a multinational reinsurer. Three years on, something quieter and more consequential has happened. For the actuarial profession and the insurance industry it serves, NIST AI RMF is no longer simply a voluntary good-practice guide. In our view, it has become the de facto operating spine of AI governance: boards are now accountable for the governance of AI risk, and NIST AI RMF is the framework their organisation has most likely adopted to meet that accountability.

This brief explains how that shift happened, why it matters at board level, and what a chief actuary, chief risk officer or board member can do about it this quarter. It is the first in a four-part series written for actuarial leaders and their technical teams. The later articles move into practitioner-level detail. This one is for the room where the strategic decisions are made.

A note on registers: law, supervisory expectation, professional standard, recommendation

Before going further, a quick note on how this series uses different categories of authority. Conflating them is the most common avoidable mistake in AI governance discussions, and it tends to produce either complacency (treating supervisory expectations as merely advisory) or unnecessary alarm (treating supervisory bulletins as if they were statute). Throughout the series, we use four registers:

  • Law is binding regulation or statute. Examples: the EU AI Act and Colorado Regulation 10-1-1 are both binding law in their respective jurisdictions.
  • Supervisory expectation is guidance issued by a regulator that sets out what the regulator expects to see, often framed in “should” language, and that regulators may use as the reference point in examinations and inquiries. The NAIC Model Bulletin is the canonical example for US insurance.
  • Professional standard is binding within a profession on its members. ASOP 56, TAS 100 and APS X2 v1.1 are professional standards binding on actuaries practising under the relevant body.
  • Author recommendation is our own practitioner judgement, based on what we have seen work in client engagements. Where this article uses phrases like “in our view” or “in our experience”, that is what is being signalled.

These categories are not equivalent in legal force, but each carries weight in its own way. A board that treats only the first as worth attention will be surprised in an examination. A board that treats all four as if they were the first will overspend.

How a voluntary framework became operationally compulsory

The NIST AI RMF does not itself regulate anything. Its authority is indirect, which is exactly what makes it powerful.

Follow the thread. In December 2023, the National Association of Insurance Commissioners adopted its Model Bulletin on the Use of Artificial Intelligence Systems by Insurers. The Bulletin sets out supervisory expectations that insurers in adopting jurisdictions implement and maintain a written AI Systems Program (AIS Program) with documented governance, risk management, validation, transparency, third-party oversight and board-level accountability. It is framed largely in “should” language and is explicit that an AIS Program may adopt, incorporate or rely on a framework such as NIST AI RMF, and that insurers may demonstrate compliance through alternative means; it does not prescribe specific practices or documentation. As of 1 April 2026, 24 US states plus the District of Columbia have adopted the Model Bulletin in full or with minor modifications. The Bulletin does not mandate NIST AI RMF, but its structure, terminology and expected artefacts align so closely with the RMF’s four functions that, in our experience, many insurers and their advisers now use the RMF as their implementation reference. (Register: supervisory expectation in adopting jurisdictions.)

In Colorado, the connection is even tighter. Colorado Regulation 10-1-1 was first adopted in 2023 and applied to life insurers using external consumer data and information sources (ECDIS). As amended effective 15 October 2025, the regulation now covers insurers offering individual life insurance, private passenger automobile insurance and health benefit plans that use ECDIS or algorithms and predictive models that use ECDIS. It requires a documented governance and risk management framework, a written risk assessment rubric, and quantitative testing for unfairly discriminatory outcomes, although the Division’s specific quantitative testing standards remain under active development and stakeholders continue to provide input on methodology. The language is permissive in places, prescriptive in others, and unmistakably aligned with the NIST structure. (Register: law in Colorado for in-scope lines.)

In Europe, the EU AI Act takes a different route to the same destination. Annex III 5(c) classifies AI systems used for risk assessment and pricing in relation to natural persons in the case of life and health insurance as high-risk. Under the current timetable, from 2 August 2026 those systems come within the Act’s full obligation set, split between provider obligations (Article 11 technical documentation, Article 43 conformity assessment, Article 72 post-market monitoring, Article 73 serious incident reporting) and deployer obligations (Article 26 on human oversight, input data control where applicable, use monitoring, log retention, and informing the provider and competent authorities when risks or serious incidents arise). Deployers of high-risk Annex III 5(c) systems are also required to carry out a Fundamental Rights Impact Assessment under Article 27, and affected individuals have explanation rights under Article 86. Where an insurer builds the system for its own use, it is both provider and deployer and carries both stacks. Where it uses a vendor system, the stack splits. The European Commission has publicly noted a proposal under consideration to adjust parts of the high-risk timeline, so teams should treat the 2 August 2026 date as the current timetable rather than a fixed certainty. Whether any specific system is in scope is a fact-specific determination under Article 6 and Annex III, and ultimately a legal question for the deployer. For in-scope life and health insurance systems, the Act creates a combined provider/deployer compliance stack; the RMF is a practical scaffold for assembling the evidence both sides of that stack require. (Register: law for in-scope EU systems.)

“Voluntary at the top, prescriptive at the bottom, and everywhere in the middle the NIST AI RMF is the common reference.”

The UK position is consistent. The Institute and Faculty of Actuaries published its thematic review Actuaries using data science and artificial intelligence techniques on 26 February 2024, and the profession’s existing framework, comprising the Actuaries’ Code, Technical Actuarial Standard 100 and Actuarial Profession Standard X2 (version 1.1, effective 30 January 2026), already requires much of the substance of what the RMF asks for, expressed in the profession’s own language. The American Academy of Actuaries reached the same conclusion in its 2024 professionalism discussion paper Actuarial Professionalism Considerations for Generative AI, which sets out how ASOP 56 on modelling, ASOP 23 on data quality and ASOP 41 on actuarial communications apply to AI and generative AI work. The discussion paper is an interpretive professionalism resource rather than a binding promulgation; the binding force still sits with the ASOPs themselves. (Register: professional standard for in-scope work.)

The thread is clear. Voluntary at the top, prescriptive at the bottom, and everywhere in the middle the NIST AI RMF is the common reference that lets actuaries, regulators, auditors, lawyers and engineers talk about the same things using the same vocabulary.

The convergence, at a glance

The table below is the single most useful artefact in this article. It shows how equivalent governance concerns surface under different names in the frameworks a multi-jurisdictional actuarial function now has to navigate. Keep it to hand at your next model risk committee. ASOP 56 section references have been verified against the published standard (Doc. No. 195, December 2019). APS X2 references are to version 1.1, effective 30 January 2026.

A word on what this table is, and is not. The rows below are functional bridges between frameworks, not one-to-one legal equivalences. Each cell shows the closest analogue in the relevant vocabulary to the function in the NIST column. In several places the analogue is weaker or broader than the NIST function it sits alongside. The subcategory-level bridging table in Part 2 makes the gaps explicit where they matter. EU AI Act cells are annotated where the obligation sits primarily with the provider versus the deployer.

| NIST RMF Function | NAIC Model Bulletin (US) | Colorado Reg 10-1-1 (as amended Oct 2025) | EU AI Act (life and health, Annex III 5(c)) | UK / US actuarial standards |
|---|---|---|---|---|
| Govern | Written AIS Program; AI inventory; board accountability; cross-functional governance | Governance and risk management framework; risk assessment rubric; annual review | Article 9 risk management system (provider); Article 17 quality management (provider); Article 26 deployer governance duties | TAS 100 Principle 5 (Models); Actuaries’ Code |
| Map | Documented intended use; third-party vendor oversight | ECDIS inventory; documented use cases; vendor assessment | Article 13 transparency to deployers (provider); Article 27 Fundamental Rights Impact Assessment (deployer; expressly reaches Annex III 5(c)) | ASOP 56 §3.1 (model meeting intended purpose); TAS 100 |
| Measure | Validation, testing, bias examination, documentation | Quantitative testing for unfair discrimination (standards in development); qualitative assessment | Article 10 data governance and bias examination (provider); Article 15 accuracy, robustness, cybersecurity (provider) | ASOP 56 §3.6 (model testing and validation); ASOP 23 (Data Quality) |
| Manage | Post-deployment monitoring; incident response; corrective action | Ongoing monitoring; remediation of discriminatory outcomes | Article 72 post-market monitoring (provider; with deployer Article 26 duties to monitor in use and inform); Article 73 serious incident reporting (provider; deployers have related duties under Article 26 to inform the provider and authorities) | ASOP 56 §3.6.4 (governance and controls, adjacent to decommissioning); §3.1.3 (using the model); APS X2 v1.1 (peer review) |

Read across any row and you have the closest functional analogue across five vocabularies. The vocabulary changes. The evidence overlaps heavily, though not perfectly. The actuarial standards column is usefully strong on Measure and Manage substance but contains no direct counterpart to NIST’s explicit treatment of decommissioning (GOVERN 1.7), user feedback and appeal processes (MEASURE 3.3), incident communication to affected actors (MANAGE 4.3), or the RMF’s emphasis on diverse and multidisciplinary perspectives and on impacted individuals and communities. Part 2 of this series surfaces these gaps at subcategory level.

The four functions, at board level

The RMF is built around four functions that operate as an iterative cycle rather than a linear process.

Govern is the foundation. It asks whether the organisation has the policies, roles, training, risk tolerance, inventory, third-party oversight and decommissioning processes to manage AI responsibly. At board level, Govern answers the question who owns this.

Map asks whether the organisation understands what each AI system actually does, who it affects, which laws and norms apply, what the intended purpose is, and where the boundaries of acceptable use sit. At board level, Map answers do we know what we have.

Measure is the evidence function. It asks whether the organisation can quantify and qualitatively assess validity, reliability, safety, security, transparency, explainability, privacy, fairness and environmental impact. At board level, Measure answers can we prove it works and is safe.

Manage is the response function. It asks whether the organisation prioritises risks correctly, responds to incidents, monitors deployed systems, supersedes or decommissions systems when they no longer earn their place, and treats residual risk with clear accountability. At board level, Manage answers what happens when something goes wrong, and how quickly.

For most mature actuarial teams, three of these four are largely in hand under existing professional standards. At board level, governance structure is typically the first visible gap; at practitioner level (which Part 2 of this series addresses in detail), fairness, explainability and socio-technical evidence create the heaviest new workload.

Where actuaries are strong, and where the gaps are

Actuaries have been doing model risk management, data quality assessment, sensitivity testing, back-testing, documentation and peer review for a generation. ASOP 56 and TAS 100 codify those practices in detail, and they map cleanly onto the RMF’s Measure and Manage functions. That is the encouraging news. An actuarial team operating to professional standard is already producing most of the evidence the RMF asks for, under a different name.

Three gaps tend to open up when organisations lay the RMF against their current practice.

The first is bias and fairness testing. Traditional actuarial validation looks at predictive accuracy, calibration and stability. It does not, by default, examine disparate impact, subgroup performance, proxy discrimination, or fairness metrics such as demographic parity or equalised odds. Colorado Regulation 10-1-1, as amended in October 2025, now requires this for insurers offering individual life, private passenger automobile and health benefit plans that use ECDIS, and the EU AI Act will require it from 2 August 2026 for AI systems used for risk assessment and pricing in life and health insurance under Annex III 5(c). This is a new artefact category for many actuarial teams.
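
To make the new artefact concrete, here is a minimal sketch of two of the metrics named above, demographic parity difference and equalised odds difference, computed with plain numpy on illustrative data. It is a starting shape for the evidence, not a prescribed methodology; as noted above, Colorado’s quantitative testing standards are still under development.

```python
# A minimal sketch of two group fairness metrics on binary decisions.
# All names and data here are illustrative, not a prescribed methodology.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Gap in favourable-outcome rates between subgroups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalised_odds_difference(y_true: np.ndarray, y_pred: np.ndarray,
                              group: np.ndarray) -> float:
    """Largest gap in true-positive or false-positive rate across subgroups."""
    gaps = []
    for outcome in (1, 0):  # outcome 1 -> TPR gap, outcome 0 -> FPR gap
        mask = y_true == outcome
        rates = [y_pred[(group == g) & mask].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Illustrative data: 1,000 binary underwriting decisions, two subgroups.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1_000)
y_true = rng.integers(0, 2, size=1_000)
y_pred = rng.integers(0, 2, size=1_000)
print(f"Demographic parity difference: {demographic_parity_difference(y_pred, group):.3f}")
print(f"Equalised odds difference:     {equalised_odds_difference(y_true, y_pred, group):.3f}")
```

In practice the work is in the surrounding judgement, which metric matters for which product line, which subgroups to test, and what threshold triggers remediation, rather than in the arithmetic itself.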

The second is explainability and interpretability. Generalised linear models, the traditional actuarial workhorse, are transparent by construction. Gradient boosting, neural networks and large language models are not. The RMF’s MEASURE 2.9 subcategory asks for evidence that a model is explained, validated and documented, and that outputs are interpreted within the scope of intended use. Translating that into artefacts that satisfy a regulator or a board means producing global and local explainability evidence using techniques such as SHAP, LIME, partial dependence plots and counterfactual analysis. Part 2 of this series will show exactly how to produce these artefacts from a working model.
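
For a flavour of those artefacts, the sketch below produces a global SHAP importance ranking, a local attribution for a single case, and a partial dependence profile for a gradient boosting model. It assumes the shap and scikit-learn packages, with synthetic data standing in for a real rating model; feature indices are placeholders for actual rating factors.

```python
# A minimal sketch of explainability artefacts for a non-linear model,
# assuming the shap and scikit-learn packages. Synthetic data stands in
# for a real rating model; feature indices are placeholders.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global evidence: mean absolute SHAP value per feature gives an
# importance ranking across the whole portfolio.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("Global importance:", np.abs(shap_values).mean(axis=0).round(3))

# Local evidence: per-feature attribution for a single case, e.g. a
# quote that a policyholder has challenged.
print("Case 0 attributions:", np.asarray(shap_values[0]).round(3))

# Partial dependence: the marginal effect of feature 0 on the prediction,
# the non-linear analogue of inspecting a GLM coefficient.
pdp = partial_dependence(model, X, features=[0])
print("PDP (first 5 grid points):", pdp["average"][0][:5].round(3))
```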

The third is socio-technical context and human oversight design. The RMF treats every AI system as a socio-technical system, which means the human who uses the output, the person affected by it, the operating environment and the plausible misuse paths are all in scope. This is where actuarial professionalism and AI governance meet most naturally, because actuaries have always framed their work around the client, the public interest and the downstream user. Making it an explicit, documented artefact is the new part.

The leadership opportunity for the chief actuary

The NAIC Model Bulletin explicitly lists actuarial among the representative disciplines for the cross-functional governance committees it expects insurers to stand up. NIST calls for diverse and multidisciplinary perspectives in its governance guidance, naming senior leadership and the board as governance actors rather than calling out actuarial specifically. The EU AI Act uses role-based categories (provider, deployer, importer, distributor, human oversight function, competent authority) rather than profession-specific ones. Colorado Regulation 10-1-1 requires a governance structure that is cross-functional in substance, though the text does not single out actuarial in the way the NAIC bulletin does. Across these frameworks, the governance committee is expected to include representatives from business, actuarial, data science, underwriting, claims, legal, compliance and IT.

“The chief actuary is a strong candidate to chair the AI governance committee. Actuaries are among the very few members already required by their profession to sign off on model suitability.”

That composition is a leadership opportunity. Actuaries are among the very few members of that group whose profession already requires them, under their own standards, to sign off on model suitability, document assumptions and limitations, and take personal professional responsibility for the work. In most organisations, the chief actuary is a strong candidate to chair or co-chair the AI governance committee, and to set the standard for the evidence the committee expects to see. Organisations that recognise this early will find that their AI governance programme runs more smoothly, produces better artefacts, and holds up better under regulatory scrutiny.

A thirty-minute board check

Here is a practical tool your board can use at its next risk or audit committee. It is designed to take about thirty minutes and to produce a clear read on where the organisation sits against the four RMF functions. Get the answers in writing.

| Function | Question |
|---|---|
| Govern | Do we have a written AI inventory, and when was it last reconciled? |
| Govern | Who chairs the AI governance committee, and how often does it meet? |
| Govern | What is our risk tolerance statement for AI systems, and is it approved at board level? |
| Govern | Do we have a decommissioning policy for AI systems, and has it ever been invoked? |
| Govern | Which named individual is accountable if a production AI system produces an adverse consumer outcome tomorrow? |
| Map | For each material AI system, have we documented the intended purpose, the operating context, the affected populations and the plausible misuse paths? |
| Map | Have we identified which systems fall under Annex III of the EU AI Act, the NAIC Model Bulletin definition, or Colorado Regulation 10-1-1 as amended October 2025? |
| Map | For any system in scope of the EU AI Act, do we know whether we are acting as provider, as deployer, or as both? |
| Map | Do we know which of our production systems rely on third-party models, and do we have contractual audit rights? |
| Measure | What validation artefacts do we produce beyond predictive accuracy and calibration? |
| Measure | Do we have documented fairness testing for any system that affects consumers, and what methodology do we use? |
| Measure | Do we have explainability evidence for any non-linear model, and in what form? |
| Measure | Can we produce, in the next seven days, a MEASURE evidence pack for our most consequential production model? |
| Manage | When was the last time we detected drift in a production AI system, and what did we do about it? (A minimal drift-check sketch follows this table.) |
| Manage | Who is accountable for AI incident response, and what is the target response time? |
| Manage | Is residual risk formally accepted by a named owner, and is that acceptance reviewed annually? |
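
On the drift question in the Manage rows, one concrete shape a confident answer can take is a routine population stability index (PSI) check on production scores. The sketch below is illustrative; the 0.1 and 0.25 thresholds are conventional rules of thumb from model monitoring practice, not a regulatory standard.

```python
# A minimal sketch of a population stability index (PSI) drift check on
# production model scores. Thresholds of 0.1 (investigate) and 0.25 (act)
# are conventional rules of thumb, not a regulatory standard.
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, n_bins: int = 10) -> float:
    """PSI between the validation-time score distribution and live scores."""
    edges = np.quantile(reference, np.linspace(0, 1, n_bins + 1))
    live = np.clip(live, edges[0], edges[-1])  # keep out-of-range scores in the end bins
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    live_frac = np.histogram(live, bins=edges)[0] / len(live)
    ref_frac = np.clip(ref_frac, 1e-6, None)   # avoid log(0) in sparse bins
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, size=10_000)  # scores at validation time
live = rng.normal(0.75, 1.0, size=10_000)      # shifted production scores
print(f"PSI = {psi(reference, live):.3f}")     # well above the 0.25 action threshold
```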

If the answers come back confident and documented, the organisation is ahead of the curve. If they are hesitant or verbal, the organisation has work to do, and the Part 2 article in this series is written to help the technical teams do it.

On 7 April 2026, two days before this article was published, NIST released a concept note for an AI RMF Profile on Trustworthy AI in Critical Infrastructure. The profile, once developed, will give critical infrastructure operators a concrete set of risk management practices to follow when deploying AI-enabled capabilities. For actuaries working with energy, water, transport, healthcare and telecommunications clients, the concept note is worth watching, because it signals how NIST intends to carry the core RMF into sector-specific territory. NIST is currently highlighting the GenAI Profile and Critical Infrastructure profile tracks alongside the core RMF and Playbook; we have not seen an official insurance or pensions profile announced at the time of writing. The direction of travel suggests a sector-specific extension for financial services or insurance is plausible in time, though not confirmed. Organisations that have already internalised the core RMF will absorb any such sector profile with minimal friction.

The practitioner series

This article is the strategic entry point to a four-part series.

The RMF has become the common language. The profession is well placed to speak it fluently.

Ready to operationalise NIST AI RMF and the Generative AI Profile in your actuarial function?

Talk to our team about how Globebyte can help you build the governance structures, the eval suites, the RAG systems and the MEASURE evidence packs. From strategic alignment to working code.

Explore our services
