
NIST AI RMF for Financial Services: Crosswalk to SR 26-02, OCC 2026-13, and FS AI RMF

April 23, 2026 · Rebecca Leung

TL;DR:

  • On April 17, 2026, SR 26-02 and OCC Bulletin 2026-13 replaced the 15-year-old SR 11-7 model risk framework — with a major caveat: generative AI and agentic AI are explicitly excluded from scope.
  • That gap is filled by NIST AI RMF (the governance architecture) and the Treasury FS AI RMF (230 financial-services-specific control objectives released March 1, 2026).
  • Understanding how all three frameworks overlap — and where they diverge — is now the core literacy requirement for any financial services AI risk program.
  • This post provides the crosswalk: what each covers, where they align, and how to build a program that satisfies all three.

One week ago, the Federal Reserve, OCC, and FDIC jointly retired the framework that has governed bank model risk management for 15 years. SR 26-02 (Federal Reserve) and OCC Bulletin 2026-13 replaced SR 11-7 and OCC 2011-12 on April 17, 2026. For a field that’s been running on the same framework since 2011, the update was overdue.

But there’s a catch that matters for anyone building an AI risk program right now: SR 26-02 explicitly excludes generative AI and agentic AI from its scope, calling them “novel and rapidly evolving.” The agencies intend to issue separate guidance “in the near future.” In the meantime, you’re operating in a three-framework environment:

  1. SR 26-02 / OCC 2026-13 — updated model risk guidance for traditional ML and quantitative models at banks >$30B in total assets
  2. NIST AI RMF 1.0 — sector-agnostic governance architecture for all AI, including GenAI
  3. Treasury FS AI RMF — 230 financial-services-specific control objectives that translate NIST AI RMF into operational requirements

If your program only touches one of these, it has gaps. Here’s how they fit together.


Framework #1: SR 26-02 and OCC Bulletin 2026-13

What it covers

SR 26-02 applies to traditional quantitative models — the kind that have lived in financial institutions for decades: credit scoring, stress testing, fraud detection algorithms, DFAST models, VaR calculations. The revised definition is deliberate: “complex quantitative methods using statistical, economic, or financial theories.” Simple rule-based systems, deterministic calculations, and — critically — LLMs and agentic systems are excluded.

The framework primarily targets banking organizations with over $30 billion in total assets, though examiners may apply the principles to smaller institutions with concentrated model risk exposure.

What changed from SR 11-7

SR 11-7 applied uniform validation rigor across all models regardless of their actual risk to the institution. SR 26-02 introduces a materiality construct that changes the calculation:

| Dimension | SR 11-7 | SR 26-02 |
| --- | --- | --- |
| Governance approach | Uniform rigor across all models | Risk-based, tiered by materiality |
| Materiality assessment | Implicit, examiner judgment | Explicit: model exposure × model purpose |
| Validation frequency | Periodic, often annual | Calibrated to model complexity and change |
| Monitoring | Primarily post-deployment reviews | Continuous monitoring emphasized |
| Non-compliance | Supervisory criticism exposure | No criticism for guidance alone; underlying risks can still draw safety-and-soundness scrutiny |
| Vendor models | Covered | Greater emphasis on ongoing monitoring |
| Generative AI | Not addressed | Explicitly excluded |

The materiality construct combines two factors: model exposure (how significant is this model’s output to business decisions?) and model purpose (does it support regulatory requirements or risk management?). High-materiality models still require rigorous, independent validation. Low-materiality models can leverage automated monitoring and lighter governance without the same level of examiner scrutiny.

This is a meaningful operational shift. A low-materiality internal forecasting tool no longer needs the same validation infrastructure as a fair lending credit model — which is how it should work.
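To make the construct concrete, here is a minimal sketch of what a materiality tiering function might look like. The scoring scale, tier labels, and thresholds are illustrative assumptions: SR 26-02 describes the two factors but does not prescribe a formula.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    exposure: int             # 1 (low) to 3 (high): how much business decisioning rides on the output
    regulatory_purpose: bool  # supports regulatory requirements or risk management

def materiality_tier(model: ModelProfile) -> str:
    """Assign a hypothetical materiality tier from model exposure and model purpose."""
    if model.regulatory_purpose or model.exposure == 3:
        return "high"    # rigorous, independent validation
    if model.exposure == 2:
        return "medium"  # targeted validation plus automated monitoring
    return "low"         # automated monitoring, lighter governance

# A fair lending credit model tiers high; an internal forecasting tool tiers low.
print(materiality_tier(ModelProfile("fair_lending_credit", exposure=3, regulatory_purpose=True)))
print(materiality_tier(ModelProfile("internal_forecast", exposure=1, regulatory_purpose=False)))
```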

What SR 26-02 doesn’t solve

The GenAI exclusion is a deliberate regulatory acknowledgment that the old framework isn’t equipped for these systems. Agencies stated they will issue separate guidance — but haven’t committed to a timeline. In the interim, if your organization is deploying LLMs, chatbots, or AI agents alongside traditional quantitative models, SR 26-02 tells you nothing about how to govern them.


Framework #2: NIST AI RMF 1.0

The sector-agnostic backbone

Published by NIST in January 2023, NIST AI 100-1 provides a voluntary framework organized around four core functions: GOVERN, MAP, MEASURE, and MANAGE. Unlike SR 26-02, it has no asset threshold, no model type exclusion, and no regulatory teeth — but it’s become the de facto architecture that financial services regulators point to when SR 26-02 falls short.

NIST AI RMF’s four functions and how they map to financial services

GOVERN sets the foundation: AI risk culture, accountability structures, policies, model inventory, and board-level oversight. In financial services terms, this is your AI governance committee, AI use case registry, and the risk appetite statement that defines which AI applications require pre-deployment review.

MAP establishes context before you build or deploy: who is impacted, what are the foreseeable harms, what’s the deployment environment. In financial services, MAP is the risk framing you do before a new AI model goes to your credit committee or before a vendor AI tool gets onboarded through TPRM.

MEASURE is where testing and evaluation live: TEVV (Test, Evaluation, Verification, and Validation), bias measurement, performance metrics, ongoing monitoring cadences. This is the NIST function most aligned with SR 26-02’s validation requirements — though NIST extends it to GenAI use cases that SR 26-02 doesn’t touch.

MANAGE covers prioritization, treatment, and ongoing risk response: incident escalation, risk acceptance documentation, decommissioning triggers, and monitoring thresholds that trigger review. This maps directly to SR 26-02’s emphasis on continuous monitoring.
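One way to make the four functions operational is to structure each AI use case registry entry around them. The sketch below is an assumption about how such a record could be organized; the field names are illustrative, not terms prescribed by NIST. The point is that a single inventory record can evidence all four functions.

```python
# Hypothetical AI use case registry entry, grouped by the NIST AI RMF function each field supports.
# Field names and values are illustrative only.
use_case_entry = {
    "govern": {
        "use_case_id": "UC-2026-014",
        "business_owner": "Consumer Lending",
        "risk_tier": "high",
        "pre_deployment_review": True,
    },
    "map": {
        "intended_purpose": "credit line increase recommendations",
        "impacted_parties": ["applicants", "underwriters"],
        "foreseeable_harms": ["disparate impact", "incorrect adverse action"],
        "deployment_context": "nightly batch scoring",
    },
    "measure": {
        "tevv_plan": "pre-deployment validation plus quarterly bias testing",
        "metrics": ["AUC", "calibration", "adverse impact ratio"],
        "monitoring_cadence": "monthly",
    },
    "manage": {
        "escalation_path": "Model Risk Committee",
        "decommission_triggers": ["sustained performance drift", "unresolved fairness finding"],
        "incident_procedure": "AI incident response runbook",
    },
}

print(sorted(use_case_entry.keys()))  # ['govern', 'manage', 'map', 'measure']
```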

How NIST AI RMF fills the GenAI gap

Because NIST AI RMF has no model type exclusions, it’s currently the only federally endorsed governance framework that explicitly addresses LLMs, foundation models, and agentic AI systems. NIST’s companion document, AI 600-1 (the Generative AI Profile), extends the four functions specifically to GenAI risks — including confabulation, harmful bias, information security, and data privacy risks that don’t arise in traditional quantitative models.

For financial services firms deploying GenAI, NIST AI RMF + AI 600-1 is your governance foundation until regulators issue sector-specific guidance.


Framework #3: Treasury Financial Services AI RMF

What the FS AI RMF adds

Released March 1, 2026 by the U.S. Treasury in partnership with the Cyber Risk Institute and developed with over 100 financial institutions, the FS AI RMF is the operational layer that NIST AI RMF doesn’t provide. NIST tells you what functions a governance program should have; FS AI RMF tells you the 230 specific control objectives that financial services organizations should implement.

The framework ships in four components:

| Component | What It Does |
| --- | --- |
| AI Adoption Stage Questionnaire | Self-assessment to calibrate which controls apply based on AI maturity |
| Risk and Control Matrix (RCM) | 230 control objectives mapped to risk statements across 5 domains |
| Guidebook | Implementation guidance for how controls should be operationalized |
| Control Objective Reference Guide | Detailed reference for each control objective with examples |

The five FS AI RMF domains

The 230 control objectives are organized across five domains that map directly to financial services regulatory expectations:

  1. AI Governance — Committee structures, AI policies, model inventories, escalation paths
  2. Data Integrity & Bias Monitoring — Training data quality, fairness testing, disparate impact controls (ECOA/FHA alignment)
  3. Model Lifecycle Management — Development standards, validation, change management, decommissioning
  4. Third-Party AI Risk — Vendor due diligence, contractual controls, fourth-party model dependencies
  5. Consumer Protection — UDAAP controls, explainability for adverse action, complaint handling for AI-driven decisions

The AI Adoption Stage Questionnaire is particularly useful: it lets institutions scope their control obligations based on how deeply AI is embedded in their operations. An institution using AI only for back-office process automation has a fundamentally different risk profile than one using AI for credit decisioning or customer-facing interactions. The questionnaire calibrates the control burden accordingly.
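A rough sketch of how that calibration could be expressed in a scoping tool is below. The adoption stage names and domain mappings are assumptions made for illustration; the questionnaire itself defines the actual stages and which of the 230 control objectives apply.

```python
# Illustrative scoping logic only; stage names and mappings are hypothetical,
# not taken from the FS AI RMF Adoption Stage Questionnaire.
APPLICABLE_DOMAINS = {
    "back_office_automation": [
        "AI Governance",
        "Model Lifecycle Management",
        "Third-Party AI Risk",
    ],
    "credit_decisioning": [
        "AI Governance",
        "Data Integrity & Bias Monitoring",
        "Model Lifecycle Management",
        "Third-Party AI Risk",
    ],
    "customer_facing": [
        "AI Governance",
        "Data Integrity & Bias Monitoring",
        "Model Lifecycle Management",
        "Third-Party AI Risk",
        "Consumer Protection",
    ],
}

def in_scope_domains(adoption_stage: str) -> list[str]:
    """Return the control domains a program would scope in at a given (hypothetical) adoption stage."""
    return APPLICABLE_DOMAINS[adoption_stage]

print(in_scope_domains("back_office_automation"))
```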


The Three-Framework Crosswalk

Here’s how the three frameworks map to each other across the governance domains that matter most:

| Governance Domain | SR 26-02 / OCC 2026-13 | NIST AI RMF 1.0 | Treasury FS AI RMF |
| --- | --- | --- | --- |
| Governance structure | Board oversight, clear roles, independent challenge | GOVERN function: policies, culture, accountability | AI Governance domain: 50+ control objectives |
| Model inventory | Model registry expected | GOVERN: AI system inventory | RCM: AI use case cataloging |
| Risk framing | Use case documentation | MAP: context and impact analysis | Model Lifecycle: pre-deployment framing |
| Validation / TEVV | Independent validation, tiered by materiality | MEASURE: TEVV, bias, metrics | Data Integrity & Model Lifecycle domains |
| Ongoing monitoring | Emphasized; continuous for frequently updated models | MANAGE: monitoring thresholds, incident response | Monitoring controls across all 5 domains |
| GenAI / LLMs | Explicitly excluded | Covered by AI 600-1 GenAI Profile | Covered; includes LLM-specific controls |
| Vendor/third-party AI | Addressed (vendor models) | MAP + MANAGE: third-party dependencies | Third-Party AI Risk domain: dedicated controls |
| Consumer protection | Implicit (fair lending, UDAAP) | GOVERN: impact assessment | Consumer Protection domain: explicit controls |
| Scope | Banks >$30B, traditional models | All AI, all institution sizes | All AI, financial services context |
| Enforceability | Principles-based (no supervisory criticism for guidance alone) | Voluntary | Voluntary; examiner reference |

Building a Program That Covers All Three

Don’t build three separate programs

The temptation when you see three frameworks is to build three compliance programs. Resist it. The frameworks are complementary, not competing. A single AI risk management program — structured around NIST AI RMF’s four functions — can satisfy SR 26-02’s requirements for traditional models while implementing FS AI RMF’s operational controls.

Think of it this way:

  • NIST AI RMF is your program architecture — the skeleton
  • FS AI RMF is your operational control library — the muscles
  • SR 26-02 is your examiner expectation for traditional ML — the regulatory floor

Coverage priority by model type

| Model Type | Primary Framework | Secondary Framework |
| --- | --- | --- |
| Credit scoring (ML) | SR 26-02 (validation, governance) | FS AI RMF (bias monitoring, consumer protection) |
| Fraud detection (ML) | SR 26-02 | NIST AI RMF MEASURE (ongoing monitoring) |
| Stress testing / regulatory models | SR 26-02 (highest materiality) | FS AI RMF (documentation) |
| LLM / chatbot (customer-facing) | NIST AI RMF + AI 600-1 | FS AI RMF (consumer protection, UDAAP) |
| Agentic AI (autonomous workflows) | NIST AI RMF + AI 600-1 | FS AI RMF (third-party risk, governance) |
| Vendor-supplied AI tools | SR 26-02 (if >$30B) + FS AI RMF | NIST AI RMF (third-party dependencies) |

The 90-day priority checklist

If you’re starting fresh or rationalizing an existing program against all three frameworks, prioritize in this order:

Days 1–30: Governance baseline

  • Complete the FS AI RMF AI Adoption Stage Questionnaire to determine your control scope
  • Establish an AI use case inventory covering all production models (required by NIST GOVERN and FS AI RMF)
  • Confirm your materiality tiering for existing quantitative models under SR 26-02

Days 31–60: Validation and testing gaps

  • Map existing model validation processes against SR 26-02’s materiality framework — identify models that can shift to lighter governance
  • Add TEVV documentation templates for any GenAI systems not covered by SR 26-02 (NIST MEASURE / FS AI RMF)
  • Implement bias monitoring controls for any AI used in credit, pricing, or customer service decisions (FS AI RMF Data Integrity domain)
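The bias-monitoring item above implies quantitative fairness checks. One widely used metric in fair lending contexts is the adverse impact ratio, compared against the four-fifths threshold. The sketch below is illustrative only and is not control language drawn from the FS AI RMF.

```python
def adverse_impact_ratio(protected_approvals: int, protected_applicants: int,
                         reference_approvals: int, reference_applicants: int) -> float:
    """Selection rate of the protected group divided by the selection rate of the reference group."""
    protected_rate = protected_approvals / protected_applicants
    reference_rate = reference_approvals / reference_applicants
    return protected_rate / reference_rate

# Example: 45% approval rate for the protected group vs. 55% for the reference group.
ratio = adverse_impact_ratio(protected_approvals=180, protected_applicants=400,
                             reference_approvals=330, reference_applicants=600)
print(f"adverse impact ratio: {ratio:.2f}")
print("flag for fair lending review" if ratio < 0.80 else "within the four-fifths threshold")
```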

Days 61–90: Third-party AI and consumer exposure

  • Run third-party AI vendor questionnaires for any vendor-supplied AI tools (FS AI RMF Third-Party AI Risk domain)
  • Implement adverse action documentation for AI-driven consumer decisions (FS AI RMF Consumer Protection)
  • Establish GenAI incident response procedures referencing NIST AI RMF MANAGE function

So What?

SR 26-02 represents regulators acknowledging that the 2011 model risk framework needed updating. But the GenAI exclusion is a significant admission: the agencies don’t yet know how to govern these systems in a banking context, and they’re not ready to commit to specific requirements.

That creates both a risk and an opportunity. The risk: if you wait for binding GenAI-specific guidance before building governance, you’ll be retrofitting controls after incidents or exam findings surface. The opportunity: institutions that build principled GenAI governance now — anchored to NIST AI RMF and operationalized through the FS AI RMF’s 230 control objectives — will be ahead of the regulatory curve when that guidance finally arrives.

For programs built around the NIST AI RMF’s four functions, this moment is actually validation: the architecture holds whether your models are traditional quantitative tools, LLMs, or autonomous agents. The crosswalk to SR 26-02 and FS AI RMF fills in the operational details.

If you haven’t read the FS AI RMF deep dive on all 230 control objectives or the NIST AI RMF MANAGE function post, those are the logical next reads. The crosswalk above shows you how the pieces fit. The individual framework deep dives show you how to implement each one.


Need a structured starting point for AI model governance? The AI Risk Assessment Template & Guide includes an AI use case inventory, pre-deployment checklist, and third-party AI vendor questionnaire designed for financial services teams navigating SR 26-02, NIST AI RMF, and FS AI RMF alignment.

Frequently Asked Questions

How does NIST AI RMF relate to SR 26-02?
SR 26-02 (which replaced SR 11-7 on April 17, 2026) covers traditional model risk management for banks over $30B and explicitly excludes generative and agentic AI. NIST AI RMF provides the governance architecture for ALL AI systems — making it the framework you turn to where SR 26-02 has gaps.
Does SR 26-02 replace NIST AI RMF for financial services?
No. SR 26-02 governs model development, validation, and monitoring for traditional quantitative models at large banks. NIST AI RMF covers AI risk governance broadly — including generative AI, agentic AI, and any institution size. They complement each other; neither replaces the other.
What is the Treasury FS AI RMF?
The Financial Services AI Risk Management Framework (FS AI RMF), released March 1, 2026 by the U.S. Treasury and Cyber Risk Institute, translates NIST AI RMF's four functions into 230 operational control objectives specific to financial services.
What models does SR 26-02 actually cover?
SR 26-02 covers complex quantitative methods using statistical, economic, or financial theories. It explicitly excludes generative AI, agentic AI, simple rule-based systems, and deterministic calculations. Traditional ML credit models, fraud detection algorithms, and stress testing models fall within scope.
Where does the FS AI RMF fit relative to NIST AI RMF and SR 26-02?
The FS AI RMF is a financial-services-specific implementation layer on top of NIST AI RMF. It adds 230 control objectives that operationalize NIST's principles for banking — including fair lending controls, SR 26-02 alignment, and third-party AI vendor requirements not covered in either NIST or SR 26-02 alone.
Is NIST AI RMF mandatory for financial services firms?
NIST AI RMF is voluntary, but the Treasury FS AI RMF — which directly maps NIST's four functions to banking — is already being used by examiners as a reference for AI governance expectations. Institutions that can't demonstrate alignment will increasingly face scrutiny.
Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.
