NIST AI RMF for Financial Services: Crosswalk to SR 26-02, OCC 2026-13, and FS AI RMF
TL;DR:
- On April 17, 2026, SR 26-02 and OCC Bulletin 2026-13 replaced the 15-year-old SR 11-7 model risk framework — with a major caveat: generative AI and agentic AI are explicitly excluded from scope.
- That gap is filled by NIST AI RMF (the governance architecture) and the Treasury FS AI RMF (230 financial-services-specific control objectives released March 1, 2026).
- Understanding how all three frameworks overlap — and where they diverge — is now the core literacy requirement for any financial services AI risk program.
- This post provides the crosswalk: what each covers, where they align, and how to build a program that satisfies all three.
One week ago, the Federal Reserve, OCC, and FDIC jointly retired the model risk management framework that banks have operated under for 15 years. SR 26-02 (Federal Reserve) and OCC Bulletin 2026-13 replaced SR 11-7 and OCC 2011-12 on April 17, 2026. For a field that’s been running on the same framework since 2011, this was overdue.
But there’s a catch that matters for anyone building an AI risk program right now: SR 26-02 explicitly excludes generative AI and agentic AI from its scope, calling them “novel and rapidly evolving.” The agencies intend to issue separate guidance “in the near future.” In the meantime, you’re operating in a three-framework environment:
- SR 26-02 / OCC 2026-13 — updated model risk for traditional ML and quantitative models at banks >$30B
- NIST AI RMF 1.0 — sector-agnostic governance architecture for all AI, including GenAI
- Treasury FS AI RMF — 230 financial-services-specific control objectives that translate NIST AI RMF into operational requirements
If your program only touches one of these, it has gaps. Here’s how they fit together.
Framework #1: SR 26-02 and OCC Bulletin 2026-13
What it covers
SR 26-02 applies to traditional quantitative models — the kind that have lived in financial institutions for decades: credit scoring, stress testing, fraud detection algorithms, DFAST models, VaR calculations. The revised definition is deliberate: “complex quantitative methods using statistical, economic, or financial theories.” Simple rule-based systems, deterministic calculations, and — critically — LLMs and agentic systems are excluded.
The framework primarily targets banking organizations with over $30 billion in total assets, though examiners may apply the principles to smaller institutions with concentrated model risk exposure.
What changed from SR 11-7
SR 11-7 applied uniform validation rigor across all models regardless of their actual risk to the institution. SR 26-02 introduces a materiality construct that changes the calculation:
| Dimension | SR 11-7 | SR 26-02 |
|---|---|---|
| Governance approach | Uniform rigor across all models | Risk-based, tiered by materiality |
| Materiality assessment | Implicit, examiner judgment | Explicit: model exposure × model purpose |
| Validation frequency | Periodic, often annual | Calibrated to model complexity and change |
| Monitoring | Primarily post-deployment reviews | Continuous monitoring emphasized |
| Non-compliance | Supervisory criticism exposure | No criticism for guidance alone; underlying risks still trigger safety/soundness |
| Vendor models | Covered | Greater emphasis on ongoing monitoring |
| Generative AI | Not addressed | Explicitly excluded |
The materiality construct combines two factors: model exposure (how significant is this model’s output to business decisions?) and model purpose (does it support regulatory requirements or risk management?). High-materiality models still require rigorous, independent validation. Low-materiality models can leverage automated monitoring and lighter governance without the same level of examiner scrutiny.
This is a meaningful operational shift. A low-materiality internal forecasting tool no longer needs the same validation infrastructure as a fair lending credit model — which is how it should work.
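The two-factor materiality logic can be sketched as a simple scoring function. The scores, cutoffs, and tier labels below are illustrative assumptions — SR 26-02 leaves the actual calibration to each institution.

```python
from dataclasses import dataclass

# Illustrative materiality tiering under SR 26-02's two-factor construct.
# Scores and tier cutoffs are hypothetical; the guidance does not prescribe them.

@dataclass
class ModelProfile:
    name: str
    exposure: int             # 1 (low) to 3 (high): weight of output in business decisions
    regulatory_purpose: bool  # supports regulatory requirements or risk management

def materiality_tier(m: ModelProfile) -> str:
    score = m.exposure + (2 if m.regulatory_purpose else 0)
    if score >= 4:
        return "high"    # full independent validation
    if score >= 3:
        return "medium"  # targeted validation, periodic review
    return "low"         # automated monitoring, lighter governance

credit_model = ModelProfile("fair-lending credit score", exposure=3, regulatory_purpose=True)
forecast_tool = ModelProfile("internal volume forecast", exposure=1, regulatory_purpose=False)

print(materiality_tier(credit_model))   # high
print(materiality_tier(forecast_tool))  # low
```

The design point is that the tier, not the model type, drives the validation workload — which is exactly the shift from SR 11-7's uniform rigor.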
What SR 26-02 doesn’t solve
The GenAI exclusion is a deliberate regulatory acknowledgment that the old framework isn’t equipped for these systems. Agencies stated they will issue separate guidance — but haven’t committed to a timeline. In the interim, if your organization is deploying LLMs, chatbots, or AI agents alongside traditional quantitative models, SR 26-02 tells you nothing about how to govern them.
Framework #2: NIST AI RMF 1.0
The sector-agnostic backbone
Published by NIST in January 2023, NIST AI 100-1 provides a voluntary framework organized around four core functions: GOVERN, MAP, MEASURE, and MANAGE. Unlike SR 26-02, it has no asset threshold, no model type exclusion, and no regulatory teeth — but it’s become the de facto architecture that financial services regulators point to when SR 26-02 falls short.
NIST AI RMF’s four functions and how they map to financial services
GOVERN sets the foundation: AI risk culture, accountability structures, policies, model inventory, and board-level oversight. In financial services terms, this is your AI governance committee, AI use case registry, and the risk appetite statement that defines which AI applications require pre-deployment review.
MAP establishes context before you build or deploy: who is impacted, what are the foreseeable harms, what’s the deployment environment. In financial services, MAP is the risk framing you do before a new AI model goes to your credit committee or before a vendor AI tool gets onboarded through TPRM.
MEASURE is where testing and evaluation live: TEVV (Test, Evaluation, Validation, and Verification), bias measurement, performance metrics, ongoing monitoring cadences. This is the NIST function most aligned with SR 26-02’s validation requirements — though NIST extends it to GenAI use cases that SR 26-02 doesn’t touch.
MANAGE covers prioritization, treatment, and ongoing risk response: incident escalation, risk acceptance documentation, decommissioning triggers, and monitoring thresholds that trigger review. This maps directly to SR 26-02’s emphasis on continuous monitoring.
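One way to operationalize the four-function mapping above is a gap check: tag the governance artifacts your program already has to a NIST function and flag any function with no coverage. The artifact names and tags below are illustrative assumptions, not a NIST-prescribed mapping.

```python
# Sketch: tag existing governance artifacts to NIST AI RMF functions and
# flag uncovered functions. Artifact-to-function tags are illustrative.

FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

program_artifacts = {
    "AI governance committee charter": "GOVERN",
    "AI use case registry": "GOVERN",
    "pre-deployment impact assessment": "MAP",
    "TEVV test plan template": "MEASURE",
    # note: no incident escalation or monitoring-threshold artifacts yet
}

def uncovered_functions(artifacts: dict) -> list:
    covered = set(artifacts.values())
    return [f for f in FUNCTIONS if f not in covered]

print(uncovered_functions(program_artifacts))  # ['MANAGE']
```

In this hypothetical program, the missing MANAGE coverage would point straight at incident escalation and decommissioning triggers as the next build items.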
How NIST AI RMF fills the GenAI gap
Because NIST AI RMF has no model type exclusions, it’s currently the only federally endorsed governance framework that explicitly addresses LLMs, foundation models, and agentic AI systems. NIST’s companion document, AI 600-1 (the Generative AI Profile), extends the four functions specifically to GenAI risks — including confabulation, harmful bias, information security, and data privacy risks that don’t arise in traditional quantitative models.
For financial services firms deploying GenAI, NIST AI RMF + AI 600-1 is your governance foundation until regulators issue sector-specific guidance.
Framework #3: Treasury Financial Services AI RMF
What the FS AI RMF adds
Released on March 1, 2026 by the U.S. Treasury in partnership with the Cyber Risk Institute, and developed with input from over 100 financial institutions, the FS AI RMF is the operational layer that NIST AI RMF doesn’t provide. NIST tells you what functions a governance program should have; FS AI RMF tells you the 230 specific control objectives that financial services organizations should implement.
The framework ships in four components:
| Component | What It Does |
|---|---|
| AI Adoption Stage Questionnaire | Self-assessment to calibrate which controls apply based on AI maturity |
| Risk and Control Matrix (RCM) | 230 control objectives mapped to risk statements across 5 domains |
| Guidebook | Implementation guidance for how controls should be operationalized |
| Control Objective Reference Guide | Detailed reference for each control objective with examples |
The five FS AI RMF domains
The 230 control objectives are organized across five domains that map directly to financial services regulatory expectations:
- AI Governance — Committee structures, AI policies, model inventories, escalation paths
- Data Integrity & Bias Monitoring — Training data quality, fairness testing, disparate impact controls (ECOA/FHA alignment)
- Model Lifecycle Management — Development standards, validation, change management, decommissioning
- Third-Party AI Risk — Vendor due diligence, contractual controls, fourth-party model dependencies
- Consumer Protection — UDAAP controls, explainability for adverse action, complaint handling for AI-driven decisions
The AI Adoption Stage Questionnaire is particularly useful: it lets institutions scope their control obligations based on how deeply AI is embedded in their operations. An institution using AI only for back-office process automation has a fundamentally different risk profile than one using AI for credit decisioning or customer-facing interactions. The questionnaire calibrates the control burden accordingly.
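A questionnaire-driven scoping step like the one described above might look like the sketch below. The triggering logic is a hypothetical simplification — the actual questionnaire and its mapping to the five domains come from the FS AI RMF itself.

```python
# Sketch of adoption-stage scoping: map self-assessed AI usage to the subset
# of FS AI RMF control domains in scope. Trigger conditions are hypothetical,
# not the FS AI RMF's actual questionnaire logic.

DOMAIN_TRIGGERS = {
    "AI Governance": lambda u: True,  # always in scope once any AI is deployed
    "Data Integrity & Bias Monitoring": lambda u: u["credit_decisioning"] or u["pricing"],
    "Model Lifecycle Management": lambda u: u["in_house_models"],
    "Third-Party AI Risk": lambda u: u["vendor_ai_tools"],
    "Consumer Protection": lambda u: u["customer_facing"],
}

def domains_in_scope(usage: dict) -> list:
    return [domain for domain, trigger in DOMAIN_TRIGGERS.items() if trigger(usage)]

# Back-office automation via a vendor tool: a much narrower control burden
# than a credit-decisioning or customer-facing deployment.
back_office_only = {
    "credit_decisioning": False, "pricing": False,
    "in_house_models": False, "vendor_ai_tools": True,
    "customer_facing": False,
}
print(domains_in_scope(back_office_only))
# ['AI Governance', 'Third-Party AI Risk']
```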
The Three-Framework Crosswalk
Here’s how the three frameworks map to each other across the governance domains that matter most:
| Governance Domain | SR 26-02 / OCC 2026-13 | NIST AI RMF 1.0 | Treasury FS AI RMF |
|---|---|---|---|
| Governance structure | Board oversight, clear roles, independent challenge | GOVERN function: policies, culture, accountability | AI Governance domain: 50+ control objectives |
| Model inventory | Model registry expected | GOVERN: AI system inventory | RCM: AI use case cataloging |
| Risk framing | Use case documentation | MAP: context and impact analysis | Model Lifecycle: pre-deployment framing |
| Validation / TEVV | Independent validation, tiered by materiality | MEASURE: TEVV, bias, metrics | Data Integrity & Model Lifecycle domains |
| Ongoing monitoring | Emphasized; continuous for frequently updated models | MANAGE: monitoring thresholds, incident response | Monitoring controls across all 5 domains |
| GenAI / LLMs | Explicitly excluded | Covered by AI 600-1 GenAI Profile | Covered; includes LLM-specific controls |
| Vendor/third-party AI | Addressed (vendor models) | MAP + MANAGE: third-party dependencies | Third-Party AI Risk domain: dedicated controls |
| Consumer protection | Implicit (fair lending, UDAAP) | GOVERN: impact assessment | Consumer Protection domain: explicit controls |
| Scope | Banks >$30B, traditional models | All AI, all institution sizes | All AI, financial services context |
| Enforceability | Principles-based (no supervisory criticism for guidance alone) | Voluntary | Voluntary; examiner reference |
Building a Program That Covers All Three
Don’t build three separate programs
The temptation when you see three frameworks is to build three compliance programs. Resist it. The frameworks are complementary, not competing. A single AI risk management program — structured around NIST AI RMF’s four functions — can satisfy SR 26-02’s expectations for traditional models while incorporating the FS AI RMF’s operational controls.
Think of it this way:
- NIST AI RMF is your program architecture — the skeleton
- FS AI RMF is your operational control library — the muscles
- SR 26-02 is your examiner expectation for traditional ML — the regulatory floor
Coverage priority by model type
| Model Type | Primary Framework | Secondary Framework |
|---|---|---|
| Credit scoring (ML) | SR 26-02 (validation, governance) | FS AI RMF (bias monitoring, consumer protection) |
| Fraud detection (ML) | SR 26-02 | NIST AI RMF MEASURE (ongoing monitoring) |
| Stress testing / regulatory models | SR 26-02 (highest materiality) | FS AI RMF (documentation) |
| LLM / chatbot (customer-facing) | NIST AI RMF + AI 600-1 | FS AI RMF (consumer protection, UDAAP) |
| Agentic AI (autonomous workflows) | NIST AI RMF + AI 600-1 | FS AI RMF (third-party risk, governance) |
| Vendor-supplied AI tools | SR 26-02 (if >$30B) + FS AI RMF | NIST AI RMF (third-party dependencies) |
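The routing table above can be encoded as a lookup that an intake or inventory process might call. The keys and labels mirror the table; the fallback behavior for unknown model types is an assumption on my part, not something any of the frameworks specify.

```python
# Sketch: route a model type to its (primary, secondary) frameworks,
# mirroring the coverage table above. Keys are illustrative labels.

COVERAGE = {
    "credit_scoring_ml": ("SR 26-02", "FS AI RMF"),
    "fraud_detection_ml": ("SR 26-02", "NIST AI RMF MEASURE"),
    "stress_testing": ("SR 26-02", "FS AI RMF"),
    "llm_chatbot": ("NIST AI RMF + AI 600-1", "FS AI RMF"),
    "agentic_ai": ("NIST AI RMF + AI 600-1", "FS AI RMF"),
}

def frameworks_for(model_type: str) -> tuple:
    try:
        return COVERAGE[model_type]
    except KeyError:
        # Assumption: unknown types default to the sector-agnostic backbone
        # until a human triages them into the table.
        return ("NIST AI RMF", "FS AI RMF")

print(frameworks_for("llm_chatbot"))   # ('NIST AI RMF + AI 600-1', 'FS AI RMF')
print(frameworks_for("new_use_case"))  # ('NIST AI RMF', 'FS AI RMF')
```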
The 90-day priority checklist
If you’re starting fresh or rationalizing an existing program against all three frameworks, prioritize in this order:
Days 1–30: Governance baseline
- Complete the FS AI RMF AI Adoption Stage Questionnaire to determine your control scope
- Establish an AI use case inventory covering all production models (required by NIST GOVERN and FS AI RMF)
- Confirm your materiality tiering for existing quantitative models under SR 26-02
Days 31–60: Validation and testing gaps
- Map existing model validation processes against SR 26-02’s materiality framework — identify models that can shift to lighter governance
- Add TEVV documentation templates for any GenAI systems not covered by SR 26-02 (NIST MEASURE / FS AI RMF)
- Implement bias monitoring controls for any AI used in credit, pricing, or customer service decisions (FS AI RMF Data Integrity domain)
Days 61–90: Third-party AI and consumer exposure
- Run third-party AI vendor questionnaires for any vendor-supplied AI tools (FS AI RMF Third-Party AI Risk domain)
- Implement adverse action documentation for AI-driven consumer decisions (FS AI RMF Consumer Protection)
- Establish GenAI incident response procedures referencing NIST AI RMF MANAGE function
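The use case inventory called for in days 1–30 can be as simple as a structured record per deployment. The fields below are an illustrative minimum covering what the checklist implies — materiality tier, GenAI flag, vendor dependency — not a schema prescribed by any of the three frameworks.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative AI use case inventory record. Field names are assumptions,
# chosen to cover the checklist items above: SR 26-02 tiering, GenAI routing
# to NIST AI 600-1 controls, and vendor questionnaire triggers.

@dataclass
class AIUseCase:
    name: str
    owner: str
    materiality_tier: str       # "high" | "medium" | "low" (SR 26-02 tiering)
    is_genai: bool              # routes to NIST AI 600-1 controls if True
    vendor_supplied: bool       # triggers third-party questionnaire if True
    last_validation: date = None
    open_findings: list = field(default_factory=list)

    def needs_genai_controls(self) -> bool:
        return self.is_genai

    def needs_vendor_questionnaire(self) -> bool:
        return self.vendor_supplied

chatbot = AIUseCase(
    name="customer service chatbot",
    owner="retail digital",
    materiality_tier="medium",
    is_genai=True,
    vendor_supplied=True,
)
print(chatbot.needs_genai_controls(), chatbot.needs_vendor_questionnaire())  # True True
```

Even a spreadsheet with these columns satisfies the intent; the point is that one record per use case feeds all three frameworks at once.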
So What?
SR 26-02 represents regulators acknowledging that the 2011 model risk framework needed updating. But the GenAI exclusion is a significant admission: the agencies don’t yet know how to govern these systems in a banking context, and they’re not ready to commit to specific requirements.
That creates both a risk and an opportunity. The risk: if you wait for binding GenAI-specific guidance before building governance, you’ll be retrofitting controls after incidents or exam findings surface. The opportunity: institutions that build principled GenAI governance now — anchored to NIST AI RMF and operationalized through the FS AI RMF’s 230 control objectives — will be ahead of the regulatory curve when that guidance finally arrives.
For programs built around the NIST AI RMF’s four functions, this moment is actually validation: the architecture holds whether your models are traditional quantitative tools, LLMs, or autonomous agents. The crosswalk to SR 26-02 and FS AI RMF fills in the operational details.
If you haven’t read the FS AI RMF deep dive on all 230 control objectives or the NIST AI RMF MANAGE function post, those are the logical next reads. The crosswalk above shows you how the pieces fit. The individual framework deep dives show you how to implement each one.
Need a structured starting point for AI model governance? The AI Risk Assessment Template & Guide includes an AI use case inventory, pre-deployment checklist, and third-party AI vendor questionnaire designed for financial services teams navigating SR 26-02, NIST AI RMF, and FS AI RMF alignment.
Related Template
AI Risk Assessment Template & Guide
Comprehensive AI model governance and risk assessment templates for financial services teams.
Rebecca Leung
Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.