AI Risk

The Treasury's New AI Risk Framework Has 230 Control Objectives. Here's Where to Start.

March 18, 2026 · Rebecca Leung
AI risk · FS AI RMF · Treasury

TL;DR

  • The U.S. Treasury released the Financial Services AI Risk Management Framework (FS AI RMF) on March 1, 2026 — the most operationally specific federal AI guidance for financial services to date.
  • It includes 230 control objectives covering governance, data, model development, validation, monitoring, third-party risk, and consumer protection.
  • Most teams don’t need all 230 on day one. Start with the AI Adoption Stage Questionnaire to right-size your scope, then focus on the controls that map to your actual AI use cases.

230 Controls Sounds Like a Lot. It Is.

The Treasury Department dropped the FS AI RMF on March 1, and the compliance world has been quietly processing it ever since. Developed with over 100 financial institutions through the Cyber Risk Institute and the Financial Services Sector Coordinating Council, this isn’t another set of principles. It’s an operational framework with a 230-line risk and control matrix, a 400+ page control objective reference guide, and enough implementation detail to keep your second-line team busy for the rest of the year.

The framework adapts the NIST AI RMF — which was intentionally generic — into something purpose-built for regulated financial services. That matters because your examiners aren’t grading you against a generic AI governance checklist. They’re looking for controls that fit the regulatory environment you actually operate in.

But here’s the thing: 230 control objectives can feel paralyzing. Especially if you’re a team of one or two trying to stand up AI governance alongside everything else on your plate. The trick is knowing where to start.

What the Framework Actually Contains

The FS AI RMF ships in four pieces:

  • AI Adoption Stage Questionnaire — A maturity self-assessment that calibrates which controls apply to your organization based on how far along you are in deploying AI.
  • Risk and Control Matrix (RCM) — The core: 230 control objectives mapped across governance, data integrity, model lifecycle, validation, monitoring, third-party AI risk, and consumer protection.
  • Guidebook — Implementation guidance for how controls should be operationalized (not just what they are).
  • Control Objective Reference Guide — Over 400 pages of evidence examples. This is what your audit team will reference during exams.

Treasury also released an AI Lexicon alongside the framework to standardize terminology. When your compliance team and your AI vendor define “model explainability” differently, governance falls apart. The Lexicon exists to prevent that.

Start With the Adoption Stage Questionnaire

Before touching the 230-line matrix, take the AI Adoption Stage Questionnaire. It buckets your organization into adoption stages and tells you which controls are relevant now versus which ones you can phase in later.

This is critical for smaller institutions and fintechs. A Series B fintech using one AI-powered fraud detection vendor has a very different control surface than JPMorgan Chase. The framework acknowledges this — but only if you use the questionnaire to scope properly.

Use this table to get a quick read on where you likely land before you even open the questionnaire:

| Stage | Who You Are | Typical AI Footprint | Control Priority |
| --- | --- | --- | --- |
| Exploratory | Early fintech, community bank evaluating AI | Considering or piloting 1–2 AI tools; no models in production | Governance basics, inventory, vendor due diligence |
| Limited Deployment | Series A–B fintech, mid-size credit union | 1–3 AI tools live; limited internal model development | Governance structure, data quality, TPRM, adverse action |
| Moderate Deployment | Regional bank, payments company, insurer | Multiple AI tools + some internal models; some second-line oversight | Full model lifecycle, bias monitoring, validation, consumer protection |
| Advanced Deployment | Large bank, G-SIB, major fintech | Significant internal model development; dedicated model risk function | All 230 controls; continuous monitoring; systemic risk considerations |

Skip this step and you’ll drown in controls that don’t apply to you yet. The questionnaire is the fastest thing you can do this week — block 2 hours, answer honestly, and you’ll have a scoped control universe instead of a 230-item to-do list.

The Five Domains That Matter Most (For Most Teams)

After scoping, focus your first pass on the control domains most likely to come up in examinations.

1. AI Governance and Accountability

The framework expects clear ownership — not just a policy document that says “the board oversees AI.” It wants named roles, defined escalation paths, and documented governance decisions. Which means someone in your organization needs to own AI risk before anything else gets done.

Who typically owns this:

  • Large banks / regional banks: CRO or a dedicated Model Risk Management (MRM) function. At institutions with mature MRM programs (SR 11-7 vintage), AI governance often extends from the existing model risk framework.
  • Mid-size banks and credit unions: Chief Risk Officer or VP of Compliance, often with a secondary owner in IT/Engineering.
  • Fintechs (Series A–B): Usually falls to the Head of Compliance or VP of Risk — or whoever wore both hats before the compliance hire. At fintechs without a dedicated CRO, this is the first-compliance-hire’s problem by default.
  • Fintechs with a mature risk function: Increasingly seeing a dedicated AI Risk Lead or Head of Model Risk, reporting to either the CRO or CPO.

The FS AI RMF doesn’t mandate a specific org structure. But it does expect documented accountability. If you’re early-stage and one person owns everything, say so explicitly — a RACI with one name still beats “the team is collectively responsible.”

2. Data Integrity and Bias Monitoring

Garbage in, garbage out applies doubly to AI. The RCM includes controls around data lineage, quality validation, and bias detection — especially for models making decisions about consumers. If your AI touches lending, pricing, or servicing decisions, these controls aren’t optional.

Here’s what happens when you skip bias monitoring:

In 2023, the CFPB and DOJ filed a landmark action against a major mortgage lender for discriminatory lending patterns driven in part by algorithmic underwriting. The model had never been tested for disparate impact across protected classes. The settlement: $31 million in relief and mandated third-party audits of their AI models going forward.

In 2019, the New York State Department of Financial Services opened an investigation into Apple Card’s credit limit algorithm after widespread reports of women receiving dramatically lower limits than their husbands despite shared finances. Goldman Sachs (the issuing bank) faced years of scrutiny, and the episode became a template for how regulators now think about AI-driven consumer credit decisions.

The risk isn’t theoretical. The CFPB has been explicit: AI-powered credit decisions are subject to the Equal Credit Opportunity Act and the Fair Housing Act. A model that produces discriminatory outcomes is a discriminatory model — regardless of intent. And “we didn’t know the model was biased” is not a defense.

What actual bias monitoring looks like in practice (a code sketch follows the list):

  • Run demographic parity and equalized odds tests before production deployment
  • Establish a quarterly adverse action analysis — are denial rates consistent across protected classes?
  • Maintain data lineage documentation so you can trace a model’s inputs back to their source
  • Create a data quality checklist that gates model training data before use
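A minimal sketch of those first two tests in Python, assuming you have per-applicant decisions (1 = approved), true outcomes, and a protected-class attribute. The four-fifths ratio used as the alert threshold is a common disparate-impact rule of thumb, not an FS AI RMF requirement:

```python
from collections import defaultdict

def demographic_parity(decisions, groups):
    """Approval rate per group; large gaps flag potential disparate impact."""
    approved, total = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        total[g] += 1
        approved[g] += d
    return {g: approved[g] / total[g] for g in total}

def equalized_odds(decisions, labels, groups):
    """True-positive and false-positive rates per group."""
    cells = defaultdict(lambda: {"tp": 0, "fn": 0, "fp": 0, "tn": 0})
    for d, y, g in zip(decisions, labels, groups):
        cells[g][("tp" if d else "fn") if y else ("fp" if d else "tn")] += 1
    return {g: {"tpr": c["tp"] / max(c["tp"] + c["fn"], 1),
                "fpr": c["fp"] / max(c["fp"] + c["tn"], 1)}
            for g, c in cells.items()}

# Illustrative run: group B's approval rate is a third of group A's,
# breaching the four-fifths rule of thumb and triggering a review.
rates = demographic_parity([1, 1, 1, 1, 0, 0], ["A", "A", "A", "B", "B", "B"])
if min(rates.values()) / max(rates.values()) < 0.8:
    print("Approval-rate disparity exceeds four-fifths threshold:", rates)
```

The point isn’t the specific metrics; it’s that the tests run before deployment, the thresholds are written down, and the results are documented either way.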

3. Third-Party AI Risk

Using an AI vendor? The framework still holds you accountable for their controls. The third-party risk section maps directly to existing OCC and FDIC vendor management guidance, but adds AI-specific expectations: model transparency, explainability requirements, and ongoing monitoring obligations.

Why this matters more than standard vendor risk:

Standard vendor risk asks: Is the vendor financially stable? Do they have a SOC 2? AI vendor risk asks: Can you explain why the model denied this customer? What happens if the model drifts? Who validates the outputs?

Two scenarios where this bites organizations hard:

Scenario 1: The vendor doesn’t explain adverse actions. A bank uses a third-party AI model to approve credit card applications. A customer is denied. The CFPB expects an adverse action notice with specific reasons. The vendor says the model is proprietary and they can’t provide feature-level explanations. The bank is left holding an unexplainable denial and a potential ECOA violation. Their model opacity is your compliance problem.

Scenario 2: Silent model drift. A payroll processor’s AI model calculates overtime correctly — until a labor law changes in three states and the model doesn’t update. The bank that white-labeled the payroll service faces regulatory questions about systematic underpayment affecting 15,000 workers. The processor claims the bank should have been monitoring outputs. The bank assumed the vendor had it covered.

This is where most fintechs have the biggest gap. Buying an AI-powered tool doesn’t transfer the risk — it means you need to validate someone else’s controls. Practically: build AI-specific provisions into vendor contracts, require model performance reports quarterly, and include explainability requirements as a contractual term before signing.

4. Model Lifecycle Management

Development, testing, deployment, monitoring, retirement. The FS AI RMF expects documented controls at every stage. For teams that inherited AI models from engineering without any risk documentation, this section is your remediation roadmap.

Specific controls the framework expects (and examiners will ask about):

  • Version control: Every model iteration tagged and tracked. If you can’t tell me which version of the model made this decision 18 months ago, you have a control gap.
  • Validation sign-off: No model goes to production without documented second-line validation. This doesn’t have to be a 60-page report — a sign-off checklist with key test results is enough at early stages.
  • Drift detection thresholds: Set explicit thresholds (e.g., ±5% on key performance metrics) that trigger model review. Automated alerts are better than quarterly manual checks (a minimal check is sketched after this list).
  • Quarterly performance reviews: Schedule recurring reviews of live model outputs against original validation benchmarks. Document the findings.
  • Retirement protocols: Decommissioned models should have documented end-of-life procedures, including data retention requirements and handoff to any successor model.
  • Champion-challenger testing: When updating a model, run the new version against the current version on live data before full deployment. Document the comparison.
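To make the drift bullet concrete, here’s a minimal sketch of a threshold check, assuming you record one key metric per review period and keep the value from validation as the baseline. The ±5% band mirrors the illustrative figure above and should be calibrated per model and metric:

```python
def drift_exceeded(baseline: float, current: float, tolerance: float = 0.05) -> bool:
    """True if the metric has moved beyond the relative tolerance band."""
    return abs(current - baseline) / abs(baseline) > tolerance

# Illustrative run: AUC was 0.82 at validation, 0.74 this quarter (~10% drift).
if drift_exceeded(baseline=0.82, current=0.74):
    # In practice: alert the named model owner and open a documented review.
    print("Drift threshold breached: trigger model review")
```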

If you inherited models from engineering without documentation, your fastest path is a retroactive model inventory: list every AI tool in production, assign a risk tier, and prioritize documentation remediation from highest-risk down.

5. Consumer Protection

If AI drives any consumer-facing decision — credit approvals, pricing, claims processing, customer service — the framework includes specific controls around explainability, adverse action notices, and fairness testing.

What AI consumer harm actually looks like:

  • Adverse action without explanation: A credit union’s AI-powered loan decisioning tool denies an applicant with no explanation beyond “system decision.” ECOA requires specific reasons. “The algorithm said no” is not a reason (see the reason-code sketch after this list).
  • Dynamic pricing discrimination: An insurance carrier’s AI quotes premiums based on behavioral data that correlates heavily with ZIP code, a proxy for race. Even if ZIP code isn’t an explicit input, regulators can find disparate impact. Several state insurance commissioners opened investigations in 2024 over exactly this.
  • AI-driven collections harassment: A debt collection servicer used an AI to optimize contact cadence. The model learned that calling certain segments 8–10 times per day produced better collections outcomes. It also produced a wave of TCPA complaints and a $4 million CFPB action.
  • Chatbot misadvice: A fintech’s AI customer service bot gave incorrect information about Reg E dispute rights. Customers who relied on it missed deadlines. The CFPB’s view: the institution is responsible for what its chatbot says, even if it’s automated.
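On the first bullet: if your model (or vendor) can surface per-decision feature attributions, turning them into specific reasons is mechanical. A minimal sketch, with a hypothetical attribution dict and an illustrative reason-code table; real programs map these to the principal reasons Reg B adverse action notices require:

```python
# Hypothetical mapping from model features to consumer-readable reasons.
REASON_CODES = {
    "debt_to_income": "Income insufficient relative to debt obligations",
    "delinquency_count": "Delinquent past or present credit obligations",
    "credit_history_length": "Length of credit history",
    "utilization_ratio": "Proportion of balances to credit limits too high",
}

def adverse_action_reasons(contributions: dict, top_n: int = 4) -> list:
    """Return reasons for the features that pushed hardest toward denial."""
    worst = sorted(contributions.items(), key=lambda kv: kv[1])[:top_n]
    return [REASON_CODES.get(name, name) for name, score in worst if score < 0]

# Illustrative attribution output for one denied applicant.
print(adverse_action_reasons({
    "debt_to_income": -0.41,       # largest push toward denial
    "delinquency_count": -0.22,
    "credit_history_length": -0.05,
    "utilization_ratio": 0.08,     # pushed toward approval; excluded
}))
```

If your vendor can’t produce per-decision attributions at all, that’s the contractual gap flagged in the third-party section above.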

The consumer protection controls in the FS AI RMF aren’t separate from your existing UDAP and fair lending obligations — they’re the operational implementation of those obligations in an AI context.

What This Means for Exams

The FS AI RMF isn’t law. But it’s the most detailed operational guidance the federal government has produced on AI in financial services, and it was built with examiner expectations in mind. The 400+ page control objective reference guide includes evidence examples: exactly the kind of documentation examiners expect to see.

State regulators are already looking to frameworks like this to benchmark emerging best practices. The CFPB, OCC, and FDIC have all signaled that AI governance is on their exam priority list for 2026. Building your AI governance program against the FS AI RMF now means fewer surprises in your next exam.

Your 30/60/90/120-Day Implementation Roadmap

This isn’t a three-bullet “phase approach.” Here’s what a real implementation looks like:

Days 1–30: Know What You Have

Responsible party: Head of Compliance / Chief Risk Officer / First Compliance Hire

  • Complete the AI Adoption Stage Questionnaire — this determines your control scope
  • Build an AI inventory: every AI tool, model, and vendor in production or under evaluation. Include the vendor name, use case, data inputs, and whether it touches consumer decisions (a schema sketch follows this list)
  • Assign risk tiers to each: High (consumer credit/decisioning), Medium (internal operations with some regulatory exposure), Low (internal productivity tools)
  • Identify who currently “owns” each AI tool — if the answer is “no one,” that’s your first finding
  • Pull your existing vendor contracts for AI tools and flag gaps in explainability, performance reporting, and model change notification
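A minimal sketch of one inventory record and the tiering logic described above; the tool and vendor names are hypothetical, and a spreadsheet with the same columns works just as well at this stage:

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    tool_name: str              # vendor product or internal model name
    vendor: str                 # "internal" for in-house models
    use_case: str               # the business decision it supports
    data_inputs: list           # categories of data the model consumes
    consumer_decisioning: bool  # touches credit, pricing, or servicing?
    regulatory_exposure: bool   # internal ops with regulatory exposure?
    owner: str                  # a named individual, not "the team"
    risk_tier: str = "unassigned"

def assign_tier(entry: AIInventoryEntry) -> str:
    """Tiering per the roadmap: consumer decisioning dominates."""
    if entry.consumer_decisioning:
        return "High"
    return "Medium" if entry.regulatory_exposure else "Low"

entry = AIInventoryEntry(
    tool_name="FraudScreen",          # hypothetical
    vendor="Acme Risk AI",            # hypothetical
    use_case="transaction fraud scoring",
    data_inputs=["transaction history", "device data"],
    consumer_decisioning=False,
    regulatory_exposure=True,
    owner="Head of Compliance",
)
entry.risk_tier = assign_tier(entry)  # -> "Medium"
```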

Milestone: Completed inventory with risk tiers; documented ownership assignments; initial gap list.


Days 31–60: Governance and Policy Infrastructure

Responsible party: CRO / Head of Compliance + Legal

  • Draft or update your AI Governance Policy — at minimum, scope, ownership, risk appetite, and escalation paths
  • Establish an AI Risk Committee or assign AI governance to an existing risk committee with explicit mandate
  • Build a model risk policy if you don’t have one — even a 3-page policy beats nothing for exam purposes
  • For your top 3–5 highest-risk AI use cases: pull the relevant FS AI RMF control objectives and run a gap assessment against current state
  • Begin vendor remediation conversations: for any AI vendor used in consumer-facing decisions, request model performance reports and explainability documentation

Milestone: AI governance policy drafted; top-5 gap assessment complete; vendor outreach initiated.


Days 61–90: Controls in Flight

Responsible party: Model Risk / Compliance + Engineering

  • Stand up validation sign-off process for any model currently in production without documented validation — even a retroactive sign-off with current performance data is better than none
  • Implement drift detection on your highest-risk models — set automated thresholds and assign ownership of alerts
  • Build or update adverse action notice procedures for any AI-driven credit, pricing, or servicing decisions
  • Run a bias/fairness test on your highest-risk consumer-facing model — document results even if the news isn’t great (demonstrating awareness is better than claiming ignorance)
  • Document your model inventory in a format suitable for examiner review — tool, version, inputs, outputs, validation status, owner

Milestone: Live models with documented validation; drift monitoring live for top risks; adverse action procedures updated.


Days 91–120: Monitoring, Evidence, and Ongoing Cadence

Responsible party: Second-Line Risk / Compliance

  • Schedule quarterly model performance reviews — calendar them now with named attendees and a documentation template
  • Build an AI risk reporting package for your risk committee or board — even a one-page dashboard beats verbal updates
  • Conduct your first annual AI risk assessment using the FS AI RMF as your benchmark — this becomes your baseline for future exams
  • Update your TPRM program to include AI-specific due diligence questions for all vendors with AI-powered tools
  • Confirm your model retirement procedures — what happens when a model is decommissioned?

Milestone: Quarterly cadence established; AI risk assessment complete; TPRM updated; board-level reporting live.


So What?

The FS AI RMF is the clearest signal yet that AI governance in financial services is moving from “nice to have” to “show me the controls.” The 230 control objectives are comprehensive, but they’re not meant to be implemented all at once. Scope it, prioritize it, and build incrementally.

The institutions that start now — even with a small team and a focused scope — will be the ones that walk into their next exam with confidence instead of scrambling to catch up. Download the framework at cyberriskinstitute.org, take the questionnaire, and start with what actually applies to you.

Need a head start? The AI Risk Assessment Template & Guide gives you a ready-made framework for assessing AI risks across your organization, with built-in mapping to NIST AI RMF principles. If you’re also managing vendor AI risk, the Third-Party Risk Management Kit includes AI-specific vendor due diligence questions you can drop into your next contract review.

FAQ

Is the FS AI RMF mandatory?

Not yet. It’s guidance, not regulation. But it was developed with examiner expectations in mind, and state regulators are using frameworks like this to set supervisory benchmarks. The CFPB, OCC, and FDIC have all signaled AI governance is a 2026 exam priority. Treating it as optional is a gamble most risk teams shouldn’t take.

Do all 230 control objectives apply to every institution?

No. The AI Adoption Stage Questionnaire helps you scope which controls are relevant based on your organization’s AI maturity and deployment level. A fintech with one AI vendor has different requirements than a G-SIB running proprietary models. Use the table above to estimate your stage before you even open the questionnaire.

How does the FS AI RMF relate to the NIST AI RMF?

The FS AI RMF builds on and adapts the NIST AI RMF specifically for financial services. It takes NIST’s broad principles and translates them into operational controls tailored to the regulatory, supervisory, and risk management realities of banking, payments, and financial services. Think of it as NIST AI RMF + Fed SR letters + CFPB fair lending guidance, rolled into one framework.

What if I don’t have a model risk management function?

You’re not alone — most Series A–B fintechs don’t. Start with ownership (someone needs to be accountable), an inventory (know what AI you have), and your highest-risk use case (anything touching consumer credit decisions gets the most scrutiny). You don’t need a full MRM program on day one. You need a documented starting point.

Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.