Finance | Model Governance
April 28, 2026 | 8 min read

How to navigate SR 26-2

What SR 26-2 leaves unsaid and how financial institutions can prepare


The regulator's job is to set expectations. The institution's job is to manage risk. SR 26-2 changed the first of these. It did not change the second. Exam findings still land where the risk is. Model failures still cause losses. Internal audit questions, board risk appetite reviews, conversations with regulators about decisions made years ago – all of these still happen, and the answers still have to be defensible on their own terms.

A principles-based document is worth reading twice: once for what it says, and once for what it leaves unsaid. Everything in the second read is work sitting on the institution's side. The six takes below cover where the regulation is silent, and what institutions will need to do about it.

Take 1. The model vs. rules engine line is drawn by code type, not by risk

SR 26-2 excludes deterministic rule-based software from its definition of a model. An ML model that approves credit is in scope. A system making the same decision with hand-coded rules is not. Both produce decisions that can hurt customers and the bank. Whether the code is statistical or deterministic has nothing to do with how much risk the decision carries.

The right governance boundary is drawn by decision risk, not by code taxonomy. An MRM function that stops at the ML line under-scopes its actual exposure. For more on how governance discipline can span the full decision stack, see our whitepaper on Model Risk Management.
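One way to make that boundary concrete is to key the scoping decision to the decision's risk and deliberately ignore the implementation type. The sketch below is illustrative only; the decision categories and function names are hypothetical, not drawn from SR 26-2.

```python
# Illustrative scoping check: governance scope keyed to decision risk,
# independent of whether the system is statistical or rule-based.
HIGH_RISK_DECISIONS = {"credit_approval", "pricing", "capital_allocation"}

def in_mrm_scope(decision_type: str, implementation: str) -> bool:
    # `implementation` ("ml_model" or "rules_engine") deliberately does not
    # affect the answer; only the risk of the decision does.
    return decision_type in HIGH_RISK_DECISIONS

# An ML model and a hand-coded rules engine making the same credit
# decision land in the same governance scope.
assert in_mrm_scope("credit_approval", "ml_model") == \
       in_mrm_scope("credit_approval", "rules_engine")
```

The point of the sketch is the unused parameter: if the scoping function ever branches on `implementation`, the boundary has drifted back to code taxonomy.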

Take 2. Aggregate risk got a paragraph, not a framework

Aggregate model risk is the cascading exposure created when models share assumptions, data sources, or methodologies. SR 26-2 names it without giving institutions a way to measure it. Ensemble models, chained pipelines, model lineage, cascade effects when an upstream model feeds three downstream ones – none of it gets a framework.

Naming a phenomenon is not the same as giving practitioners a way to reason about it. Where the regulation does not operationalize aggregate risk, institutions have to. That means tracking model-to-model dependency as a live graph rather than a static inventory, something a modern model governance platform handles natively, but a GRC-plus-spreadsheet stack cannot.
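A minimal sketch of what a live dependency graph can answer that a static inventory cannot: given one degraded upstream model, which models are transitively exposed? The model names below are invented for illustration.

```python
from collections import defaultdict, deque

class ModelDependencyGraph:
    """Live graph of model-to-model data dependencies (illustrative)."""

    def __init__(self):
        # model -> set of models that consume its output
        self._downstream = defaultdict(set)

    def add_dependency(self, upstream: str, downstream: str) -> None:
        self._downstream[upstream].add(downstream)

    def cascade(self, model: str) -> set:
        """All models transitively affected if `model` degrades (BFS)."""
        affected, queue = set(), deque([model])
        while queue:
            for child in self._downstream[queue.popleft()]:
                if child not in affected:
                    affected.add(child)
                    queue.append(child)
        return affected

# One upstream PD model feeding three downstream consumers,
# one of which feeds a fourth model in turn.
g = ModelDependencyGraph()
g.add_dependency("pd_model", "pricing_model")
g.add_dependency("pd_model", "capital_model")
g.add_dependency("pd_model", "limit_engine")
g.add_dependency("pricing_model", "rwa_report")

exposed = g.cascade("pd_model")  # four models, including the chained rwa_report
```

A static inventory lists `rwa_report` and `pd_model` as unrelated rows; the graph surfaces the second-order exposure between them in one traversal.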

Take 3. The spreadsheet carve-out is where errors actually happen

A governance boundary that stops at the model factory no longer matches where errors actually happen. SR 26-2 excludes simple spreadsheet calculations from its model definition, but in most banks, a meaningful share of regulatory filings pass through a spreadsheet at some point. The VaR model behind JPMorgan's 2012 $6.2 billion London Whale loss ran as a chain of Excel workbooks, with a formula that divided by a sum instead of an average and understated volatility as a result. Under the new definition, the spreadsheet where the error lived is out of scope.
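The arithmetic of that error is worth seeing once, because it shows how quietly a one-cell mistake halves a risk figure. The rates below are made up; only the structure of the bug mirrors the reported formula error.

```python
# Illustrative only: the kind of one-cell error SR 26-2's spreadsheet
# carve-out leaves ungoverned. Rates are invented.
old_rate, new_rate = 0.042, 0.046

# Intended formula: divide the change by the AVERAGE of the two rates.
intended = (new_rate - old_rate) / ((old_rate + new_rate) / 2)

# Buggy formula: divide by the SUM instead.
buggy = (new_rate - old_rate) / (old_rate + new_rate)

# The sum is exactly twice the average, so the figure is exactly halved,
# and every downstream number built on it understates risk by half.
assert abs(buggy - intended / 2) < 1e-12
```

Nothing in the spreadsheet flags this; the cell computes, the workbook chains on, and the understatement propagates.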

SR 26-2 does not require institutions to govern spreadsheet-based calculations. Their own risk appetite should.

Take 4. GenAI is out of SR 26-2 scope, not out of governance scope

SR 26-2 carves out generative and agentic AI as novel and rapidly evolving. Out of scope is not the same as ungoverned. Institutions still need to govern GenAI and agentic systems under a parallel framework that SR 26-2 doesn't prescribe.

The practical posture is a parallel bridge track that mirrors SR 26-2 principles where they fit and adds GenAI-specific controls where the regulation is silent. Bridge controls that matter most: prompt governance, output filtering, hallucination monitoring, sensitive-data leakage protection, and human-in-the-loop gating for any use case with material decisions.

Run it in the same inventory, under the same CRO, flagged so the scope boundary is auditable.
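What "same inventory, auditable boundary" can look like in practice is a single scope flag on each inventory record, so the carve-out is a filter rather than a separate system. The record fields and names below are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class InventoryRecord:
    """One entry in a shared model inventory; fields are illustrative."""
    model_id: str
    owner: str
    sr_26_2_in_scope: bool   # audit-ready flag for the regulatory boundary
    governance_track: str    # e.g. "SR 26-2" or "GenAI bridge"

records = [
    InventoryRecord("pd-credit-v3", "credit-risk", True, "SR 26-2"),
    InventoryRecord("kyc-doc-summarizer", "ops-ai", False, "GenAI bridge"),
]

# Same inventory, same CRO; the scope boundary is one auditable filter
# away when a regulator or internal audit asks for it.
bridge_track = [r.model_id for r in records if not r.sr_26_2_in_scope]
```

The design choice is that out-of-scope systems are still rows in the same table; the carve-out lives in a flag, not in a second, unreconciled inventory.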

Take 5. US governance got lighter while everyone else tightened

Institutions that plan to the strictest applicable standard rather than the most lenient will be better positioned through every regulatory cycle, not just this one. SR 26-2 landed the same year AI governance moved in the opposite direction in most serious jurisdictions. The EU AI Act's high-risk provisions take effect in August 2026 with extraterritorial reach for any institution serving EU customers. Colorado and Texas have AI-specific laws active this year.

SR 26-2 may feel like a lighter load, but tighter rules already exist in other jurisdictions, and more are coming.

Take 6. The burden of proof just moved to the institution

SR 26-2 is structured as principles rather than rules. The practical consequence is that more interpretive work now sits with the institution rather than with the guidance. This is a harder standard, and institutions that rise to it will pull ahead.

The intellectual architecture from SR 11-7 (effective challenge, three validation pillars, conceptual soundness, outcomes analysis, ongoing monitoring) is preserved. What changed is how much of the standard each institution now has to articulate for itself.

If you're leading AI and ML, the same regulation reads differently from your seat

SR 26-2 is an opportunity on the deployment side and an imperative on the governance side. Here's how to read it:

  • The deployment window is real. The US federal floor for GenAI is genuinely lighter for the next 12 to 18 months. Use the window to deploy faster on classical ML and to pilot GenAI use cases.
  • The governance bill is coming due. State and international frameworks are not standing still. The institutions that build governance architecture alongside deployment, rather than retrofitting it afterward, will absorb whatever comes next rather than having to rebuild.
  • Change the shape of the conversation with your MRM counterpart. Under SR 11-7, that conversation was often procedural. SR 26-2 provides both sides with supervisory cover to work differently, with validators closer to development, governance embedded in the model lifecycle, and evidence generated continuously rather than assembled at the end.

A useful first step: Propose a joint pilot. One GenAI use case run end-to-end through a shared governed platform, with both teams measuring cycle time, evidence quality, and finding rate against the existing process. One working example changes the conversation more than any strategic discussion.

The real advantage

Regulation is not static, internal policy is not static, and technology is moving faster than both. An institution whose governance function can absorb a regulatory change, update policy and evidence collection, and keep the model book running without a six-month rebuild is compounding an advantage over one that cannot.

The institutions that win the next regulatory cycle will be those that treat adaptability as an offensive capability rather than overhead. SR 26-2 is the starting point. What to do about it – the platform, the process architecture, and the framework – is covered in Part 3 of this series.

SR 26-2 leaves you with questions. On May 19th at Rev New York, we can discuss them together with David Palmer, author of SR 11-7, and MRM leaders with experience at Capital One, TIAA, and New York Life. Come join the conversation. Register now →

Nicholas Goble

Nicholas Goble, Ph.D. leads Solution Architecture for Financial Services & Insurance at Domino Data Lab, bringing more than ten years of experience across quantitative finance, derivatives modeling, and fintech innovation. At Venerable, Nicholas managed Quantitative Research and Development, where he established quant research capabilities from the ground up and guided teams in building sophisticated trading platforms and pricing engines. Before that, he was a Senior Quantitative Researcher at Chatham Financial, focusing on valuation methodologies and bringing machine learning models into live trading environments. Nicholas holds a Ph.D. in Physics from Case Western Reserve University.
