SR 26-2
What is SR 26-2?
SR 26-2 is the Revised Guidance on Model Risk Management, issued jointly on April 17, 2026, by the Federal Reserve, the OCC, and the FDIC. It replaces SR 11-7 as the baseline framework governing how banking organizations identify, validate, monitor, and govern the quantitative models they rely on for credit decisions, regulatory reporting, capital calculations, stress testing, and more.
The guidance introduces three key shifts:
- Narrower model definition: tighter criteria for what qualifies as a model
- Materiality-based oversight: validation intensity tied to model risk, not fixed annual cycles
- Explicit GenAI carveout: generative and agentic AI require separate governance frameworks and are placed outside the scope of this guidance; institutions apply their existing risk management practices to govern them
Why SR 26-2 matters for enterprise AI
Financial institutions rely on models across high-stakes decisions — credit underwriting, fraud detection, capital planning, and BSA/AML compliance. When those models fail, the consequences are financial, regulatory, or both.
Model materiality, determined by the combination of a model's financial exposure and the consequential weight of the decisions it drives, is the concept SR 26-2 uses to set proportional oversight requirements; it is defined in full in the Model materiality and tiered oversight section below. This proportional oversight lets institutions concentrate resources where risk is highest and stop applying uniform validation effort to every tool in the enterprise.
For AI and risk teams, the practical implication is a shift from calendar-driven reviews to a continuous, risk-informed discipline.
Who SR 26-2 applies to
SR 26-2 is expected to be most relevant to banking organizations with more than $30 billion in total assets regulated by the Federal Reserve, the OCC, or the FDIC.
Smaller institutions may still reference it when:
- Model usage is unusually complex or high-risk
- They operate capital markets, securitization, or derivatives businesses
- They have a significant BSA/AML model infrastructure
Although SR 26-2 is explicitly non-binding, it remains the benchmark that regulators and auditors use to assess sound practice. Non-bank fintechs frequently adopt it voluntarily for institutional credibility and regulatory readiness.
What counts as a model under SR 26-2
SR 26-2 tightened the definition of "model." All three criteria must be met:
| Criterion | Description |
| --- | --- |
| Complex quantitative method | Not a simple formula or lookup table |
| Theoretical underpinning | Applies statistical, economic, or financial theory |
| Quantitative output | Produces a number, score, probability, or valuation |
Explicitly excluded:
- Simple arithmetic calculations
- Deterministic rule-based processes
- Spreadsheets without embedded statistical methods
- Software without a theoretical underpinning
Getting this classification right is the most consequential scoping decision an MRM team makes: miss a model and the institution carries unsurfaced risk; classify a spreadsheet as a model and the institution wastes validation effort.
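The three-prong test lends itself to an explicit checklist that MRM teams can apply consistently across an inventory. A minimal sketch in Python; the class, field names, and example systems are illustrative assumptions, not from the guidance:

```python
from dataclasses import dataclass

@dataclass
class CandidateSystem:
    """A tool under review for model classification (illustrative schema)."""
    name: str
    uses_complex_quantitative_method: bool  # not a simple formula or lookup table
    has_theoretical_underpinning: bool      # statistical, economic, or financial theory
    produces_quantitative_output: bool      # number, score, probability, or valuation

def is_model(system: CandidateSystem) -> bool:
    """All three SR 26-2 prongs must hold for the system to count as a model."""
    return (
        system.uses_complex_quantitative_method
        and system.has_theoretical_underpinning
        and system.produces_quantitative_output
    )

# A spreadsheet doing only deterministic arithmetic fails the first two prongs.
spreadsheet = CandidateSystem("loan-calc.xlsx", False, False, True)
pd_model = CandidateSystem("retail-pd-logit", True, True, True)

print(is_model(spreadsheet))  # False -> out of scope
print(is_model(pd_model))     # True  -> in scope, must be inventoried
```

Encoding the test this way makes the scoping decision auditable: the inventory records which prong a tool failed, rather than a bare in/out judgment.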
Key differences: SR 11-7 vs. SR 26-2
| What changed | SR 11-7 | SR 26-2 |
| --- | --- | --- |
| Scope | All supervised institutions | Most relevant to banks with over $30B in total assets (smaller banks with significant model-risk exposure may also be in scope) |
| Model definition | Broad: captured most quantitative tools | Tighter: requires all three prongs |
| Oversight cadence | De facto annual review cycle | Risk-based, tied to model materiality |
| Binding character | Directive language throughout | Explicitly non-binding |
| Validator independence | Organizational separation preferred | Rigor and objectivity over org chart |
| GenAI/agentic AI | Not addressed | Explicitly carved out; separate governance required |
| BSA/AML models | Governed separately under SR 21-8 | Absorbed into the SR 26-2 framework |
What SR 26-2 says about generative AI and agentic AI
SR 26-2 places generative AI and agentic AI outside the scope of the guidance. The letter refers to them as models but states that, because they are novel and rapidly evolving, they are not within its scope and are not subject to its traditional MRM controls.
This does not mean they are ungoverned. The guidance directs institutions to apply their existing risk management and governance practices to determine appropriate controls for systems not covered. The principles in SR 26-2 still apply to traditional statistical and quantitative models, and to non-generative, non-agentic AI models.
The coverage gap is the governance exposure that results when GenAI and agentic systems are excluded from traditional MRM controls, but no parallel framework has been established to govern them, particularly for customer-facing or decision-driving applications where output errors carry real financial or reputational risk.
A sound parallel framework typically includes:
- Separate system inventory: catalog GenAI and agentic systems with clear ownership
- Risk tiering: assess exposure and decision impact as you would for traditional models
- Prompt and input governance: controls over what goes into the system
- Output validation: hallucination monitoring, factuality checks, guardrails
- Red-team testing: adversarial probing before and after deployment
- Human-in-the-loop controls: mandatory for high-risk or irreversible actions
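The components above can be captured as a single inventory record with a deployment gate. A minimal sketch with hypothetical field names and an assumed rule that high-tier systems require red-teaming and a human in the loop; none of this schema comes from the guidance itself:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class GenAISystemRecord:
    """One entry in a separate GenAI/agentic system inventory (illustrative)."""
    name: str
    owner: str                 # clear ownership, per the inventory component
    tier: RiskTier             # risk tiering by exposure and decision impact
    prompt_controls: list[str] = field(default_factory=list)  # input governance
    output_checks: list[str] = field(default_factory=list)    # factuality, guardrails
    red_teamed: bool = False   # adversarial probing done?
    human_in_loop: bool = False

def deployment_gate(rec: GenAISystemRecord) -> bool:
    """Assumed policy: high-tier systems must be red-teamed and keep a human in the loop."""
    if rec.tier is RiskTier.HIGH:
        return rec.red_teamed and rec.human_in_loop
    return True

chatbot = GenAISystemRecord("dispute-assistant", "ops-ai", RiskTier.HIGH,
                            red_teamed=True, human_in_loop=False)
print(deployment_gate(chatbot))  # False: missing human-in-the-loop control
```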
Traditional machine learning models (classifiers, gradient-boosted models, and neural networks used for credit or fraud) remain fully in scope under SR 26-2.
Looking ahead: the agencies have signaled that they plan to issue a request for information (RFI) on model risk management, including banks' use of AI, generative AI, and agentic AI. Institutions building GenAI governance now should expect the regulatory posture to evolve.
The three pillars of model validation
SR 26-2 retains the three validation components from SR 11-7:
Conceptual soundness
Reviews model design, assumptions, data selection, and development approach. Interpretability analysis and benchmarking against alternative methods are recognized tools.
Outcomes analysis
Compares model outputs to real-world results using back-testing, outlier analysis, and prediction-versus-actual comparisons. Method is left to the institution's discretion.
Ongoing monitoring
Continuously evaluates model performance as conditions, data, and exposures evolve. Cadence is risk-based, not defaulted to annual.
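The outcomes-analysis pillar, for example, reduces to comparing predicted values against realized results. A minimal back-testing sketch; the 2% tolerance is an illustrative threshold, since SR 26-2 leaves the method to the institution's discretion:

```python
def backtest(predicted_pd: list[float], defaults: list[int]) -> dict:
    """Compare predicted default probabilities to realized outcomes.

    A gap beyond the tolerance flags the model for deeper review.
    The 0.02 tolerance is an illustrative assumption, not a regulatory figure.
    """
    expected = sum(predicted_pd) / len(predicted_pd)   # average predicted PD
    observed = sum(defaults) / len(defaults)           # realized default rate
    gap = abs(expected - observed)
    return {"expected": expected, "observed": observed, "flag": gap > 0.02}

# Predicted ~3.5% average PD, but 1 of 4 obligors defaulted (25% observed).
result = backtest([0.03, 0.05, 0.02, 0.04], [0, 0, 0, 1])
print(result["flag"])  # True: prediction-versus-actual gap exceeds tolerance
```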
Model materiality and tiered oversight
SR 26-2 defines model risk across four drivers: inherent risk, exposure, purpose, and use.
Inherent risk: complexity, number of assumptions, data quality, and interpretability. A deep learning ensemble trained on sparse data carries more inherent risk than a well-documented logistic regression model.
Model exposure: the financial footprint. A model driving $10 billion in credit decisions carries greater exposure than one that informs an internal propensity score.
Model purpose: the weight of the decision. Models supporting regulatory reporting, capital adequacy, or fair-lending determinations are treated as having a higher purpose than those used for internal research.
Model use: even a sound model can exhibit high risk when misapplied or used outside its design. Use is a distinct driver of model risk, separate from the model's internal characteristics.
Exposure + purpose = materiality, which determines:
- Validation depth
- Monitoring frequency
- Governance sign-off requirements
Material models get full validation cycles and senior oversight. Immaterial models may receive lightweight tracking with escalation triggers if use expands.
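The materiality logic above can be sketched as a small tiering function that maps exposure and purpose to an oversight package. The $1B exposure threshold and the 1-to-3 purpose weights are illustrative assumptions, not figures from the guidance:

```python
def materiality(exposure_usd: float, purpose_weight: int) -> str:
    """Combine financial exposure with decision weight (1=low .. 3=high).

    Thresholds are illustrative assumptions, not from SR 26-2.
    """
    high_exposure = exposure_usd >= 1_000_000_000
    # High purpose (regulatory reporting, capital, fair lending) or large
    # exposure combined with consequential decisions -> material.
    if purpose_weight == 3 or (high_exposure and purpose_weight >= 2):
        return "material"
    return "immaterial"

def oversight(tier: str) -> dict:
    """Materiality drives validation depth, monitoring cadence, and sign-off."""
    if tier == "material":
        return {"validation": "full cycle", "monitoring": "risk-based, frequent",
                "sign_off": "senior management"}
    return {"validation": "lightweight tracking", "monitoring": "trigger-only",
            "sign_off": "model owner"}

# A $10B credit-decision model with capital-adequacy purpose is material.
print(oversight(materiality(10_000_000_000, 3)))
```

The escalation triggers mentioned above would sit in the monitoring layer: if an immaterial model's exposure or purpose changes, re-run the tiering function and upgrade its oversight package.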
How governance works under SR 26-2
SR 26-2 trimmed prescriptive governance detail substantially compared to SR 11-7, but reinforced four core expectations:
- Board-level accountability for model risk
- Senior management ownership of the MRM program
- Comprehensive model inventory across the enterprise
- Documentation sufficient to support continuity of operations and remediation tracking
Two important shifts from SR 11-7:
Effective challenge: the principle that models must be critically reviewed by experts with the independence, expertise, and organizational standing to drive change is preserved from SR 11-7 but redefined. SR 26-2 makes clear it is a function of review quality, not of where the validator sits in the org chart.
Internal audit as oversight, not duplication: Audit evaluates whether the MRM program is rigorous and effective, not re-running validation work.
BSA/AML models (transaction monitoring, sanctions screening, CDD) are now inventoried and tiered alongside all other models under SR 26-2.
FAQ
What is SR 26-2 in simple terms?
SR 26-2 is updated U.S. guidance for managing model risk. It defines how banks should validate, monitor, and govern quantitative models, using a more targeted, materiality-based approach than its predecessor, SR 11-7.
Is SR 26-2 mandatory?
No. SR 26-2 is explicitly non-binding. Non-compliance alone will not trigger supervisory criticism. However, regulators may still act when weak MRM practices lead to unsafe or unsound outcomes. Examiners and auditors treat it as the standard for sound practice. The agencies' authority to act on unsafe or unsound practices is grounded in 12 CFR Part 4 Subpart F Appendix A (OCC), 12 CFR Part 262 Appendix A (Federal Reserve), and 12 CFR Part 302 Appendix A (FDIC).
Does SR 26-2 apply to my institution?
It is expected to be most relevant to banks with more than $30 billion in total assets regulated by the Federal Reserve, the OCC, or the FDIC.
How does SR 26-2 treat generative AI?
GenAI and agentic AI are placed outside the scope of SR 26-2. The guidance refers to them as models but states that because they are novel and rapidly evolving, they are not within its scope. Institutions are expected to apply their existing risk management and governance practices to govern them. The principles in SR 26-2 still apply to non-generative, non-agentic AI models.
What is an effective challenge under SR 26-2?
Effective challenge means models are reviewed by experts with the independence, expertise, and organizational standing to question assumptions and drive improvements. SR 26-2 emphasizes the quality of that challenge over strict structural separation from model developers.
What is aggregate model risk?
Aggregate model risk is the risk arising from dependencies across multiple models: shared data sources, shared assumptions, and shared methodologies. SR 26-2 elevates this to a first-class governance concept. Institutions are expected to map model dependencies and ensure that a defect in one model doesn't propagate silently across a portfolio of decisions.
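Mapping dependencies can start as a simple graph traversal: given a defective data source or feeder model, find everything downstream. A minimal sketch with a hypothetical dependency map; the model and feed names are invented for illustration:

```python
# Illustrative dependency map: model -> upstream data sources / feeder models.
dependencies = {
    "credit-pd": ["bureau-feed", "macro-scenario"],
    "capital-calc": ["credit-pd", "macro-scenario"],
    "fraud-score": ["txn-stream"],
}

def downstream_impact(defective: str) -> set[str]:
    """Return every model that transitively depends on a defective input."""
    impacted: set[str] = set()
    frontier = {defective}
    while frontier:
        node = frontier.pop()
        for model, deps in dependencies.items():
            if node in deps and model not in impacted:
                impacted.add(model)
                frontier.add(model)  # its own dependents are impacted too
    return impacted

# A flawed macro scenario silently taints both credit-pd and capital-calc.
print(downstream_impact("macro-scenario"))
```

Running this for every shared input gives the portfolio-level view SR 26-2 expects: one defective assumption surfaces every decision it touches.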
How does SR 26-2 change model monitoring?
Monitoring frequency is no longer fixed. It is based on model materiality, change velocity, and available performance data. Material models may warrant more frequent review; immaterial models can operate on a trigger-only basis.
What are the main differences between SR 11-7 and SR 26-2?
SR 26-2 builds on SR 11-7 but updates it in six meaningful ways:
- Scope: SR 11-7 applied broadly to all supervised institutions. SR 26-2 focuses on banks with more than $30 billion in assets; smaller institutions are generally out of scope unless their model footprint is unusually complex.
- Model definition: SR 11-7 captured most quantitative tools, including many spreadsheets and expert-judgment-driven approaches. SR 26-2 requires all three prongs: complex quantitative method, theoretical underpinning, and quantitative output. Simple arithmetic, deterministic rule engines, and software without statistical or economic theory are excluded.
- Oversight cadence: SR 11-7's de facto annual review cycle is replaced by a risk-based approach tied to model materiality, change velocity, and data availability.
- Binding character: SR 11-7 used directive language throughout. SR 26-2 is explicitly non-binding, though sound practice expectations and supervisory risk remain.
- Validator independence: SR 11-7 strongly preferred organizational separation of validation from development. SR 26-2 decouples validation quality from reporting structure; rigor and objectivity matter more than where the validator sits on the org chart.
- GenAI and agentic AI: SR 11-7 predated these systems entirely. SR 26-2 explicitly names generative and agentic AI as out of scope of the guidance and directs institutions to apply existing risk management practices to govern them. The principles in SR 26-2 still apply to non-generative, non-agentic AI models.
Summary