The competitive advantage: Rethinking AI model risk management in insurance

Mike Upchurch | 2025-05-22 | 8 min read


In a rapidly evolving AI landscape, insurance companies face a dual challenge: accelerating AI adoption while navigating an increasingly complex regulatory environment. In a recent webinar, I had the privilege of discussing these challenges with two industry experts: Joe Breeden, President of the Model Risk Managers International Association, and Ahmet Gyger, Senior Director of Product Management at Domino Data Lab.

The changing face of model risk management

Insurance companies are caught in a paradox. On one hand, competitive pressures demand faster model development and deployment. On the other hand, regulatory scrutiny is intensifying — from NAIC's model governance principles to the EU AI Act and beyond.

"In 2023, there were something like 91 regulations proposed for AI regulation by various bodies. In 2024, there were over 700," I noted during our discussion, highlighting the exponential growth in regulatory attention.

Ahmet emphasized the challenges this creates for organizations: "I've been talking with many organizations that have a lot of silos in how they define policies; having silos means that these policies are a bit harder to make cohesive across the whole organization, and there is a lot of friction."

This environment requires a fundamental shift in how insurers approach model governance. Traditional approaches, characterized by siloed teams and sequential processes, simply cannot keep pace.

From serial to parallel: Reimagining the model lifecycle

One of the most compelling insights from our discussion was the need to move from serial to parallel workflows in model development and governance. As one customer recently told us, "I can build a model in a month. It takes eleven months to put it in production."

Ahmet advocated for a more integrated approach: "Moving away from serialization and towards a more parallel workflow where we bring all stakeholders early on in the lifecycle, at the ideation level. We should have people from the ethical team, from the cybersecurity team, from the legal team already involved in the decision making."

Rather than treating governance as a final checkpoint before deployment, leading organizations are embedding governance throughout the model lifecycle.

Joe went even further, suggesting that model risk management should actually precede model development: "We suggest measuring and understanding human performance before the generative AI gets installed so that we can develop benchmarks, and then as we deploy AI, we can monitor relative performance."

Managing the unknown: Beyond traditional validation

Traditional model risk practices focus on known risks identified during development and validation. But with generative AI and more complex models, the landscape has fundamentally changed.

"With generative AI models, there is no notion of bounding the universe, because the moment you deploy day one, someone can show up and give a query and input that you've never tested before," Joe explained. "Validation is not sufficient. It's still required, but you really have to focus on watching how it performs in real time."

Ahmet highlighted how this necessitates a shift in mindset: "Before thinking about deploying, the first thing you think about when you select the model is how am I going to test this model? What are the evaluations that I can use to know that this model is actually behaving in a legitimate manner, and not doing something unexpected?"

This shift requires a new approach to model monitoring — from periodic revalidation (e.g., annual reviews) to continuous, real-time oversight. Insurers that establish these capabilities early gain a significant competitive advantage.
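To make the contrast concrete, continuous oversight can be as simple as comparing a rolling window of live scores against the baseline established at validation. The sketch below is illustrative only; the metric, window size, and tolerance are hypothetical, not a prescription:

```python
from collections import deque

class DriftMonitor:
    """Minimal real-time monitor: fire an alert when a rolling live metric
    drifts past a pre-agreed tolerance, instead of waiting for an annual
    review. Names and thresholds here are illustrative."""

    def __init__(self, baseline: float, tolerance: float, window: int = 100):
        self.baseline = baseline    # metric observed at validation time
        self.tolerance = tolerance  # maximum acceptable absolute drop
        self.scores = deque(maxlen=window)

    def record(self, score: float) -> bool:
        """Record one live score; return True if the drift alert fires."""
        self.scores.append(score)
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough live data yet
        rolling = sum(self.scores) / len(self.scores)
        return (self.baseline - rolling) > self.tolerance

# Toy usage: validation accuracy was 0.90; alert if the rolling mean drops > 0.05
monitor = DriftMonitor(baseline=0.90, tolerance=0.05, window=3)
alerts = [monitor.record(s) for s in (0.91, 0.88, 0.70)]
```

In practice this logic would run inside whatever monitoring stack the insurer already operates; the point is that the check runs on every prediction, not once a year.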

The new operational paradigm: Redundancy planning

Perhaps the most underappreciated aspect of AI governance is contingency planning. When a traditional model fails, companies typically have time to adjust. With customer-facing AI applications, failures require immediate action.

"In the past, we have viewed challenger models as an option. Now not only are they mandatory, but they have to be functioning in parallel, ready to swap over to," Joe emphasized. "We're not talking about changing a model next quarter, we're talking about a scenario where you have to shut down today."

Ahmet added important context about progressive deployment: "If we are talking about use cases where we are supporting humans, you do that progressively over time, and really try to get feedback from the people you are augmenting with this technology."

This requires what Joe called "spinning reserve" — having backup systems (often human teams) ready to take over instantly if an AI system fails. The implications for organizational design and resourcing are significant but necessary.
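One way to read the "spinning reserve" idea in code is a router that serves from the champion model but swaps instantly to a warm challenger the moment the champion fails. This is a hedged sketch of the pattern, not any specific product feature; all function names are made up for illustration:

```python
from typing import Callable

def route_request(payload: dict,
                  champion: Callable[[dict], str],
                  challenger: Callable[[dict], str]) -> tuple[str, str]:
    """Serve from the champion; on any failure, fail over immediately to
    the already-running challenger (the 'spinning reserve'), so the
    swap happens today rather than next quarter."""
    try:
        return ("champion", champion(payload))
    except Exception:
        # Pre-agreed contingency path: the challenger is warm and
        # validated in parallel, so no customer-facing outage occurs.
        return ("challenger", challenger(payload))

# Toy usage: the champion endpoint is down, the challenger answers
def broken_model(payload: dict) -> str:
    raise RuntimeError("model endpoint down")

def backup_model(payload: dict) -> str:
    return f"quote for {payload['customer']}"

source, answer = route_request({"customer": "A-123"}, broken_model, backup_model)
```

The same slot could be filled by a human review queue instead of a second model, which is exactly the organizational-resourcing implication noted above.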

Turning governance into a competitive advantage

The companies that will win in this new environment will be those that transform governance from a regulatory burden into a strategic asset. This means:

  1. Centralizing governance while preserving flexibility: Ahmet emphasized this point: "It's super important to unify the policy, but it's also important to have flexibility in your policy based on the risk. If the risk is low, maybe I'm not gonna ask a thousand questions to the team of data scientists building that simple model."
  2. Automating the mundane: Implementing systems that automatically track model lineage, document development decisions, and monitor performance metrics.
  3. Adopting an experimentation mindset: Ahmet pointed to valuable lessons from tech companies: "We were doing shadow testing of all our models. Before a model hit live traffic, that model would be getting shadow traffic. We would analyze how that model behaved compared to other models in production, and we could do that really at scale."
  4. Establishing pre-agreed triggers: Joe emphasized defining clear thresholds for model intervention before a crisis occurs: "You have to have your triggers agreed to before the event comes because you don't want in the heat of the moment to be debating if this is a trigger event or not."
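The shadow-testing practice in point 3 can be sketched in a few lines: every production request is answered by the live model, while a copy is scored silently by the candidate, and disagreements are logged for review before the candidate ever touches a customer. The models and thresholds below are toy assumptions for illustration:

```python
def shadow_test(requests, live_model, candidate_model):
    """Serve live traffic with the live model; score the same requests
    with the candidate in shadow and record every disagreement."""
    responses, disagreements = [], []
    for req in requests:
        live_out = live_model(req)         # only this answer reaches the customer
        shadow_out = candidate_model(req)  # recorded and compared, never served
        responses.append(live_out)
        if shadow_out != live_out:
            disagreements.append((req, live_out, shadow_out))
    return responses, disagreements

# Toy usage: two rule-based scorers that disagree on one borderline case
live = lambda score: "approve" if score >= 600 else "refer"
candidate = lambda score: "approve" if score >= 650 else "refer"
served, diffs = shadow_test([580, 620, 700], live, candidate)
```

The disagreement log doubles as evidence for point 4: if the rate of disagreements crosses a threshold agreed in advance, that is a trigger event, with no heat-of-the-moment debate required.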

As Ahmet noted during our discussion: "What I've seen that works really the best is people who have this mindset of 'I need to view risk management and compliance as a revenue enabler, not as a tax.'"

Looking ahead

Model risk management in the insurance industry is evolving. Those who view AI governance merely as a compliance exercise will struggle with slow innovation cycles and potential regulatory issues. Those who embrace governance as an enabler of responsible innovation will pull ahead.

The future belongs to organizations that can balance speed with safety, innovation with oversight. In the words of Joe Breeden, "It's enabling a new kind of business that you really can't do without."

For insurance leaders, the message is clear: don't view governance as the finish line. Instead, make it the foundation upon which your AI strategy is built — a competitive differentiator that allows you to move faster and more confidently than your peers.

Check out the full on-demand webinar for more insights, and discover how to proactively manage AI governance, streamline compliance, and de-risk AI models — without stifling innovation.


Mike Upchurch is the Vice President of Strategy for Financial Services at Domino Data Lab, bringing over 25 years of expertise in analytics, ML/AI, business strategy, and technology. Previously, Mike held roles at Capital One as a product manager in their innovation lab and as a strategy and operations consultant in their Center for Machine Learning. Mike led strategy at Notch and in the mortgage lending group of Bank of America and was the co-founder of Fuzzy Logix. Prior to that he developed deep hands-on technical experience at The Hunter Group and PwC.