Data Science Leaders from UK’s Largest Retail Bank Will Discuss Responsible AI at Rev 2

By Karina Babcock, Director of Corporate Marketing, Domino, on March 27, 2019 in Perspective

“Explainable AI” has become a popular discussion topic as organizations deploy a growing volume of models into production and those models increasingly drive important business decisions. It’s critical to understand each model’s underpinnings, evolution, and use to make sure it’s doing what it’s supposed to do in the wild, and to be able to identify and troubleshoot effectively if it goes off track.

Domino chief data scientist Josh Poduska has explained, “In the context of a broader model management framework, we refer to the pillar that allows an organization to keep a finger on the pulse of the activity, cost, and impact of each model as ‘model governance.’”

This concept is particularly important in regulated industries such as financial services. Lloyds Banking Group, the UK’s largest retail bank, is sending its Head of Data Science, Tom Cronin, and Data Scientist Tola Alade to New York City this May to dig into the topic at the Rev 2 Summit for Data Science Leaders.

Tom leads a team of more than 50 “pathfinders and trailblazers” who are delivering AI products to Lloyds colleagues and customers. A member of Tom’s team, Tola delivered the most highly rated session at the recent Data Science Pop-up in London; her personal mission is to “bring mathematics to life” by inspiring younger generations with real-life applications of statistical techniques.

Here’s a short synopsis of their Rev talk, titled “Responsible AI”:

As the use of statistical models in decision-making becomes increasingly ubiquitous in all areas of life, from which mortgage we can get to who we’re matched with on Tinder, our ability to interpret these models has never been more critical. In this session, we will introduce you to the concept of explainable AI (XAI), which gives us the ability to explain decisions made by machine learning models.
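To make the idea concrete, here is a minimal, hypothetical sketch of one common XAI technique: permutation importance, which scores each feature by how much shuffling it degrades model accuracy. It is not drawn from the Rev talk itself; the synthetic dataset and feature names are illustrative assumptions only.

```python
# Hypothetical sketch: permutation importance as a simple model explanation.
# The data is synthetic and the feature names are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit-decision dataset
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "credit_history_len", "num_accounts"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

The printed scores rank which inputs the model leans on most, which is the kind of question an explainability practice tries to answer before a model makes decisions about real customers.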
