Banks and lenders can make more money with less risk by adopting machine learning (ML) in their credit underwriting, but only if they can trust that the ML models they've built are doing what they're designed to do. That requires deep and accurate interpretations of ML credit-scoring algorithms. ML explainability is something we spend a lot of time obsessing about at Zest. With a variety of ML explainability techniques out there, lenders need to know they have the right tools to deliver consistent and accurate explanations of an ML model. Choosing the wrong method can get you the wrong answer, and in the business of financial services, that can be a big, painful problem.
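Here's a toy sketch of why method choice matters (our own illustration, not taken from the Zest studies): for a model with an interaction between features, a naive one-at-a-time attribution can assign zero credit to every feature, while an exact Shapley computation splits the credit in a way that actually accounts for the score.

```python
# Toy illustration (not Zest's actual method): two common ways to attribute
# a model's score to its input features can disagree sharply.

def score(x1, x2):
    """A tiny 'model' with an interaction: the score is nonzero only
    when both features are active."""
    return x1 * x2

def one_at_a_time(x, baseline):
    """Naive attribution: change one feature at a time from the baseline
    and record the change in score."""
    b1, b2 = baseline
    x1, x2 = x
    return (score(x1, b2) - score(b1, b2),
            score(b1, x2) - score(b1, b2))

def shapley(x, baseline):
    """Exact Shapley attribution for two features: average each feature's
    marginal contribution over both possible orderings."""
    b1, b2 = baseline
    x1, x2 = x
    phi1 = ((score(x1, b2) - score(b1, b2)) +
            (score(x1, x2) - score(b1, x2))) / 2
    phi2 = ((score(b1, x2) - score(b1, b2)) +
            (score(x1, x2) - score(x1, b2))) / 2
    return phi1, phi2

x, baseline = (1, 1), (0, 0)
print(one_at_a_time(x, baseline))  # (0, 0): both features look irrelevant
print(shapley(x, baseline))        # (0.5, 0.5): credit adds up to the score
```

The Shapley attributions sum to the full change in score between the baseline and the applicant, while the one-at-a-time numbers here sum to zero even though the score moved, which is exactly the kind of inconsistency a lender can't afford in an adverse-action explanation.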
Our CTO Jay Budzik was recently on the AI at Work podcast, outlining the Zest approach to explainability. A couple of studies we recently published showed that not all explainability methods are as consistent, accurate, and fast as they ought to be. Says Jay, “If you're going to use ML to run a billion-dollar lending business, you probably want to know that it's doing the right thing. We focused a lot on making AI transparent and explainable so that lenders can get comfortable trusting it.”
Jay also goes into why we’re focused on AI for underwriting, our partnership with Microsoft, and how to hire and manage data science teams. “As soon as you establish the right set of metrics and you have the right data set, that's a field day for a data scientist.”