Congress is growing increasingly concerned about technology’s role in society and the economy. As part of its fact-gathering effort, the House Financial Services Committee recently convened an AI Task Force and on Wednesday, June 26, held the first in a series of hearings exploring AI’s role in finance. The hearing, “Perspectives on Artificial Intelligence: Where We Are and the Next Frontier in Financial Services,” is meant to shed light on how financial institutions are using AI, and to assess its effects on data privacy and its potential to perpetuate bias against low-income and minority borrowers.
Zest’s CEO Douglas Merrill was invited to testify at this first hearing, the only CEO asked to do so, alongside academic and NGO leaders. Since Zest was founded ten years ago, we’ve been on a mission to get banks to use AI and machine learning to make fair and transparent credit available to everyone. Douglas’ testimony, which you can read here, addresses the committee’s concerns about the complexity and hidden biases of machine learning-based lending. The answer to those concerns is strong validation and monitoring of ML models to ensure they’re fair and safe to use. (This is our entire focus at Zest.) Douglas argues that smart regulation and adjustments to existing guidelines can facilitate the appropriate use of ML and AI. The key points he makes in the testimony:
- ML risks are real. ML models can be opaque and inherently biased. If companies don’t appropriately validate and monitor their ML models, they put themselves, consumers and the safety of our financial system at risk.
- It’s a black box. Too many ML models have a “black box” problem: companies know the information they put in, and they know the information the models spit out, but they can’t explain why the models make specific decisions.
- Explainability is key. Not all techniques for untangling ML model results are created equal; in fact, many don’t work at all. ZestFinance developed a new explainability technique that renders full ML models transparent in a way that is accurate, consistent and fast.
- More regulation isn’t necessary. Regulators already have the authority they need to balance the risks and benefits of ML. Federal guidance on model risk management, written in 2011, simply needs to be updated for the modern ML era.
For the full transcript, click here.