Zest's ZAML suite of tools provides full machine learning (ML) transparency. That means total awareness of what's driving disparity in your credit models. Until now, lenders had to sacrifice accuracy to reduce disparity in underwriting.
We don't think you have to sacrifice accuracy to be fair.
The traditional way of reducing bias is a slow, manual process that often comes late in model development. It requires making lots of hard trade-offs between fairness and performance and then re-running the model, and lenders often end up simply tossing out the offending credit signals, which leaves a lot of performance on the table. With ZAML Fair, lenders stay in control and can pick a better model in a fraction of the time and effort required by legacy techniques.
ZAML Fair then deploys a "helper AI" that combines with the existing model to carefully reduce the impact of the variables driving racial and gender disparity, which are often common credit signals such as income and the traditional credit score. You can't toss those out, but you can mitigate their impact.
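To make the blending idea concrete, here is a toy sketch. It is not ZAML's actual method: the synthetic data, the income-only base model, the mean-reverting helper adjustment, and the mixing weight `lam` are all invented for illustration. It shows how a corrective score added to an existing model's score can shrink a group approval gap without throwing the underlying credit signal away entirely.

```python
import random
import statistics

random.seed(0)

# Synthetic applicants: income correlates with group membership,
# so a purely income-driven score yields disparate approval rates.
def make_applicant(group):
    mean = 60 if group == "A" else 45       # group A has higher average income
    income = random.gauss(mean, 10)
    return {"group": group, "income": income}

applicants = [make_applicant("A") for _ in range(2000)] + \
             [make_applicant("B") for _ in range(2000)]

def base_score(a):
    return a["income"] / 100                # "existing model": income-driven

# Hypothetical helper adjustment: nudges every score toward the
# portfolio mean, damping the signal that drives the group gap.
overall = statistics.mean(base_score(a) for a in applicants)

def helper_adjustment(a):
    return overall - base_score(a)

def blended_score(a, lam):
    # lam = 0: original model unchanged; lam = 1: signal fully damped.
    return base_score(a) + lam * helper_adjustment(a)

def approval_rate(group, lam, cutoff=0.5):
    scores = [blended_score(a, lam) for a in applicants if a["group"] == group]
    return sum(s >= cutoff for s in scores) / len(scores)

for lam in (0.0, 0.5, 0.9):
    gap = approval_rate("A", lam) - approval_rate("B", lam)
    print(f"lam={lam:.1f}  approval-rate gap={gap:.3f}")
```

Sweeping `lam` traces out a fairness-versus-performance frontier: each value is a candidate model, and the lender picks the point on that frontier that best balances the two, rather than deleting the income signal outright.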
Lenders shouldn’t have to choose between fairness and accuracy. With ZAML Fair, they can optimize for both.