Why ZAML Makes Your ML Platform Better

It’s a good moment for advanced machine learning (ML), the technology that uses mountains of data and sophisticated math to help users make better decisions. ML has helped reduce bank fraud, made self-driving cars a reality, and powered millions of product recommendations on Amazon. Now, more businesses want in on the game.

Building ML models has never been easier. You’ve got a wealth of choices among open-source and proprietary platforms from the likes of Microsoft, Amazon, IBM, and smaller players such as DataRobot and H2O.ai. You can use any of their platforms to build general-purpose models quickly. The democratization of ML technology is a wonderful thing.

But if you’re in a highly regulated industry such as lending, insurance, or health care, you cannot deploy a model without full transparency into how it works. Without that transparency, users can find themselves sorely disappointed, and gravely compromised, by models that lack operational safety.

That’s why we built the ZAML set of software tools: to understand and document how a model was built, explain its predictions, and tell you how it’s operating. ZAML is not a platform; it’s a software layer that makes ML models usable for credit underwriting, insurance, and other regulated activities. Automated, general-purpose ML is great, but it doesn’t satisfy the needs of regulators, auditors, and risk officers, who demand a sufficient explanation of why specific loan applications were accepted or rejected.

A lot of ML platforms say they give you explainability, but they employ a handful of techniques (such as LIME, permutation importance, and leave-one-column-out) that claim to peer into ML models and extract the top reasons for loan approvals or denials. These methods are all slightly different, but they have one thing in common: they take a very complex model and pretend it’s simple in order to explain it.
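To make one of those techniques concrete, permutation importance works by shuffling a single feature column and measuring how much the model’s score degrades. Here is a minimal sketch, assuming a generic `model` with a `predict` method and a `metric` where higher is better (both names are illustrative, not tied to any particular platform):

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Estimate each feature's importance by shuffling its column
    and measuring the resulting drop in model performance."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and the target
            drops.append(baseline - metric(y, model.predict(Xp)))
        importances[j] = np.mean(drops)  # average drop = estimated importance
    return importances
```

Note what this sketch quietly assumes: that each feature’s contribution can be measured by perturbing it in isolation. That is exactly the simplification the paragraph above criticizes, since shuffling one column ignores how it interacts with the others.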

We’ve shown that these approaches are lacking. If the lender can’t accurately show why its ML model approved or denied a loan, that’s a serious liability when millions of dollars are at risk. We believe more than anyone in the power of ML, but ML is only as good as its explainability. Our secret sauce is a proprietary explanation method derived from game theory and multivariate calculus that works on the actual underlying model. For instance, ZAML explainability determines the relative importance of each variable in an ML model to the final score by looking at how it interacts with other variables. No other technique does that.
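The best-known game-theoretic attribution is the Shapley value, which credits each variable with its average marginal contribution across all coalitions of the other variables, and so accounts for interactions between them. The sketch below computes exact Shapley values for a small set function; it is a generic illustration of the game-theoretic family, not ZAML’s proprietary method, and `value_fn` is a hypothetical stand-in for evaluating a model with only a subset of features present:

```python
from itertools import combinations
from math import factorial

import numpy as np

def shapley_values(value_fn, n_features):
    """Exact Shapley values for a set function over feature coalitions.
    value_fn(coalition_tuple) -> model value with that feature subset present.
    Cost is exponential in n_features, so this is for illustration only."""
    phi = np.zeros(n_features)
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size |S|
                w = factorial(len(S)) * factorial(n_features - len(S) - 1) \
                    / factorial(n_features)
                # weighted marginal contribution of feature i to coalition S
                phi[i] += w * (value_fn(S + (i,)) - value_fn(S))
    return phi
```

Because each feature is scored by its marginal contribution to every possible coalition, a feature that only matters in combination with another still receives credit, which is what single-column perturbation methods miss.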

ML platforms are great places to innovate and improve your ML arsenal. Just make sure your model is fit for its purpose. The only thing that matters is ensuring that ML delivers superior business results fairly and consistently. Otherwise, why bother?