This empirical white paper is part of a broader research project on the explainability and fairness of machine learning in credit underwriting. The research is being conducted in collaboration with Professors Laura Blattner and Jann Spiess at the Stanford Graduate School of Business.
This report considers the capabilities, limitations, and performance of proprietary and open-source tools that help lenders manage machine learning underwriting models as required by law. The report focuses on the use of these tools in: (1) generating individualized disclosures that state why particular applicants were rejected or charged higher prices; and (2) analyzing what factors in the model drive disparities in model predictions among different demographic groups.
The evaluation analyzes model diagnostic tools from seven technology companies — Arthur AI, H2O.ai, Fiddler AI, Relational AI, Solas AI, Stratyfy, and Zest AI — as well as several open-source tools.