Microsoft Open-Sources InterpretML for Solving the AI Black Box Problem

This software package allows developers to compare and contrast the explanations produced by different methods, and to select the methods that best suit their needs.

Microsoft Research has open-sourced a software toolkit, called InterpretML, for training interpretable models and explaining black-box systems.

Open-sourced under the MIT license, InterpretML is aimed at enabling developers to experiment with a variety of methods for explaining models and systems.

InterpretML implements a number of intelligible models, including the Explainable Boosting Machine (EBM), an improvement over generalized additive models, as well as several methods for generating explanations of the behavior of black-box models or of their individual predictions.

An open-source software package to enable intelligibility in machine learning

Intelligibility is an area of cutting-edge, interdisciplinary research, building on ideas from machine learning, psychology, human-computer interaction and design. Researchers at Microsoft have been working on how to create intelligible AI for years.

Historically, the most intelligible models were not very accurate, and the most accurate models were not intelligible, noted the company.

Now, however, Microsoft Research has developed an algorithm called the Explainable Boosting Machine (EBM), which offers both high accuracy and intelligibility.

“EBM uses modern machine learning techniques like bagging and boosting to breathe new life into traditional GAMs (Generalized Additive Models). This makes them as accurate as random forests and gradient boosted trees, and also enhances their intelligibility and editability,” Microsoft explains.
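A GAM makes predictions through a sum of per-feature functions, of the form g(E[y]) = f1(x1) + f2(x2) + ... + fn(xn), so each feature's learned contribution can be plotted and inspected on its own; EBM fits each of those functions with boosted, bagged trees. As a rough sketch of how training one looks with the interpret package (the synthetic data below is a stand-in, and the module paths follow the alpha API, which may differ between releases):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from interpret.glassbox import ExplainableBoostingClassifier
    from interpret import show

    # Synthetic stand-in data; any (n_samples, n_features) dataset works.
    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    ebm = ExplainableBoostingClassifier()  # a boosted, bagged GAM
    ebm.fit(X_train, y_train)

    # Global view: one learned shape function per feature.
    show(ebm.explain_global())
    # Local view: per-feature contributions to individual predictions.
    show(ebm.explain_local(X_test[:5], y_test[:5]))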

In addition to EBM, InterpretML also supports explanation methods such as LIME, SHAP, and partial dependence, as well as intelligible models such as linear models, decision trees, and rule lists.

“By having an easy way to access many intelligibility methods, developers will be able to compare and contrast the explanations produced by different methods, and to select methods that best suit their needs,” researchers at Microsoft write in a blog post.
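As a hedged sketch of that comparison workflow (the black-box wrapper signatures below follow the alpha README and have changed in later releases, so treat the constructor arguments as assumptions), one might wrap the same opaque model with two different explainers and compare their attributions for the same predictions:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from interpret.blackbox import LimeTabular, ShapKernel
    from interpret import show

    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    blackbox = RandomForestClassifier(random_state=0).fit(X, y)

    # Wrap one opaque model with two different explanation methods.
    lime = LimeTabular(predict_fn=blackbox.predict_proba, data=X)
    shap = ShapKernel(predict_fn=blackbox.predict_proba, data=X[:100])

    # Explain the same few predictions with both methods and check
    # whether the feature attributions broadly agree.
    show(lime.explain_local(X[:3], y[:3]))
    show(shap.explain_local(X[:3], y[:3]))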

Such comparisons can also help data scientists understand how much to trust the explanations by checking for consistency between methods, they say.

InterpretML is currently available in alpha on GitHub.
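At the time of writing, the alpha package can also be installed from PyPI with pip install -U interpret, per the project's README.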
