New Research on Interpretability

Aug 2, 2017 · by Micha

We’re excited to release the latest prototype and report from our machine intelligence R&D team: Interpretability.

FF06 Interpretability

An interpretable algorithm is one whose decisions you can explain. You can better rely on such a model to be safe, accurate and useful.

Our prototype shows how new ideas in interpretability research can be used to extract actionable insight from black-box machine learning models.

And our report describes breakthroughs in interpretability research and places them in a commercial, legal and ethical context.

This research is relevant to anyone who designs systems using machine learning, from engineers and data scientists, to business leaders and executives who are considering new product opportunities.

The Power of Interpretability

A model you can interpret and understand is one you can more easily improve. It is also one you, regulators, and society can more easily trust to be safe and nondiscriminatory. And an accurate model that is also interpretable can offer insights that can be used to change real-world outcomes for the better.

How Does It Work?

There is a central tension, however, between accuracy and interpretability: the most accurate models tend to be the hardest to understand. Our report looks closely at two recent breakthroughs that resolve this tension. New white-box algorithms offer better performance while guaranteeing interpretability. Meanwhile, model-agnostic interpretability techniques allow you to peer inside black-box models.
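To make the white-box end of the spectrum concrete: an interpretable model is one whose decision rules you can read off directly. The algorithms our report covers are more sophisticated, but the basic property can be seen with an ordinary shallow decision tree. Here is a minimal sketch using scikit-learn and synthetic data (the feature names are invented for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Synthetic data: two features, with the label driven entirely by "usage".
X = rng.uniform(size=(300, 2))
y = (X[:, 0] > 0.5).astype(int)

# A depth-2 tree is a white-box model: its full logic fits on a few lines.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the learned rules as human-readable if/else conditions.
print(export_text(tree, feature_names=["usage", "account_age"]))
```

The printed rules are the entire model, so there is nothing hidden to explain after the fact. The catch, and the reason the accuracy/interpretability tension exists, is that models this simple often cannot match the accuracy of an ensemble or a deep network on harder problems.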

Our report explains how these techniques work at both a conceptual and technical level, and then discusses the commercial opportunities for their application.


Our prototype, meanwhile, makes these possibilities concrete. We applied a model-agnostic tool called LIME to a black-box model, in order to better understand the reasons a subscription business loses customers. An accurate model that predicts which customers your business is about to lose is useful. But it’s much more useful if you can also see why they are about to leave. In this way, you learn about weaknesses in your business, and can perhaps even intervene to prevent the losses.
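The core idea behind LIME can be sketched in a few lines: to explain one prediction, sample perturbed points near the instance, query the black-box model for its predictions on them, and fit a simple linear surrogate weighted by proximity. The surrogate's coefficients are the local explanation. This is a hand-rolled sketch of that idea with invented churn features (tenure, support tickets, spend), not the LIME library itself:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy churn data: customers churn when support tickets are high and tenure is low.
X = rng.normal(size=(500, 3))  # columns: tenure, tickets, spend
y = (X[:, 1] - X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

# The black-box model whose individual predictions we want to explain.
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def explain_locally(x, n_samples=1000, width=0.75):
    """LIME-style local explanation: perturb, weight by proximity, fit a linear surrogate."""
    Z = x + width * rng.normal(size=(n_samples, len(x)))
    probs = black_box.predict_proba(Z)[:, 1]
    # Closer samples get more weight, so the surrogate is faithful near x.
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / width ** 2)
    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
    return surrogate.coef_

coefs = explain_locally(X[0])
for name, c in zip(["tenure", "tickets", "spend"], coefs):
    print(f"{name}: {c:+.2f}")
```

For this customer, the surrogate attributes churn risk positively to support tickets and negatively to tenure, which is exactly the kind of per-customer "why" that turns a churn predictor into something you can act on.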

More Important than Ever

Work on machine learning interpretability is more important than ever. Our society is increasingly dependent on intelligent machines. Algorithms govern everything from which e-mails reach our inboxes to whether we are approved for credit to whom we get the opportunity to date. And their impact on our experience of the world is growing.

This rise in the use of algorithms coincides with a surge in the capabilities of black-box techniques, or algorithms whose inner workings cannot easily be explained. The question of interpretability has been important in applied machine learning for many years, but as relatively uninterpretable techniques like deep learning grow in popularity, it’s becoming an urgent concern. These techniques offer breakthrough capabilities in analyzing and even generating rich media and text data, but it’s often hard to figure out how they do what they do.

The future is algorithmic. Interpretable models offer a safer, more productive, and ultimately more collaborative relationship between humans and intelligent machines.

Learn More

We will host a public webinar on interpretability on September 6, 2017. We'll be joined by guests Patrick Hall (Senior Data Scientist at H2O, co-author of Ideas on Interpreting Machine Learning) and Sameer Singh (Assistant Professor of Computer Science at UC Irvine, co-creator of LIME, a model-agnostic tool for extracting explanations from black-box machine learning models).