We’re excited to release the latest prototype and report from our machine intelligence R&D team: Interpretability.
An interpretable algorithm is one whose decisions you can explain. You can more readily rely on such a model to be safe, accurate, and useful.
Our prototype shows how new ideas in interpretability research can be used to extract actionable insight from black-box machine learning models.
And our report describes breakthroughs in interpretability research and places them in a commercial, legal and ethical context.
This research is relevant to anyone who designs systems using machine learning, from engineers and data scientists to business leaders and executives who are considering new product opportunities.
The Power of Interpretability
A model you can interpret and understand is one you can more easily improve. It is also one that you, regulators, and society can more easily trust to be safe and nondiscriminatory. And an accurate model that is also interpretable can offer insights you can act on to change real-world outcomes for the better.
There is a central tension, however, between accuracy and interpretability: the most accurate models tend to be the hardest to understand. Our report looks closely at two recent breakthroughs that ease this tension. New white-box algorithms offer better performance while guaranteeing interpretability. Meanwhile, model-agnostic interpretability techniques allow you to peer inside black-box models.
Our report explains how these techniques work at both a conceptual and technical level, and then discusses the commercial opportunities for their application.
Our prototype, meanwhile, makes these possibilities concrete. We applied a model-agnostic tool called LIME to a black-box model, in order to better understand the reasons a subscription business loses customers. An accurate model that predicts which customers your business is about to lose is useful. But it’s much more useful if you can also see why they are about to leave. In this way, you learn about weaknesses in your business, and can perhaps even intervene to prevent the losses.
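The core idea behind LIME is simple: perturb the customer record you want to explain, query the black-box model on those perturbations, and fit a small weighted linear model to the results. The linear model's coefficients are the local explanation. Here is a minimal sketch of that idea in numpy. The churn model, the feature names, and all the numbers are hypothetical, invented for illustration; in practice you would use the `lime` package itself against your real model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box churn model (stand-in for any opaque classifier):
# churn risk rises with support tickets and falls with tenure and spend.
def black_box(X):
    tenure, tickets, spend = X[:, 0], X[:, 1], X[:, 2]
    score = 0.8 * tickets - 0.5 * tenure - 0.2 * spend
    return 1 / (1 + np.exp(-score))  # churn probability

# The customer whose predicted churn we want to explain.
x0 = np.array([0.5, 2.0, 1.0])  # tenure (years), tickets, spend ($k)

# 1. Perturb the instance to generate a local neighbourhood.
Z = x0 + rng.normal(scale=0.5, size=(500, 3))

# 2. Query the black box on the perturbed samples.
y = black_box(Z)

# 3. Weight samples by proximity to x0 (exponential kernel).
dist = np.linalg.norm(Z - x0, axis=1)
w = np.exp(-(dist ** 2) / (2 * 0.75 ** 2))

# 4. Fit a weighted linear surrogate; its coefficients explain the
#    black box's behaviour near this one customer.
A = np.hstack([Z - x0, np.ones((len(Z), 1))])  # features + intercept
sw = np.sqrt(w)[:, None]
coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)

for name, c in zip(["tenure", "tickets", "spend"], coef[:3]):
    print(f"{name}: {c:+.3f}")
```

In this toy setup the surrogate recovers a positive weight on support tickets and negative weights on tenure and spend, which is exactly the kind of "why is this customer about to leave" signal the prototype surfaces.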
More Important than Ever
Work on machine learning interpretability is more important than ever. Our society is increasingly dependent on intelligent machines. Algorithms govern everything from which emails reach our inboxes, to whether we are approved for credit, to whom we have the opportunity to date. And their impact on our experience of the world is growing.
This rise in the use of algorithms coincides with a surge in the capabilities of black-box techniques, or algorithms whose inner workings cannot easily be explained. The question of interpretability has been important in applied machine learning for many years, but as relatively uninterpretable techniques like deep learning grow in popularity, it’s becoming an urgent concern. These techniques offer breakthrough capabilities in analyzing and even generating rich media and text data, but it’s often hard to figure out how they do what they do.
The future is algorithmic. Interpretable models offer a safer, more productive, and ultimately more collaborative relationship between humans and intelligent machines.
We will host a public webinar on interpretability on September 6, 2017. We’ll be joined by guests Patrick Hall (Senior Data Scientist at H2O, co-author of Ideas on Interpreting Machine Learning) and Sameer Singh (Assistant Professor of Computer Science at UC Irvine, co-creator of LIME, a model-agnostic tool for extracting explanations from black-box machine learning models).
How to Access our Reports and Prototypes
We offer our research on interpretability in a few ways:
- Annual research subscription (for individuals and corporate members)
- Subscription and advising (research and time with our team)
- Special projects and workshops (help to build a great data product or strategy)
Email us at email@example.com if you’d like to learn more!