
Sep 11, 2017 · post

Interpretability in conversation with Patrick Hall and Sameer Singh

We’re pleased to share the recording of our recent webinar on machine learning interpretability and accompanying resources.

We were joined by guests Patrick Hall (Senior Director for Data Science Products at H2o.ai, co-author of Ideas on Interpreting Machine Learning) and Sameer Singh (Assistant Professor of Computer Science at UC Irvine, co-creator of LIME).

We spoke for an hour and got lots of fantastic questions during that time. We didn’t have time to answer them all, so Patrick and Sameer have been kind enough to answer many of them below.

We’re also glad to share contact information for all the participants and links to code and further reading. Please get in touch with any of us if you’re interested in working together.

Contact

Code, demos and applications

Reading

Talks

Audience questions we didn’t address during the webinar

Is there a standard way to measure model complexity?

Patrick: Not that I am aware of, but we use and have put forward this simple heuristic:

  • Linear, monotonic models - easily interpretable
  • Nonlinear, monotonic models - interpretable
  • Nonlinear, non-monotonic models - difficult to interpret

Mike: One option, when comparing like with like, is simply the number of parameters. This is a common metric in the deep learning community. It glosses over some of what we really mean when we say “complex”, but it gets at something.

Sameer: Complexity is very subjective, and in different contexts, different definitions are useful. I also agree that the number of parameters is often quite a useful proxy for complexity. One other metric I like is running time or energy consumed for each prediction. Of course, there is some theoretical work on this as well, such as the VC dimension or even Kolmogorov complexity. The open question is which of these measures of complexity correlates with a user’s capacity to interpret it.
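For the curious, here is a minimal sketch of the parameter-counting proxy in Python (the `count_parameters` helper is our own illustration, not a standard API, and it only covers two model families):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

def count_parameters(model):
    """Very rough complexity proxy: learned coefficients or tree nodes."""
    if hasattr(model, "coef_"):            # linear models
        return model.coef_.size + np.size(model.intercept_)
    if hasattr(model, "estimators_"):      # tree ensembles
        return sum(est.tree_.node_count for est in model.estimators_)
    raise ValueError("no simple parameter count for this model type")

lr = LogisticRegression(max_iter=1000).fit(X, y)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print("logistic regression parameters:", count_parameters(lr))  # ~11
print("random forest tree nodes:", count_parameters(rf))        # thousands
```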

Is there really a trade-off in all cases between interpretability and accuracy? There are certainly cases where there isn’t, e.g. Rich Caruana’s pneumonia model. Can you characterize where this trade-off exists and doesn’t?

Patrick: I think we are making an assumption that greater accuracy requires greater complexity – which it often does for predictive modeling. So, maybe it’s more accurate to say there is a trade-off between interpretability and complexity. Humans cannot, in general, understand models with thousands, let alone millions, of rules or parameters – this level of complexity is common in machine learning – and this level of complexity is often required to model real-life, nonlinear phenomena. For a linear model, I probably agree with the questioner that the trade-off may not be as impactful, as long as the number of parameters in the linear model is relatively small.

Mike: This may be stating the obvious, but I’d also add that, in situations where you can get high enough accuracy for your use case with a model so simple it’s interpretable by inspection (which does happen!), there is of course no trade-off. You can have it all!

Is the black box aspect of machine learning programming only an early AI development issue? Will it eventually be possible to program in “check points” where programmed models will reveal key points or factors that appear within levels of neural network calculations?

Patrick: I don’t think this is an early AI issue. In my opinion, it’s about the fundamental complexity of the generated models. Again, the sheer volume of information is not interpretable to humans – not even touching on more subtle complications. I don’t mean big data either – even though that often doesn’t help make things any clearer – I mean that I don’t think anyone can understand a mathematical formula that requires 500 MB just to store its rules and parameters. (Which is not uncommon in practice.) I do like the idea of training checkpoints, but what if at the checkpoint, the model says: “these are the 10,000 most important extracted features which represent 100+ degree combinations of the original model inputs”? So perhaps the combination of training checkpoints plus constraints on complexity could be very useful.

Conversations in data science center around the latest and greatest models, not interpretability. Do you have any recommendations for building a company culture that values interpretability?

Mike: Send your colleagues our blog post The Business Case for Machine Learning Interpretability! Interpretable models are profitable, safer, and more intellectually rewarding. Hopefully every one of your colleagues is interested in at least one of those things.

Patrick: In my opinion, I’d also say this is part of customer-focus in an analytics tool provider’s culture. It’s usually us data-nerds who want to use our new toys. I almost never hear a customer say, “give me a model using the latest and greatest algorithm, oh, and it’s fine if it’s super complex and not interpretable.”

Sameer: Partly, it comes from the fact that accuracy provides a single number, which appeals to the human drive for competition and sport, and for engineering things that beat other things. Interpretability is, almost by definition, much fuzzier to define and evaluate, making us a little nervous as empiricists, I think.

How does interpretability vary across industries, e.g. aviation vs. media vs. financial services?

Patrick: I can only say that the regulations for predictive models are probably most mature in credit lending in the U.S., and that I see machine learning being used more prominently in verticals outside credit lending, e.g. e-commerce, marketing, anti-fraud, and anti-money-laundering.

Mike: I’d say that, of the particular three mentioned, the need for interpretability is most acute in aviation. In fact, it goes beyond interpretability into formal verifiability of the properties of an algorithm, which is a whole different ball of wax. The acknowledged need is least in media, because there’s relatively little regulation. Which is not to say it wouldn’t be profitable to apply these techniques in this or any other industry where it’s important to understand your customers. Financial services is interesting. The need for interpretability is well-understood (and hopefully well-enforced) there. There’s no question, however, that more accurate models would make more money. People are starting to build neural network-based trading and lending models that satisfy applicable regulations, e.g. SR 11-7, and the Fair Credit Reporting Act. There’s a huge first-to-market advantage in deploying these more accurate models.

Model governance and model reviews are standard for financial models, as are stress tests. Do you see something similar in the future of industry ML models?

Patrick: I don’t know why so few machine learning practitioners stress-test their models. It’s easy to do with simple sensitivity analysis, and the financial risks of out-of-range predictions on new data are staggering! I do also hope that machine learning models that make serious impacts on people’s lives will be better regulated in the future, and the EU is taking steps toward this with the GDPR. In the meantime, you can keep up with research in this area at FATML.

Mike: I also recommend What’s your ML test score? A rubric for ML production systems, which mentions a bunch of really basic stuff that far too few of us do.
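For readers wondering what a simple sensitivity analysis can look like in practice, here is a minimal, illustrative sketch (toy data and model; not a prescribed procedure): sweep one input well outside its training range and watch how the predictions respond.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

row = X[0].copy()
baseline = model.predict(row.reshape(1, -1))[0]

# Sweep feature 0 from far below to far above its training range and
# watch for wild or discontinuous out-of-range predictions.
for value in np.linspace(3 * X[:, 0].min(), 3 * X[:, 0].max(), 7):
    perturbed = row.copy()
    perturbed[0] = value
    pred = model.predict(perturbed.reshape(1, -1))[0]
    print(f"feature_0={value:9.2f}  prediction={pred:10.2f}  delta={pred - baseline:10.2f}")
```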

What effect will interpretability have on feature selection?

Mike: Anecdotally, we spotted a bunch of problems with our model of customer churn using LIME. In particular, as non-experts in the domain, we’d left in features that were leaking the target variable. These lit up like Christmas trees in our web interface thanks to LIME.

Patrick: I think it will prevent people from using overly complicated, difficult-to-explain features in their models as well. It’s no good to say CV_SVD23_Cluster99_INCOME_RANGE is the most important variable in the prediction if you can’t also say directly what that variable is exactly and how it was derived.
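As a concrete illustration of the LIME workflow Mike describes, here is a minimal sketch using the open-source `lime` package (the data set and model below are stand-ins, not our actual churn model):

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single prediction. A feature that dominates nearly every
# explanation with an implausibly large contribution is a candidate
# target leak worth investigating.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())
```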

I’m a graduate DS student who just sent some ML research to a group of people in industry who I thought would be interested. In response I got the question “will your research replace my job?” What are some ways to overcome the fear of ML and convince people that AI won’t replace the creativity of human decision making?

Patrick: Well, it might one day – and we all need to be realistic about that. But for today, and likely for many years, most of us can rest easy. Today, machine learning is only good at specific tasks: tasks where there is a large amount of labeled, easy-to-use, “clean” data.

Sameer: For now, you can use the explanations almost as a way to show that machine learning is not a magical black box. Without an explanation, a natural reaction is to say, “How could it predict this? It must be super-intelligent!” But an explanation demystifies this: if the model is doing the right thing for the right reasons, the perception of machine learning will not be that of an adversary.

Why is it that some models are seen as interpretable and others aren’t? There are large tomes on the theory of linear models, yet they’re seen as interpretable. Could part of this be due to how long they’ve been taught?

Mike: This is a great point. I don’t think it’s simply due to our relative familiarity with linear models. It’s that a trained linear model really is simple to describe (and interpret). Trained neural networks are, in a relative sense, not even simple to describe. The big linear modeling textbooks are long because of deep domain-specific implications, difficulties like causality, and real numerical/engineering subtleties, not because the trained models themselves are hard to describe.

Patrick: I 100% agree with the questioner’s sentiment. Essentially linear model interpretations are exact and stable, which is great, but the models are approximate. Machine learning explanations take a different mindset: machine learning explanations are less stable and exact, but the model itself is usually much less approximate. So, do you prefer an exact explanation for an approximate model? Or an approximate explanation for an exact model? In my opinion, both are useful.

Sameer: Interpretability is relative. I don’t think we should hold linear models up as the ideal of interpretability, because they are not, especially with a large number of variables. One of the known problems with linear models is correlated features: the importance of a feature can get distributed across the features it is correlated with, so that a less important but uncorrelated feature ends up with a higher weight. We tried to get around this somewhat in LIME by restricting the number of features chosen as an explanation (L1 regularization or Lasso), and normalizing the regression variables over our samples (to reduce the effect of the bias).

Once we identify biases, how do we address them?

Patrick: Problematic features – such as those correlated to race, gender, marital status, disability status, etc. – can be removed from the input data and the model can be retrained. Or features can be intentionally corrupted to remove problematic information with techniques like differential privacy during model training. Another method I’m personally interested in is determining the local contribution of problematic features using something like LOCO or LIME and subtracting out the contributions of problematic features row-by-row when predictions are made.
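A very rough sketch of that row-by-row subtraction idea (our own illustrative code under a simplifying assumption: the feature’s local contribution is estimated by neutralizing it to its mean rather than by retraining, so this is LOCO-flavored, not an exact or established method):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

PROBLEM_COL = 3                        # index of the problematic feature (illustrative)
neutral = X[:, PROBLEM_COL].mean()     # stand-in "no information" value

scores = model.predict_proba(X)[:, 1]

# Estimate each row's local contribution from the problematic feature by
# re-scoring with that feature neutralized, then subtract it back out.
X_neutral = X.copy()
X_neutral[:, PROBLEM_COL] = neutral
local_contribution = scores - model.predict_proba(X_neutral)[:, 1]
adjusted_scores = scores - local_contribution

print("mean |local contribution|:", np.abs(local_contribution).mean())
```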

Aren’t we reducing interpretability to visual analytics of sensitivity?

Patrick: In some cases yes, but I would argue this is a good thing. In my opinion, explanations themselves have to be simple. However, I’m more interested in fostering the understanding of someone who was just denied parole or a credit card (both of which are happening today) based on the decision of a predictive model. For the mass-consumer audience, it’s not an effective strategy to provide explanations that are just as mathematically complex as the original model.

How is LIME different from variable importance, which we get from algorithms such as random forests?

Patrick: The key is locality. LIME essentially provides local variable importance, meaning that you often get a different variable importance value for each input variable for each row of the data set. This opens up the possibility of describing why a machine learning model made the prediction it made for each customer, patient, transaction, etc. in the data set.

Sameer: To add to that, I would say the difference between global and local dependence can sometimes be quite important. Aggregations used to compute global dependence, like variable importance, can sometimes drown out signals. For example, if race is being used to make a decision for a really small number of individuals, it might not show up in the global aggregations. Similarly, local explanations are also useful in showing the sign of the dependence in context, i.e. age might be important overall, but for some individuals age might act as a negative factor and for others a positive one, and global explanations will not be able to capture that. That said, it’s much easier to look at only the big picture, instead of many small pictures.

Which bootstrapping algorithm does LIME use to generate the perturbed samples?

Sameer: This is often domain dependent, and you can plug in your own. We tried to stick with pretty simple techniques for each domain, such as removing tokens in text, patches in images, etc. More details are in the paper/code.
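For example, with text the default perturbation removes tokens; here is a minimal sketch with the `lime` text explainer (toy data and classifier, purely illustrative):

```python
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great movie, loved it", "terrible plot and bad acting",
         "wonderful and moving", "boring, awful, bad"] * 25
labels = [1, 0, 1, 0] * 25

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# LIME perturbs the input by removing tokens and fitting a local linear
# model to the resulting prediction changes.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "great acting but a bad plot", model.predict_proba, num_features=4
)
print(explanation.as_list())
```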

In the case of adversarial attacks, can LIME detect what causes the deviation from correct detection?

Sameer: (excerpt from an email thread with Adam) This is quite an interesting idea, but unfortunately, I believe LIME will get quite stumped in this case, especially for images: it will either propose the whole image as the explanation (assuming the adversarial noise is spread out, as it often is), or find a “low confidence” explanation, i.e. it’ll find the subset of the image that is most adversarial, but with sufficient uncertainty to say “don’t take this explanation too seriously”.

Can you explain the significance of the clusters in the H2O interpretability interface?

Patrick: We chose to use clusters in the training data, instead of bootstrapped or simulated samples around a row of data, to construct local regions on which to build explanatory linear models. This has two primary advantages:

  • We don’t need a new/different sample for every point we want to explain
  • It allows us to present the (hopefully helpful) diagnostic plot of the training data, complex model, and explanatory model that you saw in the webinar.

The main drawback is that clusters are sometimes large, and the fit of the explanatory model can degrade in those cases. If you’re curious, we choose the number of clusters by maximizing the R-squared between all the linear model predictions and the complex model’s predictions.
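A simplified sketch of the general recipe (our own reconstruction for illustration, not the H2O implementation): cluster the training data, fit one linear surrogate per cluster to the complex model’s predictions, and pick the number of clusters that maximizes the R-squared between the surrogate and the complex model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=2000, n_features=8, noise=5.0, random_state=0)
complex_model = GradientBoostingRegressor(random_state=0).fit(X, y)
complex_preds = complex_model.predict(X)

def surrogate_r2(n_clusters):
    """Fit one linear surrogate per cluster; score it against the complex model."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    surrogate_preds = np.empty_like(complex_preds)
    for c in range(n_clusters):
        mask = labels == c
        lm = LinearRegression().fit(X[mask], complex_preds[mask])
        surrogate_preds[mask] = lm.predict(X[mask])
    return r2_score(complex_preds, surrogate_preds)

best_k = max(range(2, 9), key=surrogate_r2)
print("chosen number of clusters:", best_k, "R^2:", round(surrogate_r2(best_k), 3))
```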

LIME makes accurate models more interpretable. Also mentioned was the related idea of making interpretable models more accurate. Which is more promising?

Patrick: Most research I see is towards making accurate models more interpretable. One nice practical approach for going the other direction – making interpretable models more accurate – is the monotonicity constraints in XGBoost.

Sameer: Personally, I like the former, since I do believe an inaccurate model is not a useful model. I also don’t want to restrict the architecture or the algorithms that people want to use, nor do I want to constrain them to certain types of interpretations that an interpretable model provides.
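To make the XGBoost option Patrick mentions concrete, here is a small sketch of the `monotone_constraints` parameter (toy data; the constraint signs are illustrative):

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = 2 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=1000)

# +1: prediction must be non-decreasing in feature 0
# -1: non-increasing in feature 1; 0: feature 2 is unconstrained
model = xgb.XGBRegressor(n_estimators=200, monotone_constraints="(1,-1,0)")
model.fit(X, y)

# With the constraint in place, raising feature 0 should never lower the prediction.
row = X[0].copy()
bumped = row.copy()
bumped[0] += 1.0
print(model.predict(np.vstack([row, bumped])))
```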

Mailing list

Our public mailing list is a great way of getting a taste of what Fast Forward Labs is interested in and working on right now. We hope you’ll sign up!
