
Apr 2, 2019 · featured post

A Guide to Learning with Limited Labeled Data

We are excited to release Learning with Limited Labeled Data, the latest report and prototype from Cloudera Fast Forward Labs.

Learning with limited labeled data relaxes the stringent labeling requirement of supervised machine learning. Our report focuses on active learning, a technique that relies on collaboration between machines and humans to label data smartly.

Active learning makes it possible to build applications using a small set of labeled data, and enables enterprises to leverage their large pools of unlabeled data. In this blog post, we explore how active learning works. (For a higher-level introduction, please see our previous blog post.)

The active learning loop

Active learning takes advantage of the collaboration between humans and machines to smartly select a small subset of datapoints for which to obtain labels. It is an iterative process that ideally starts with access to a small set of initial labels. These initial labels allow a human to build a baseline machine learning model and use it to predict outputs for all the unlabeled datapoints. The model then looks through all its predictions, flags the one it has the most difficulty with, and requests a label for it. A human steps in to provide the label, and the newly labeled data is combined with the initial labeled data to improve the model. Model performance is recorded, and the process repeats.

The active learning loop
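To make the loop concrete, here is a minimal sketch in Python. The scikit-learn model is an illustrative choice, and the `oracle` (the human annotator) and `query_strategy` callables are placeholders the caller supplies; example strategies are sketched in the next section.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def active_learning_loop(X_labeled, y_labeled, X_pool, oracle,
                         X_test, y_test, query_strategy, rounds=20):
    """`oracle` stands in for the human annotator; `query_strategy`
    picks which unlabeled point to send for labeling."""
    history = []
    for _ in range(rounds):
        # 1. Train a baseline model on the data labeled so far.
        model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)

        # 2. The strategy flags the point the model has the most difficulty with.
        idx = query_strategy(model, X_pool)

        # 3. A human provides the label for the flagged point.
        new_label = oracle(X_pool[idx])

        # 4. Combine the newly labeled point with the existing labeled data.
        X_labeled = np.vstack([X_labeled, X_pool[idx:idx + 1]])
        y_labeled = np.append(y_labeled, new_label)
        X_pool = np.delete(X_pool, idx, axis=0)

        # 5. Record model performance, then repeat.
        history.append(accuracy_score(y_test, model.predict(X_test)))
    return model, history
```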

How to select datapoints

At the heart of active learning is a machine (learner) that requests labels for the datapoints it finds particularly hard to predict. The learner follows a strategy to identify these datapoints. To evaluate the effectiveness of a strategy, we first need a baseline approach for choosing datapoints. A good starting point is to remove the intelligence of the learner entirely, so that datapoints are chosen independently of what the learner thinks.

Random sampling

When we take the learner out of the picture, what is left is a pool of unlabeled data and some labeled data from which a model can be built. To improve the model, the only reasonable option is to start labeling more data at random. This strategy is known as random sampling; it selects unlabeled datapoints from the pool with no particular criterion. You can think of it as picking a card from the top of a shuffled deck, then reshuffling the deck without the previously chosen card and repeating. Because the learner does not help with the selection process, random sampling is also known as passive learning.

Random sampling is like picking the top card from a shuffled deck
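As a baseline, random sampling takes only a couple of lines. The signature below simply matches the `query_strategy` placeholder in the loop sketch above; the model argument is ignored on purpose.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def random_sampling(model, X_pool):
    """Ignore the learner entirely and pick an unlabeled point uniformly at
    random, like drawing the top card from a reshuffled deck. Because the
    chosen point is removed from the pool each round, sampling is
    effectively without replacement."""
    return int(rng.integers(len(X_pool)))
```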

Uncertainty sampling

A slightly more complex strategy is to select datapoints that the model is uncertain about. In uncertainty sampling, the learner looks at all unlabeled datapoints and surfaces the ones about which it is uncertain. Labels are then provided by a human, and fed back into the model to refine it.

But how do we quantify uncertainty? One way is to use the distance between a datapoint and the decision boundary. Datapoints far from the decision boundary are unaffected by small changes in the boundary, which implies the model is highly confident in their classifications. Datapoints close to the boundary, however, can easily be affected by small changes: a slight shift in the decision boundary will cause them to be classified differently, so the model (learner) is not certain about them. The margin sampling strategy therefore dictates that we surface the datapoint closest to the boundary and obtain a label for it.
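A minimal sketch of margin sampling, using the gap between the top two predicted class probabilities as a stand-in for distance to the decision boundary (an assumption that works for any probabilistic classifier, not only linear ones):

```python
import numpy as np

def margin_sampling(model, X_pool):
    """Surface the unlabeled point closest to the decision boundary,
    measured as the smallest gap between the top two class probabilities."""
    probs = model.predict_proba(X_pool)
    sorted_probs = np.sort(probs, axis=1)
    margins = sorted_probs[:, -1] - sorted_probs[:, -2]
    return int(np.argmin(margins))
```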

There are many other selection strategies that can be used with active learning. Our report explores some of them in detail.

When to stop

Active learning is an iterative process, so when should we stop? Each label comes with a cost of acquisition: the amount of money and time it takes to obtain the label. With this cost in mind, the stopping criterion can be either static or dynamic. A static criterion sets a budget limit or performance target at the beginning. A dynamic criterion looks at the incremental gain in performance over each round of active learning and stops when it is no longer worthwhile to acquire more labels (i.e., when the incremental performance plateaus).

Stopping criteria for active learning
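As a rough illustration of a dynamic criterion, the sketch below operates on a list of recorded performance scores, such as the `history` returned by the loop sketch above. The `patience` and `min_gain` thresholds are illustrative assumptions, not recommended values.

```python
def should_stop(history, patience=3, min_gain=0.005):
    """Dynamic stopping criterion: stop once the incremental gain in
    performance has stayed below `min_gain` for `patience` consecutive rounds."""
    if len(history) <= patience:
        return False
    recent_gains = [history[i] - history[i - 1] for i in range(-patience, 0)]
    return all(gain < min_gain for gain in recent_gains)
```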

Does it work for deep learning?

Deep learning introduces a couple of wrinkles that make direct application of active learning ineffective. The most obvious issue is that adding a single labeled datapoint does not have much impact on deep learning models, which train on batches of data. In addition, because the models need to be retrained until convergence after each point is added, this can become an expensive undertaking – especially when viewed in terms of the performance improvement vs. acquisition cost (time and money) trade-off. One straightforward solution is to select a very large subset of datapoints to label. But depending on the type of heuristics used, this could result in correlated datapoints. Obtaining labels for these datapoints is not ideal – datapoints that are independent and diverse are much more effective at capturing the relationship between input and output.
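The naive batch version of the selection step might look like the sketch below: take the k most uncertain points at once. As the paragraph above notes, points chosen this way can be highly correlated, so this is only a starting point, not the approach recommended in the report.

```python
import numpy as np

def top_k_uncertain(model, X_pool, k=64):
    """Naive batch selection for deep learning: take the k points with the
    smallest margin between the top two class probabilities. These points may
    be near-duplicates of one another, which is exactly the weakness
    described above."""
    probs = model.predict_proba(X_pool)
    sorted_probs = np.sort(probs, axis=1)
    margins = sorted_probs[:, -1] - sorted_probs[:, -2]
    return np.argsort(margins)[:k]
```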

The second problem is that the existing criteria used to help select datapoints do not translate easily to deep learning. Some require computations that do not scale to models with high-dimensional parameters, rendering them impractical for deep learning. For the criteria that are computationally viable, a reinterpretation in the context of deep learning is necessary.

In our report, we take the idea of uncertainty and examine it in the context of deep learning.
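One common way to estimate uncertainty for deep models, used here purely as an illustration and not necessarily the approach taken in the report, is Monte Carlo dropout: keep dropout active at prediction time, run several stochastic forward passes, and measure how much the predictions disagree.

```python
import torch

def mc_dropout_uncertainty(net, X_pool, n_passes=20):
    """Run several stochastic forward passes with dropout left on; the
    variance of the predicted probabilities is a rough per-point
    uncertainty estimate."""
    # Note: net.train() also switches batch-norm layers to training mode;
    # a more careful version would enable only the dropout layers.
    net.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(net(X_pool), dim=1)
                             for _ in range(n_passes)])
    return probs.var(dim=0).mean(dim=1)  # higher variance = more uncertain
```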

Practical considerations

Active learning sounds tempting: with this approach, it is possible to build applications that were previously constrained by a lack of labeled data. But active learning is not a silver bullet.

Choosing a learner and a strategy

Active learning relies on a small subset of labeled data at the beginning to choose both the learner and the strategy. The learner is used to make predictions for all the unlabeled data, and the strategy selects the datapoints that are difficult. Choosing a learner (or model) for any machine learning problem is hard, but it is made even harder with active learning for two reasons. First, the choice of a learner needs to be made very early on, when we only have a small subset of labeled data. Second, the learner is not just used to make predictions; it is used in conjunction with the strategy to surface the datapoints that will help refine it. This tight feedback loop amplifies the effect of choosing the wrong learner.

In addition, some selection strategies result in a labeled dataset that is biased. Margin sampling, for example, surfaces datapoints right around the decision boundary to be labeled. Most datapoints far from the boundary might not even be used in building the model, resulting in a labeled dataset that may not be representative of the entire pool of unlabeled data.

Human biases

Because a human needs to step in to provide labels, this restricts the type of use cases to which active learning can be applied. Humans can label images and annotate text, but we cannot tell if a financial transaction is fraudulent just by looking at the data.

In addition, the data that requires human labeling is by definition more difficult. Under these circumstances, it is easy for a human to inject their own biases and judgment when making labeling decisions.

A pause between iterations

When applying active learning in real life, surfaced datapoints will need to be sent to a human for labeling. The next round of active learning cannot proceed until the newly labeled datapoints are ready.

The length of time between each active learning iteration varies depending on who provides the label. In a research scenario, a data scientist who builds the model and also creates labels will be able to iterate through each round of active learning quickly. In a production scenario, an outsourced labeling team will need more time for data exchange and label (knowledge) transfer to occur.

For active learning to be successful, the pause between iterations should be as small as practically possible. In addition to considering different types of labeling workforce, an efficient pipeline needs to be set up. This pipeline should include a platform for exchanging unlabeled datapoints, a user interface for creating labels, and a platform for transferring the labeled datapoints.

Active Learner

A GIF showing the Active Learner prototype

We built the Active Learner prototype to accompany this report.

Every Cloudera Fast Forward Labs report comes with a prototype. We don’t just write about a new exciting capability in machine learning; we also experiment with it to understand what it can and cannot do.

The prototype for our report on Learning with Limited Labeled Data is called Active Learner. It is a tool that sheds light on and provides intuition for how and why active learning works. The prototype allows one to visualize the process of active learning over different types of datasets and selection strategies. We hope you enjoy exploring it.

Conclusion

Active learning makes it possible to build machine learning models with a small set of labeled data. It offers one way for enterprises to leverage their large pools of unlabeled data for building new products, but it is not the only solution to learning with limited labeled data.

Our report goes into much more detail (including strategies specific to deep learning, resources and recommendations for setting up an active learning production environment, and technical and ethical implications). Join our webinar to learn more, explore the prototype and get in touch if you are interested in accessing the full report (which is available by subscription to our research and advising services).
