Feb 20, 2020 · post

Building Blip: behind the scenes of our anomaly detection prototype

A screenshot of the Blip prototype.

Our anomaly detection prototype, Blip, shows how four different algorithms perform at detecting network attacks. Here is a look at some of the design, visualization, and front-end programming decisions that went into it. (For more on the algorithms themselves, check out the prototype section of our report.)

Tomato sorting

The concept of an anomaly is easy to visualize: it’s the thing that doesn’t look like the rest. That conceptual simplicity actually makes our prototype’s job trickier. If we show you a dataset where the anomalies are easy to spot, it’s not clear what you need an algorithm for. So instead, we want to place you in a situation where the data is complicated enough, and streaming in fast enough, that the benefits of the algorithm are clear.

A tomato sorter sorting green tomatoes from red tomatoes.

But what does that look like? We took some inspiration from a viral YouTube video of a machine sorting tomatoes by color at incredible speed. The video is mesmerizing to watch, and it makes its point clearly: a human can definitely sort green tomatoes from red ones, but not at anything close to that speed.

An early version of the prototype, with tomato-sorter-inspired levers.

So we thought: what would it look like to visualize each algorithm’s classifications at a really fast speed? We had worked in a similar realm with Turbofan Tycoon, our federated learning prototype, which visualizes the success of different strategies at predicting turbofan failure. In early versions of Blip, I experimented with an animation that mimicked the tomato sorter motion – a tiny lever that knocked data classified as anomalous outside of the ‘good data’ bucket. Ultimately, I dropped the animation for performance reasons.

I’ll talk more about performance later. Let’s talk about continuity next.

Continuity

Blip features three different sections, with the data synced across all three.

In the final prototype, there are three sections. “Connections” shows the data streaming in, “Strategies” shows performance metrics for each of the algorithms, and “Visualized” visualizes their performance on each piece of data. One of the core concepts of the prototype is that the data is in sync across all of these sections. Having the data in sync and visibly updating everywhere was our main strategy for establishing continuity across the sections: it communicates to the user that the data is the same everywhere, and that any differences in the algorithm visualizations are a product of differences in their classification performance.

As I mentioned above, I initially attempted to use animation to establish continuity between the data. Continuity is one of the things animation is best at, and I still believe that something which showed each piece of data flowing directly from the connections into each algorithm’s visualization, where it is then acted on by a sorting mechanism, would be a strong illustration of the process. Design is a series of trade-offs, however, and I judged that the complexity needed to animate across the sections (which have to work across different screen sizes, and therefore different layouts) was not worth it in this case. Instead, I focused on speed and on simplifying all the visual elements (like the color scheme and the left-right classification system) to help guide the user into seeing the continuity (connecting the dots themselves).

Once I had decided to forgo complex animations, I leaned harder on one of the other inspirations for the prototype: terminal applications, where quickly updating numbers and animation restricted to a character grid are put to creative use. I was also inspired by the process of inspecting and debugging programs. You can pause the simulation to inspect the current state, and from there step through it one “tick” at a time. By doing that you can verify that the newest connection is classified in each visualization immediately as it appears (and that the totals in the performance metrics reflect that).
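
To make the pause-and-step behavior concrete, here is a minimal sketch of how a tick counter with play/pause/step controls might be wired up with hooks. The names (`useTicker`, `tick`, `step`) and the interval-based driver are illustrative assumptions, not Blip’s actual code.

```tsx
import { useEffect, useState } from "react";

// Illustrative only: a tick counter that can run continuously, be paused,
// and then be advanced one tick at a time for inspection.
export function useTicker(intervalMs: number) {
  const [tick, setTick] = useState(0);
  const [playing, setPlaying] = useState(true);

  useEffect(() => {
    if (!playing) return;
    const id = setInterval(() => setTick((t) => t + 1), intervalMs);
    return () => clearInterval(id);
  }, [playing, intervalMs]);

  // step() advances exactly one tick while paused
  const step = () => setTick((t) => t + 1);

  return { tick, playing, setPlaying, step };
}
```

Everything downstream (the classifications, the metrics, the visualizations) is derived from that single tick value, which is what keeps the three sections in lockstep.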

One place where I did end up adding more complex animation back in was in the strategy ranking. I used a package called react-flip-move to animate changes in the rankings. Seeing the rankings move, rather than just appear immediately in their new spot, makes it much easier to parse what changes are happening.
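
For reference, react-flip-move wraps a list and animates its children to their new positions whenever their order changes. A minimal sketch (the strategy names and scores here are made up):

```tsx
import FlipMove from "react-flip-move";

// Hypothetical ranking list: FlipMove animates each keyed child to its
// new position whenever the sorted order of the children changes.
function StrategyRankings({ strategies }: { strategies: { name: string; score: number }[] }) {
  const ranked = [...strategies].sort((a, b) => b.score - a.score);
  return (
    <FlipMove>
      {ranked.map((s) => (
        <div key={s.name}>
          {s.name}: {s.score.toFixed(2)}
        </div>
      ))}
    </FlipMove>
  );
}
```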

Visualizing classification

The visualizations show the classification performance of each of the algorithms.

One of the most interesting challenges in the prototype was figuring out how I wanted to spatially visualize the classifications. From early on, I thought I wanted two piles, one of positive classifications and one of negative, with the truth shown through color. I initially thought I would place a divider between the two sections, but there I ran into a subtle problem. I wanted all of the data to fit in the visualization (I didn’t want to have to adjust the layout half-way through the simulation to fit data still coming in). That part wasn’t too much of a problem; since I knew we were going to use 10,000 data points, I just needed to make sure the starting area could hold at least 10,000 of them.

The problem was the divider. If I put a vertical divider between the positive and negative classifications I would be telegraphing the balance of the classifications at the start of the simulation and ruining any suspense. The solution turned out to be obvious in retrospect. There is no divider, and each classification side builds from the outside in, so that the balance of the division only becomes visible at the end.
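
As a rough illustration of that outside-in layout, here is a sketch of the kind of index-to-cell math involved. The grid dimensions and function name are hypothetical, not taken from the prototype:

```ts
// Hypothetical layout math: place the n-th point of each pile on a grid,
// filling in from the outer edges so the dividing line between the piles
// is only revealed once the simulation is nearly complete.
const COLS = 100;
const ROWS = 100; // 100 x 100 cells = room for all 10,000 points

function cellFor(pile: "anomaly" | "normal", indexInPile: number) {
  const row = indexInPile % ROWS;
  const colFromEdge = Math.floor(indexInPile / ROWS);
  // anomalies grow in from the left edge, normal points in from the right
  const col = pile === "anomaly" ? colFromEdge : COLS - 1 - colFromEdge;
  return { row, col };
}
```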

Performance

Part of building Blip was making sure it could run all of these updates really fast and in sync. For the front-end I used the React library, and the visualizations are drawn using the canvas element (canvas is generally faster than drawing and animating HTML elements). React and canvas have different update models, which makes using them together… interesting. It’s something I’ve done a few times now, so I came in with a gameplan.

The tick counter state lives in React; each tick (watched using React useEffect hooks) triggers new calculations, and the results of those calculations are drawn to the canvas element. Where things get a bit Frankenstein-y is in the mix of text and visual updates. Canvas is not great at text: it doesn’t do text-wrapping by default, and unless you’re very careful the text comes out blurry. What I ended up doing was rendering the text as DOM elements overlaid on top of the canvas illustrations. This works, but it means a lot of my layout logic is duplicated between canvas (where it is basically just pixel math) and the DOM (where I have access to CSS layout). Every time I end up in this situation, I think I should just bite the bullet and do everything in canvas. Even though it would mean more work up front, it would result in less mental overhead as the project evolves. But, as I think I mentioned, design is about trade-offs, and I didn’t have time to nail down a pure-canvas approach this time. So it is a (working!) mish-mash of DOM and canvas. At least the recent innovation of React hooks made the update logic a lot cleaner for me this round.
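
A condensed sketch of that split, with placeholder names, sizes, and colors (not the prototype’s actual component): the tick lives in React state, a useEffect redraws the canvas whenever it changes, and text is rendered as ordinary DOM elements layered on top.

```tsx
import { useEffect, useRef } from "react";

// Sketch only: canvas for the fast-updating point field, DOM for the text.
function Visualization({ tick, points }: { tick: number; points: { x: number; y: number; anomalous: boolean }[] }) {
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useEffect(() => {
    const ctx = canvasRef.current?.getContext("2d");
    if (!ctx) return;
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    for (const p of points) {
      // colors are placeholders, not the prototype's palette
      ctx.fillStyle = p.anomalous ? "#d62728" : "#1f77b4";
      ctx.fillRect(p.x, p.y, 2, 2);
    }
  }, [tick, points]);

  return (
    <div style={{ position: "relative" }}>
      <canvas ref={canvasRef} width={600} height={400} />
      {/* text overlaid as DOM, where CSS layout and crisp type are easier */}
      <div style={{ position: "absolute", top: 0, left: 0 }}>tick {tick}</div>
    </div>
  );
}
```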

(That is the very abbreviated version of my thoughts on React and canvas. If you want to hear more, or have specific questions, just let me know (@grantcuster on Twitter or email).)

Check it out

Be sure to try Blip – and read more about its making (including the algorithms) and all things deep learning for anomaly detection in our report.
