Feb 20, 2020 · post

Building Blip: behind the scenes of our anomaly detection prototype

A screenshot of the Blip prototype.

Our anomaly detection prototype, Blip, shows how four different algorithms perform at detecting network attacks. Here is a look at some of the design, visualization, and front-end programming decisions that went into it. (For more on the algorithms themselves, check out the prototype section of our report.)

Tomato sorting

The concept of an anomaly is easy to visualize: it’s something that doesn’t look like the rest. That conceptual simplicity actually makes our prototype’s job trickier. If we show you a dataset where the anomalies are easy to spot, it’s not clear why you need an algorithm at all. So instead, we want to place you in a situation where the data is complicated enough, and streaming in fast enough, that the benefits of the algorithm are clear.

A tomato sorter sorting green tomatoes from red tomatoes.

But what does that look like? We took some inspiration from a viral YouTube video of a machine sorting tomatoes by color at incredible speed. The video is mesmerizing to watch, and it makes its point clearly: a human can definitely sort green tomatoes from red ones, but not at anything close to that speed.

An early version of the prototype, with tomato-sorter-inspired levers.

So we thought: what would it look like to visualize each algorithm’s classifications at really high speed? We had worked in a similar realm with Turbofan Tycoon, our federated learning prototype, which visualizes the success of different strategies at predicting turbofan failure. In early versions of Blip, I experimented with an animation that mimicked the tomato sorter’s motion: a tiny lever that knocked data classified as anomalous outside of the ‘good data’ bucket. Ultimately, I dropped the animation for performance reasons.

I’ll talk more about performance later. Let’s talk about continuity next.

Continuity

Blip features three different sections, with the data synced across all three.

In the final prototype, there are three sections: “Connections” shows the data streaming in, “Strategies” shows performance metrics for each of the algorithms, and “Visualized” visualizes each algorithm’s performance on each piece of data. One of the core concepts of the prototype is that the data is in sync across all of these sections. Having the data visibly updating everywhere was our main strategy for establishing continuity: it communicates to the user that the data is the same everywhere, and that any difference between the algorithm visualizations is a product of differences in their classification performance.
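As a rough sketch of that idea (the types and names here are mine for illustration, not Blip’s actual source), all three sections can read from one piece of shared state, so an update in one place is an update everywhere:

```tsx
import React, { createContext, useContext } from 'react';

// Hypothetical shape of the shared state; every section reads from this
// single source of truth, so the three views can never drift out of sync.
type Connection = { id: number; features: number[]; isAttack: boolean };
type SimulationState = {
  tick: number;
  connections: Connection[]; // data streamed in so far
  classifications: Record<string, boolean[]>; // per-algorithm labels
};

const SimulationContext = createContext<SimulationState | null>(null);

// Each section is just a different view of the same state.
function Connections() {
  const sim = useContext(SimulationContext);
  return <div>{sim?.connections.length ?? 0} connections so far</div>;
}
```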

As I mentioned above, I initially attempted to use animation to establish continuity between the data. Continuity is one of the things animation is best at, and I still believe that something showing each piece of data flowing directly from the connections into each algorithm’s visualization, where it is then acted on by a sorting mechanism, would be a strong illustration of the process. Design is a series of trade-offs, however, and I judged that the complexity required to animate across the sections (which have to work across different screen sizes, and therefore different layouts) was not worth it in this case. Instead, I focused on speed and on simplifying the visual elements (like the color scheme and the left-right classification system) to help the user see the continuity and connect the dots themselves.

Once I had decided to forgo complex animations, I leaned harder on one of the other inspirations for the prototype: terminal applications, where quickly updating numbers and animation restricted to a character grid are put to creative use. I was also inspired by the process of inspecting and debugging programs. You can pause the simulation to inspect the current state, and from there step through it one “tick” at a time. By doing that you can verify that the newest connection is classified in each visualization immediately as it appears (and that the totals in the performance metrics reflect that).
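A minimal sketch of that pause-and-step pattern, assuming a simple hook-based ticker (my own simplification, not Blip’s code):

```tsx
import { useEffect, useState } from 'react';

function useTicker(intervalMs: number) {
  const [tick, setTick] = useState(0);
  const [playing, setPlaying] = useState(true);

  // While playing, advance one tick per interval.
  useEffect(() => {
    if (!playing) return;
    const id = setInterval(() => setTick((t) => t + 1), intervalMs);
    return () => clearInterval(id);
  }, [playing, intervalMs]);

  // Pausing freezes the current state for inspection; step() advances
  // exactly one tick, so you can watch each new connection get classified.
  const step = () => setTick((t) => t + 1);

  return { tick, playing, setPlaying, step };
}
```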

One place where I did end up adding more complex animation back in was in the strategy ranking. I used a package called react-flip-move to animate changes in the rankings. Seeing the rankings move, rather than just appear immediately in their new spot, makes it much easier to parse what changes are happening.
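The usage is pleasantly small. A sketch, with a made-up Strategy type standing in for Blip’s real metrics:

```tsx
import React from 'react';
import FlipMove from 'react-flip-move';

type Strategy = { name: string; accuracy: number };

function Rankings({ strategies }: { strategies: Strategy[] }) {
  // Sort by the current metric; FlipMove animates each row from its
  // old position to its new one whenever the order changes.
  const ranked = [...strategies].sort((a, b) => b.accuracy - a.accuracy);
  return (
    <FlipMove>
      {ranked.map((s) => (
        <div key={s.name}>
          {s.name}: {(s.accuracy * 100).toFixed(1)}%
        </div>
      ))}
    </FlipMove>
  );
}
```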

Visualizing classification

The visualizations show the classification performance of each of the algorithms.

One of the most interesting challenges in the prototype was figuring out how I wanted to spatially visualize the classifications. From early on, I thought I wanted two piles, one of positive classifications and one of negative, with the ground truth shown through color. I initially thought I would place a divider between the two piles, but there I ran into a subtle problem. I wanted all of the data to fit in the visualization (I didn’t want to have to adjust the layout halfway through the simulation to fit data still coming in). That part wasn’t hard: since I knew we were going to use 10,000 data points, I just needed to make sure the starting area could hold at least 10,000 points.

The problem was the divider itself. If I put a vertical divider between the positive and negative classifications, I would be telegraphing the balance of the classifications at the start of the simulation and ruining any suspense. The solution turned out to be obvious in retrospect: there is no divider, and each classification side builds from the outside in, so the balance of the division only becomes visible at the end.
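Here is a sketch of that outside-in placement (the grid dimensions and names are mine, for illustration): the nth point on each side fills a column from its own edge inward, so the two fronts only meet at the end of the simulation.

```ts
// Grid sized to hold all 10,000 points up front, so the layout never
// has to change mid-simulation: 80 rows * 125 columns = 10,000 cells.
const ROWS = 80;
const COLS = 125;

// Cell for the nth point on a given side. Positive classifications fill
// columns from the left edge inward, negatives from the right edge inward,
// so no divider is needed and the balance is only revealed at the end.
function cellFor(n: number, side: 'positive' | 'negative') {
  const col = Math.floor(n / ROWS);
  const row = n % ROWS;
  return side === 'positive' ? { col, row } : { col: COLS - 1 - col, row };
}
```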

Performance

Part of building Blip was making sure it could run all of these updates really fast and in sync. For the front end I used the React library, and the visualizations are drawn using the canvas element (canvas is generally faster than drawing and animating HTML elements). React and canvas have different update models, which makes using them together… interesting. It’s something I’ve done a few times now, so I came in with a game plan.

The tick counter state lives in React; each tick (watched using React useEffect hooks) triggers new calculations, and the results of those calculations are drawn to the canvas element. Where things get a bit Frankenstein-y is in the mix of text and visual updates. Canvas is not great at text: it doesn’t do text-wrapping by default, and unless you’re very careful the text comes out blurry. What I ended up doing was rendering the text as DOM elements overlaid on top of the canvas illustrations. This works, but it means a lot of my layout logic is duplicated between canvas (where it is basically just pixel math) and the DOM (where I have access to CSS layout). Every time I end up in this situation, I think I should just bite the bullet and do everything in canvas. Even though it would mean more work up front, it would result in less mental overhead as the project evolves. But, as I mentioned, design is about trade-offs, and I didn’t have time to nail down a pure canvas approach this time. So it is a (working!) mish-mash of DOM and canvas. At least the recent addition of React hooks made the update logic a lot cleaner for me this round.
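In sketch form (a boiled-down version of the pattern, not Blip’s actual source), the canvas redraws in a useEffect keyed on the tick, while the text sits in absolutely positioned DOM elements layered on top:

```tsx
import React, { useEffect, useRef } from 'react';

type Point = { x: number; y: number; color: string };

function Pile({ tick, points }: { tick: number; points: Point[] }) {
  const canvasRef = useRef<HTMLCanvasElement>(null);

  // Redraw whenever the tick advances. The canvas is imperative: React
  // decides when to draw, and this effect decides what gets drawn.
  useEffect(() => {
    const ctx = canvasRef.current?.getContext('2d');
    if (!ctx) return;
    ctx.clearRect(0, 0, 500, 320);
    for (const p of points) {
      ctx.fillStyle = p.color;
      ctx.fillRect(p.x * 4, p.y * 4, 3, 3); // pure pixel math, no CSS here
    }
  }, [tick, points]);

  return (
    <div style={{ position: 'relative' }}>
      <canvas ref={canvasRef} width={500} height={320} />
      {/* Text stays in the DOM, overlaid on the canvas, where it wraps
          and renders crisply (at the cost of duplicated layout logic). */}
      <div style={{ position: 'absolute', top: 4, left: 4 }}>tick {tick}</div>
    </div>
  );
}
```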

(That is the very abbreviated version of my thoughts on React and canvas. If you want to hear more, or have specific questions, just let me know (@grantcuster on Twitter or email).)

Check it out

Be sure to try Blip, and read more about its making (including the algorithms) and all things deep learning for anomaly detection in our report.
