Mar 10, 2017 · interview

Trees, Layers, and Speed: Talking Optimization with Patrick Hayes

In a modern twist on Claude Shannon’s Theseus, SigOpt explains optimization by teaching a mouse to solve a randomly generated maze

The “learning” in machine learning refers to the process of updating and tuning the parameters of a model. For example, if we take the function f(x) = ax^2 + bx + c, learning means adjusting the values of a, b, and c so that our function does a better job of describing our data.
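
To make this concrete, here is a minimal sketch of that kind of learning: fitting the coefficients a, b, and c to noisy synthetic data with an ordinary least-squares polynomial fit. NumPy is assumed, and the data here are made up purely for illustration.

```python
# Fit the coefficients of f(x) = ax^2 + bx + c to noisy data.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data generated from known coefficients plus noise.
true_a, true_b, true_c = 1.5, -2.0, 0.5
x = np.linspace(-3, 3, 50)
y = true_a * x**2 + true_b * x + true_c + rng.normal(0, 1.0, x.shape)

# np.polyfit "learns" the coefficients [a, b, c] by least squares.
a, b, c = np.polyfit(x, y, deg=2)
print(f"learned: a={a:.2f}, b={b:.2f}, c={c:.2f}")
```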

But what if we zoom out and focus on higher level model properties? Instead of learning coefficients for a function to best fit our data, is it possible to learn how many layers to put in a neural network or how many trees to put in a random forest model?

The machine learning community calls this hyperparameter optimization. There are a few competing optimization approaches available today, and SigOpt, a San Francisco startup, is the first company we know of that offers a platform to automate optimization work for just about any machine learning project and pipeline.
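
For a concrete baseline, here is what tuning such hyperparameters looks like with plain grid search in scikit-learn. This is a deliberately naive example of one of the competing approaches mentioned above, not SigOpt’s method; the dataset and parameter grid are arbitrary choices for illustration.

```python
# Naive hyperparameter search: exhaustively try a small grid of
# values for the number of trees and depth of a random forest.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_digits(return_X_y=True)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [10, 50, 100, 200], "max_depth": [4, 8, None]},
    cv=3,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```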

SigOpt has an excellent technical blog on this topic if you’d like to dig in further. To start you off, we recently interviewed SigOpt Co-founder and VP of Engineering Patrick Hayes. Keep reading for highlights!

You studied math and computer science at the University of Waterloo, which has a unique Co-op Program where students alternate between four-month study terms at university and four-month work terms in business, industry, or government. How did this experience shape your interests?

I did six internships during my undergraduate degree, at companies like BlackBerry, Sybase, Facebook, and Bloomberg, so I left school with a good deal of practical experience under my belt. While many of these companies were large, I also spent time on a small team at Wish, a mobile commerce company, where even as a junior engineer I was able to work on topic analysis using a Naive Bayes bag-of-words model. While many data scientists come from statistics or physics backgrounds, I studied pure math, which shaped the way I approach problems. That said, it’s my training in computer science, not math, that has really impacted my practical day-to-day work. I think people from different intellectual backgrounds can succeed in data science. At SigOpt, we combine expertise in applied math, full-stack engineering, and machine learning, and we love having such diversity to attack problems from different directions.

Did you prefer working in a big or small company?

The two environments develop different sets of skills. I liked working at a smaller company like Wish because it gave me the opportunity to work on bigger projects; I had more ownership over my work and was able to wear different hats. I liked having a wide array of experiences early in my career to learn what I like to do, as well as what I don’t, and to build the skills to make a big impact later on. That said, big companies offer some nice benefits, like more support structures and processes and the ability and time to do a deep dive into a given project.

Why did you found SigOpt and what’s it been like working with your investors?

My co-founder, Scott Clark, researched hyperparameter optimization during his PhD work at Cornell. Hyperparameters are the higher-level properties of a model, such as its complexity (e.g., the number of layers in a neural network or trees in a random forest) or learning rate, and are usually fixed before model training begins rather than tweaked over time to improve performance. I had a lot of respect for Scott, having worked with him in the past, and felt that turning this research into a product could make a difference across many industries and problem sets. We started in Y Combinator at the beginning of 2015, and later received investment from Andreessen Horowitz. Y Combinator helped us stay focused in the early days. As Scott and I are both technical founders, they helped us learn how to run a business: our ability to pitch to investors or sell to customers was like night and day after a few months! The Y Combinator network, including fellow cohort founders and alumni, was also critical to our early success. Andreessen Horowitz took us to the next step by introducing us to prospects, helping us hire new talent, and helping us grow into executives. I’ve personally learned a lot from these experiences. I came in as an engineer and was in my comfort zone writing code and solving problems. But to build a business, I had to spend 14 hours a day doing things I used to be bad at, like sales, marketing, hiring, and managing! It’s been great to watch our team progress on these activities.

How does the product work?

SigOpt is an optimization platform for any machine learning pipeline: our API includes various optimization technologies for hyperparameters, feature selection, machine learning model selection, or even algorithmic trading strategies. The goal is to enable our users to get the best possible version of their model. We recently published a paper demonstrating that our methods outperform other common approaches to this problem, like random search and grid search.
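
In practice, this means wrapping your training-and-evaluation code in a suggest/observe loop. The sketch below follows the general shape of SigOpt’s Python client from around this period, but treat the exact names as assumptions from memory, and `evaluate_model` as a user-supplied placeholder.

```python
# Hedged sketch of a SigOpt-style suggest/observe loop; parameter
# names are from memory and evaluate_model() is a placeholder the
# user supplies (it trains a model and returns a score to maximize).
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")

experiment = conn.experiments().create(
    name="Random forest tuning",
    parameters=[
        dict(name="n_estimators", type="int", bounds=dict(min=10, max=500)),
        dict(name="max_depth", type="int", bounds=dict(min=2, max=20)),
    ],
)

for _ in range(30):
    # Ask the service for the next promising hyperparameter setting...
    suggestion = conn.experiments(experiment.id).suggestions().create()
    # ...evaluate the model at those settings (user-supplied code)...
    value = evaluate_model(suggestion.assignments)
    # ...and report the result so the optimizer can refine its model.
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id, value=value
    )
```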

SigOpt focuses on Bayesian optimization ensembles. What does that mean and how do Bayesian techniques support parameter optimization?

There’s a solid 2015 paper that gives an introduction to Bayesian optimization. Mathematically, optimization solves the problem of finding a global maximum (or minimum) of some function f. Global is important here because we don’t want to get stuck somewhere that merely looks good locally. Crucially, we only need to observe the function f through unbiased point-wise observations. What makes the techniques Bayesian is that we start with a prior belief over the possible responses of the objective function we’re optimizing, and sequentially refine this model as data are observed, i.e., we update our posterior. We can then leverage the uncertainty baked into the Bayesian posterior to guide further exploration and exploitation of the function: exploration means learning more about areas where we have greater uncertainty, while exploitation means going deeper in areas where we are confident the model performs well.
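
To ground the exploration/exploitation idea, here is a small, self-contained sketch of Bayesian optimization with a Gaussian-process surrogate and an expected-improvement acquisition function, using scikit-learn and SciPy. It is an illustrative toy on a one-dimensional function, not SigOpt’s implementation.

```python
# Toy Bayesian optimization: a Gaussian-process posterior guides
# where to sample next via expected improvement (EI).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # The black-box function we want to maximize (unknown in practice).
    return -(x - 2.0) ** 2 + 3.0

rng = np.random.default_rng(0)
bounds = (-5.0, 5.0)

# A few random observations to seed the model.
X = rng.uniform(*bounds, size=(3, 1))
y = np.array([objective(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):
    gp.fit(X, y)  # update the posterior with all observations so far

    # EI is high where the predicted mean is promising (exploitation)
    # or the predictive uncertainty is large (exploration).
    candidates = np.linspace(*bounds, 1000).reshape(-1, 1)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

    # Observe the function at the most promising candidate.
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print(f"best x = {X[np.argmax(y)][0]:.3f}, f(x) = {y.max():.3f}")
```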

The term ensemble refers to the fact that there are a variety of ways to do black-box optimization. As with all applied AI, it’s a balance between different factors: some techniques are accurate but slow, others are less accurate but fast; some perform better on functions with many parameters, others on functions with fewer parameters. Our product does the work of selecting which strategy will perform best for the problem at hand, so our customers don’t have to.
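
One way to picture the ensemble idea is as routing logic that dispatches a problem to the optimizer best suited to it. The rules below are purely hypothetical, invented for illustration; they are not SigOpt’s actual selection criteria.

```python
# Purely hypothetical routing logic illustrating the "ensemble"
# concept: pick a black-box optimizer based on problem traits.
# Thresholds and optimizer names are invented for illustration.
def choose_optimizer(num_parameters: int, eval_cost_seconds: float) -> str:
    if eval_cost_seconds > 60 and num_parameters <= 20:
        # Expensive evaluations justify a careful, model-based
        # (e.g., Gaussian-process) optimizer, even if it is slower.
        return "gaussian_process_expected_improvement"
    if num_parameters > 20:
        # High-dimensional problems favor cheaper, scalable methods.
        return "random_search"
    # Otherwise, a fast middle-ground strategy.
    return "tree_structured_parzen_estimator"

print(choose_optimizer(num_parameters=8, eval_cost_seconds=300.0))
```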

What are some interesting use cases?

We’ve seen pretty strong adoption in algorithmic trading to help quants tune their models faster when they discover a new feature or as the data changes. There have also been a few interesting physical use cases, like optimizing the chemical composition in shaving cream or the process parameters for making beer. It can be expensive for companies to make something new that might fail, so they really need to get the best version as soon as possible. These experiments are often solved by trial and error, but optimization software helps make domain experts more efficient.

Can SigOpt help organizations with the long-term operation and maintenance of models? A common challenge companies face is that the models their data scientists build are only relevant at one point in time. From your experience as an engineer, do you have tips for productionizing models?

It’s true that models are built and optimized for a given setting, and as your data changes or you add more features, they quickly become outdated. SigOpt can help maintain model performance even in the face of a changing landscape or ongoing product development. We want to encourage people to treat optimization as a first-class citizen in their machine learning and AI pipelines. Our algorithmic trading customers do this well: they need an automated process to get their models ready to perform, and they only have the 16 hours between trading sessions to do it. Necessity, as they say, is the mother of invention.

How will tools like SigOpt change the work and skill set of data scientists over the next few years? Will it be possible to automate feature engineering, for example?

I don’t think feature engineering will be automated any time soon. Hyperparameter optimization is another example of letting machines do what machines do best, freeing humans to focus on more creative and critical-thinking activities. Our brains aren’t made to optimize 20-dimensional functions in our heads, but they are made to explore aspects of a data set and frame a problem worth solving. I’d advise aspiring data scientists to make sure they understand the full machine learning pipeline, from data collection and processing, through model building, through engineering and shipping the model. You can’t be satisfied with just part of the process. Production machine learning requires a different skill set than theoretical data science. There are certain ways of thinking and behaving you can only learn by doing: by getting your hands dirty on the job, working with real, messy data, and making a model scale on real infrastructure.

Optimization can be a powerful tool for complex neural network architectures

What book or article has had the greatest influence on you recently?

Susan Fowler’s article about her experience at Uber. It’s important that we continue to support diversity in the tech industry, to make sure everyone in the field feels comfortable and supported in their work environment. We think a lot about culture at SigOpt, and we all work to cultivate an environment of respect that can support a diverse team.
