
Mar 22, 2017 · talk slides

Taking Prophet for a Spin

Facebook recently released Prophet, a general purpose time series forecasting package with both Python and R interfaces.

Python and R already have plenty of time series forecasting options, so why is Prophet interesting? It caught our eye because the backend is implemented in Stan, a probabilistic programming language we researched in our most recent report.

This choice means that Prophet offers many of the advantages of the Bayesian approach. In particular, the models have a simple, interpretable structure (seasonality) on which prior analyst knowledge can be imposed, and forecasts include confidence intervals derived from the full posterior distribution, which means they offer a data-driven estimate of risk.

But by keeping the probabilistic programming language in the backend, the choice of Stan becomes an implementation detail to the user, who is probably a data analyst with a time series modeling problem. This user can continue to work entirely in a general purpose language they already know.

In this post, we take Prophet for a spin, exploring its user interface and performance with a couple of datasets.

The model

Prophet implements a general purpose time series model suitable for the kind of data seen at Facebook. It offers piecewise trends, multiple seasonality (day of week, day of year, etc.), and floating holidays.

Prophet frames the time series forecasting problem as a curve-fitting exercise. The dependent variable is a sum of three components: growth, periodic seasonality, and holidays.
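
In the notation of the Prophet paper, the model is roughly y(t) = g(t) + s(t) + h(t) + ε_t, where g(t) is the trend, s(t) the periodic seasonality, h(t) the holiday effects, and ε_t a noise term.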

Prophet models nonlinear growth using a logistic growth model with a time-varying carrying capacity. It models linear growth as a piecewise linear trend whose growth rate is piecewise constant. Changepoints (points in time where the growth rate is allowed to change) are modeled using a vector of rate adjustments, each corresponding to a specific point in time. Each rate adjustment is given a Laplace prior with location parameter 0. Analysts can specify changepoints by providing specific dates, or control the flexibility of the automatic changepoint detection by adjusting the scale parameter of the Laplace prior.
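
As a rough sketch of how this surfaces in the Python interface (the DataFrame df, the capacity value, and the changepoint date are placeholders; the parameter names follow Prophet's constructor):

from fbprophet import Prophet

# df has columns 'ds' (date) and 'y' (value); logistic growth also needs a
# carrying capacity column named 'cap'
df['cap'] = 1000.0                    # assumed ceiling for the series being forecast

m = Prophet(
    growth='logistic',                # saturating, nonlinear growth
    changepoints=['2015-01-01'],      # optionally pin changepoints to known dates...
    changepoint_prior_scale=0.05,     # ...and/or adjust the Laplace scale (0.05 is the default)
)
m.fit(df)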

Prophet models periodic seasonality using a standard Fourier series. For yearly and weekly seasonality, the series is truncated at 20 and 6 terms (10 and 3 frequencies, each contributing a sine and a cosine coefficient), respectively. The seasonal component is smoothed with a normal prior.
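
Concretely, the seasonal component is a linear combination of sine and cosine features of the date. A minimal numpy sketch of that kind of feature construction (not Prophet's actual code) is:

import numpy as np

def fourier_features(t_days, period, order):
    # One sine and one cosine column per frequency, `order` frequencies in total
    return np.column_stack([
        fn(2.0 * np.pi * (k + 1) * t_days / period)
        for k in range(order)
        for fn in (np.sin, np.cos)
    ])

# Yearly seasonality with 10 frequencies -> 20 columns; weekly with 3 -> 6 columns
yearly = fourier_features(np.arange(730.0), 365.25, 10)
weekly = fourier_features(np.arange(730.0), 7.0, 3)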

Finally, holidays are modeled using indicator functions. Each indicator takes the value 1 on the corresponding holiday, and the holiday's effect is given a normal smoothing prior.

For both the seasonal and holiday priors, analysts can adjust the scale parameter to control how much of the historical seasonal variation is expected to carry into the future.
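
In the Python interface these adjustments correspond to constructor arguments; the values below are purely illustrative, not recommendations:

from fbprophet import Prophet

m = Prophet(
    seasonality_prior_scale=10.0,   # wider prior: seasonality may deviate more from the historical pattern
    holidays_prior_scale=5.0,       # tighter prior: holiday effects are shrunk harder toward zero
)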

Using Prophet

The model is specified in a short Stan listing that gets compiled behind the scenes when Prophet is first installed. The user need never touch the Stan code, and works with Prophet entirely through its Python or R interfaces.

To demonstrate these interfaces, let’s run Prophet on a famous dataset with extremely strong seasonality: atmospheric carbon dioxide as measured on the Hawaiian volcano Mauna Loa.
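
Prophet expects a two-column DataFrame: the date in ds and the value to forecast in y. A hypothetical preparation step (the file and its column names are assumptions) might look like this:

import pandas as pd

raw = pd.read_csv('maunaloa_co2.csv')       # assumed file with 'date' and 'co2' columns
maunaloa = pd.DataFrame({
    'ds': pd.to_datetime(raw['date']),      # datestamp column expected by Prophet
    'y': raw['co2'],                        # value to forecast
})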

Having prepared a pandas DataFrame maunaloa, running Prophet is just a couple of lines:

from fbprophet import Prophet   # the Python package name at the time of writing

m = Prophet()
m.fit(maunaloa)
future = m.make_future_dataframe(periods=120, freq='M')   # 120 monthly periods, i.e. a 10-year horizon
forecast = m.predict(future)

This code takes a couple of seconds to run and yields the following forecast:

Prophet’s simple model is easily able to detect the strong annual periodicity and long-term upwards trend. Note that the forecast comes with data-driven confidence intervals for free, a crucial advantage of probabilistic programming systems.
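
The plot is produced by the fitted model’s own plotting helper, which draws the observations, the forecast, and its uncertainty band (matplotlib is required):

fig = m.plot(forecast)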

Prophet also yields simple, interpretable plots of the components of the time series decomposition: the trend (by date), the weekly effect (by day of week), and the yearly effect (by day of year).
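
These come from a companion plotting helper:

fig = m.plot_components(forecast)   # one panel per component: trend, weekly, yearly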

Notice the weekly component is much smaller than the other two, and likely mostly noise. This makes sense; global atmospheric chemistry doesn’t vary by day of the week! On the other hand, the yearly component shows the seasonal impact of northern hemisphere vegetation on carbon dioxide: levels are lower after the summer and higher after the winter.

Birth data

Let’s now run Prophet on a more challenging dataset: the number of births in the United States by day of the year. This dataset was analyzed using Gaussian processes and made famous by its appearance on the cover of Bayesian Data Analysis, the textbook by Andrew Gelman and coauthors. It’s a dataset with seasonality (both yearly and weekly) and holiday effects.

m = Prophet(changepoint_prior_scale=0.1)   # loosen the changepoint prior (the default is 0.05)
m.fit(birthdates)
future = m.make_future_dataframe(periods=365)   # forecast one year past the end of the data
forecast = m.predict(future)

Here we demonstrate Prophet’s automatic changepoint detection while adjusting the changepoint smoothing parameter: instead of the default value of 0.05, we set it to 0.1. This makes the resulting forecast more flexible and less smooth, but also more susceptible to chasing noise. If we were doing this for real, we would of course conduct a formal cross-validation to determine the proper value of this hyperparameter empirically.
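
A minimal rolling-origin holdout for that purpose might look like the sketch below (this is not Prophet’s built-in tooling; the cutoff date, horizon, and error metric are arbitrary illustrative choices):

import numpy as np
import pandas as pd
from fbprophet import Prophet

def holdout_mae(df, cutoff, horizon_days, scale):
    # Fit on data up to the cutoff, forecast the next horizon, score against what actually happened
    train = df[df['ds'] <= cutoff]
    test = df[(df['ds'] > cutoff) & (df['ds'] <= cutoff + pd.Timedelta(days=horizon_days))]
    m = Prophet(changepoint_prior_scale=scale)
    m.fit(train)
    forecast = m.predict(test[['ds']])
    return np.mean(np.abs(forecast['yhat'].values - test['y'].values))

for scale in [0.01, 0.05, 0.1, 0.5]:
    print(scale, holdout_mae(birthdates, pd.Timestamp('1986-12-31'), 365, scale))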

Prophet takes about a minute to run on this dataset (black points) and gives the following forecast (blue line). We show here a truncated time series from 1987 to 1990.

We can see the origin of the almost bimodal distribution of the data in the component plots. Prophet finds the strong weekday/weekend variation. We also see the yearly seasonality effects: more births around August to October.

These components are very similar to those found using Gaussian processes. That analysis also finds spikes in the number of births on specific days of the year; for example, the number of births is anomalously low on New Year’s Day and high on Valentine’s Day. We stopped short of doing this, but these special days could be captured in Prophet as “holidays” by defining an indicator variable series that marks whether each date covered by the dataset and forecast is a holiday.
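
A hedged sketch of such a specification (only two years of dates are shown; in practice every occurrence in the history and the forecast horizon would be listed):

import pandas as pd
from fbprophet import Prophet

# One row per occurrence of each special day
holidays = pd.DataFrame({
    'holiday': ['new_years', 'valentines', 'new_years', 'valentines'],
    'ds': pd.to_datetime(['1987-01-01', '1987-02-14', '1988-01-01', '1988-02-14']),
})

m = Prophet(holidays=holidays, changepoint_prior_scale=0.1)
m.fit(birthdates)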

Advantages of Prophet

In our probabilistic programming report we emphasized that the Bayesian approach, made simpler by probabilistic languages like Stan and pymc3, allows developers and statisticians to quantify the probability of all outcomes, not just determine the most likely prediction. The ability to incorporate prior knowledge and the interpretability of the resulting models make them more practical.

Prophet makes these advantages concrete for a specific use case: forecasting. It makes sensible choices for a general purpose time series modeling function. Some flexibility is sacrificed in the modeling choices, but the trade-off is a great one from the point of view of the intended typical Prophet user. It abstracts away the complexity of working with Stan’s powerful but somewhat eccentric interfaces behind idiomatic Python and R APIs, which makes the system even easier and quicker for data scientists and analysts to use. Prophet is a great example of a robust, user-friendly probabilistic programming product.
