Apr 18, 2018 · post

Introducing SciFi

Today we are launching a mini-site featuring our collection of short stories inspired by new developments in machine learning. Beginning with our fourth report, we started including a science-fiction story along with the technical and strategic overviews that are the bulk of each report. Using these stories, we can look at the technologies we profile from a different angle and explore their cultural implications.

The SciFi site includes the four stories we have included so far. We will continue to commission and publish a story with each new report, and we are working on plans to open the process up to a wider range of voices.

Below you’ll find some background and interpretation for each of the stories. I can speak authoritatively about the intent for the two that I wrote, but for the others, please note that these are my interpretations.

Mars Terraform Expansion S-217

A network graph representation of different opinions on the Mars Terraform Expansion. Eco-conservative views are toward the left and expansionist views toward the right.

The first story we published is the least narrative of the bunch. The report focused on summarization as a gateway to making text quantifiable (and therefore computable). In the story, I focused on imagining the kinds of interfaces that capability could enable. The system in the story is able to identify the key points of different articles and synthesize them into a coherent summary. It is also able to identify their political orientation and place them in relation to one another.

I continue to be fascinated by the summarization and mapping possibilities that natural language processing technologies could open up. I’m especially interested in experiments that use deep learning and dimensionality reduction techniques to create visualizations of the relationships between different concepts (see, for example, Sepand Ansari’s Encartopedia). If I were rewriting the story today, I think I would include some ambiguity about whether the system’s representations were truly reliable. For a much more comprehensive look at how a similar information processing system could be both extremely useful and also dangerous, check out Nick Harkaway’s mind-bending novel Gnomon.
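
If you're curious what that kind of concept mapping can look like in practice, here is a minimal, purely illustrative sketch (the documents and pipeline are invented, not the system from the story): embed a handful of short texts as TF-IDF vectors, then project them into two dimensions so related opinions land near one another.

```python
# Illustrative sketch: embed a few short texts and project them to 2D.
# The texts are invented stand-ins for articles on the "Mars Terraform
# Expansion"; real concept maps would use richer learned embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "Terraforming Mars will open new frontiers for settlement and industry.",
    "Expansion of the terraform zone promises economic growth on Mars.",
    "We must preserve the Martian environment from industrial expansion.",
    "Conservation of Mars matters more than short-term economic gain.",
]

vectors = TfidfVectorizer().fit_transform(docs)               # doc-term matrix
coords = TruncatedSVD(n_components=2).fit_transform(vectors)  # 2D projection

for doc, (x, y) in zip(docs, coords):
    print(f"({x:+.2f}, {y:+.2f})  {doc[:45]}...")
```

A real interface would plot these coordinates (and, as the story imagines, annotate them with political orientation), but even this toy version shows how quantified text becomes mappable.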

BayesHead 5000

The letterhead for the fictional Monte Carlo Corporation. It features two dice where the dots spell out an “M” and a “C”.

In “BayesHead 5000”, Liam Sweeney imagines a customer service letter from the future. The report it appears in focused on probabilistic programming, which makes advanced statistical techniques more accessible to a broad programming audience. The story imagines a future where those techniques are made personally available via a brain implant (for a price).
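
Probabilistic programming itself is easier to show than to describe. The toy below is invented for illustration and uses a crude grid approximation rather than a real probabilistic programming language, but it captures the core move: combine a prior with observed data to get a posterior belief.

```python
# Toy Bayesian inference: estimate a coin's bias from observed flips
# using a grid approximation with a uniform prior. Purely illustrative.
heads, tails = 7, 3                      # observed data
grid = [i / 100 for i in range(1, 100)]  # candidate bias values

# Posterior is proportional to likelihood times the (uniform) prior
posterior = [p**heads * (1 - p)**tails for p in grid]
total = sum(posterior)
posterior = [w / total for w in posterior]

mean_bias = sum(p * w for p, w in zip(grid, posterior))
print(f"posterior mean bias: {mean_bias:.2f}")  # close to the Beta(8, 4) mean
```

Probabilistic programming languages automate this kind of inference for far more complex models, which is exactly the accessibility the report (and, satirically, the BayesHead implant) is about.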

A big part of the fun of the story is piecing together the kind of world it takes place in through the euphemistic sales-speak of the service representative. It’s a good reminder that whatever fantastical technologies we may invent, we’ll still probably relate to one another in the same annoyingly convoluted ways.

The story is inspired by George Saunders’ “I Can Speak”. Saunders is an expert at revealing (often through an unreliable narrator) the absurdity of conventions and systems that we take for granted. Hopefully, stories like these help us remember not to view the systems that develop around new capabilities as inevitable, but rather as things we are all involved in making – things for which we are collectively responsible.

The Definition of Success

A table showing the ship computer’s decision making process. The success prediction is shown to be most influenced by the potential profit for the mission.

“The Definition of Success” appears in our report on interpretability, which focuses on techniques for making deep learning models more interpretable. The story takes its inspiration very directly from the film Alien. Alien is already the story of uninterpretable AI (the ship computer, Mother). My main addition was an interpretability interface, similar to the prototype we built to accompany the report, that reveals the features underlying the ship’s decisions. Based on this information, the protagonist is able to adjust the AI to provide more survival-oriented advice.
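
To make the idea concrete, here is a hypothetical sketch of such an interface: a linear “success” score whose weights can be inspected, exposing which features drive the ship’s advice, and then adjusted. The feature names and weights are invented for illustration, not taken from the report or its prototype.

```python
# Hypothetical "ship computer" with an inspectable linear success score.
# All feature names and weights are invented for illustration.
features = {"crew_survival": 0.1, "mission_profit": 0.8, "ship_integrity": 0.1}
situation = {"crew_survival": 0.2, "mission_profit": 0.9, "ship_integrity": 0.5}

def score(weights, x):
    """Weighted sum of situation features: the ship's 'success' prediction."""
    return sum(weights[k] * x[k] for k in weights)

# Inspecting the weights exposes the value system: profit dominates.
dominant = max(features, key=features.get)
print("most influential feature:", dominant)

# The protagonist's fix: re-weight the score toward survival-oriented advice.
adjusted = {"crew_survival": 0.7, "mission_profit": 0.1, "ship_integrity": 0.2}
print("original score:", round(score(features, situation), 2))
print("adjusted score:", round(score(adjusted, situation), 2))
```

Real interpretability techniques work on far less transparent models, but the point is the same: once the weights are visible, the owner’s values are visible too.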

Like “BayesHead 5000”, the story highlights the degree to which larger economic and political systems direct the use of technology. It is not the story of an AI gone rogue. The ship has simply inherited the value system of its owner, Space Exploitation Corp. It is working as designed. Hopefully the story makes the point that interpretability is necessary not just in Matrix-style machine revolt situations, but also to make sure human-controlled systems are not acting contrary to basic decency.

Customers Who Haven’t Read Kafka Also Like

A computer in a high-rise office building. Two plants surround the computer. Its screen reads “Customers Who Haven’t Read Kafka Also Like”.

The most recent story, by Kent Szlauderbach, is inspired by Franz Kafka’s “An Imperial Message”. The story appeared in our report on Semantic Recommendations, which looked at building systems that used deep learning to consider the content of an item when making recommendations. Kafka is another writer interested in how we relate to the systems that surround us, making his work an excellent starting point for examining how we relate to algorithmic recommendations.
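
As a rough illustration of what “considering the content” means, the sketch below recommends items by the similarity of their text descriptions. The titles and blurbs are invented, and a real semantic system would use learned embeddings rather than TF-IDF, but the shape of the idea is the same.

```python
# Minimal content-based recommendation: suggest the item whose description
# is most similar to the query item's. Titles and blurbs are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = {
    "The Trial": "an ordinary man faces an opaque and unaccountable bureaucracy",
    "The Castle": "a land surveyor struggles against an opaque bureaucracy",
    "Space Opera": "a starship crew battles hostile aliens across the galaxy",
}
titles = list(items)
matrix = TfidfVectorizer().fit_transform(items.values())
sims = cosine_similarity(matrix)

query = "The Trial"
i = titles.index(query)
ranked = sorted((t for t in titles if t != query),
                key=lambda t: sims[i][titles.index(t)], reverse=True)
print(f"customers who liked {query!r} also like: {ranked[0]!r}")
```

Whether any such similarity score captures a story’s “true meaning” is, of course, exactly the question Szlauderbach’s story keeps open.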

“The message,” in both the Kafka story and this one, can never be delivered. The story invokes the possibility that we could capture and quantify the true meaning of a story (“Say the most powerful computer in the nation sends a message, in a fatal error, containing the story’s true meaning to you, a modest user”), only to keep withdrawing that message outside our reach. For me, there’s a Zen koan thing happening, where the desire to pin down a fixed meaning is repeatedly denied, and through that I’m forced to reflect on why that denial makes me uncomfortable. The desire to quantify and classify on a large scale is a driving force behind the technology we develop. This story helps me recognize that desire in myself and think about its limits.

A continuing conversation

One thing I really like about the last three stories is that each builds directly on an earlier work, which shows how stories remain relevant in helping us think through the technology and systems that surround us. They’re part of a conversation that stretches back to (at least) Kafka, writing at the beginning of the 20th century. As Annalee Newitz and Charlie Jane Anders discuss on the third episode of their excellent Our Opinions Are Correct podcast, sci-fi books that stand the test of time continue to be relevant not because of the precision of their predictions, but because they meaningfully engage with how we relate to technology as individuals and as a society. These stories are a part of that larger conversation.
