
Dec 9, 2015 · interview

Fashion Goes Deep: Data Science at Lyst


On November 16, 2015, Lyst, an online fashion marketplace based in the United Kingdom, launched its first advertising campaign. Featuring a series of ironic headlines (one simply says “Rip-off”) etched over beautiful images, the campaign emphasizes the company’s identity as a “challenger brand” whose success, in the words of CEO Chris Morton, “has been driven by marrying insights from data science with the emotional nature of fashion.”

Lyst provides fashion consumers with a central platform where they can mix and match millions of products from 11,500 different brands. In this context, data science serves as a virtual personal shopper, recommending products to users based on insights from their behavior on the site. One might think these recommendations are powered by collaborative filtering, but the world of fashion is far too transient and fickle to support models that match similar users: interaction matrices are sparse and inventory drains rapidly (consider flash sale sites like Gilt). Instead, recommendation algorithms align consumer behavior with product features. And because fashion is a world dominated by image and appearance, fashion data science has a lot to do with image analysis.

We interviewed Eddie Bell, Lyst’s lead data scientist, about his team’s current efforts to use deep learning to analyze images and personalize recommendations to consumers. We talked about his past, his team’s present, and the fashion industry’s future. 

What is Lyst and what is its business model?

Lyst is a London-based online fashion marketplace similar to companies like Asos or Net-A-Porter. We don’t own any fashion products ourselves, but sell them on behalf of affiliated brands and retailers. We actually started off as a purely affiliate marketplace, where retailers rewarded us for driving web traffic to their sites. We scraped the web for semi-structured information about fashion products and then applied machine learning to present this information cleanly and elegantly for consumers from one central website. To make this experience more seamless, we eventually added our own online checkout system, where users can build a look combining different brands and then purchase everything directly from Lyst. We then send the transaction information to the individual retailers for shipping.

What’s your background and how did you end up working in fashion?

I studied machine learning and pure mathematics in graduate school, but was always drawn to applied and practical problems, to the possibility of building systems that seemed like magic to end users. I started my career working on financial trading systems, but was intrigued by the creativity of working with image data, which comprises the bulk of the data we work with in the fashion and retail space. I’ve been at Lyst for the past three years.

How does your data team provide personalized product recommendations to Lyst users?

We started off with a collaborative filtering approach, basing recommendations for a given user on items that similar users had liked and purchased in the past. Unfortunately, this didn’t work well for our data set. Because Lyst sells 11 million items that constantly go in and out of stock, our user-item interaction matrix was extremely sparse (most of its elements were zero), and we could never be sure that a given item would still be available for the next user.

So we needed a better approach. My colleague Maciej Kula led research and engineering efforts to build a new recommendation model called LightFM, which incorporates item and user metadata into a matrix factorization algorithm. LightFM represents each user and item as the sum of the latent representations of their features, allowing recommendations to generalize to new items (via item features) and to new users (via user features). If a registered Lyst user likes ten items with certain features, we recommend other items with similar features.
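
For a concrete sense of how this works, here is a minimal sketch using the open-source lightfm package that grew out of this work. The interaction and item-feature matrices below are random placeholders, not Lyst data, and the hyperparameters are illustrative only.

```python
import numpy as np
import scipy.sparse as sp
from lightfm import LightFM

n_users, n_items, n_item_features = 1_000, 5_000, 200

# Sparse user-item interaction matrix: 1 wherever a user liked or bought an item.
interactions = sp.random(n_users, n_items, density=0.001, format="coo", random_state=42)
interactions.data[:] = 1.0

# Sparse binary item-feature matrix: e.g. brand, category, and colour tags per item.
item_features = sp.random(n_items, n_item_features, density=0.05, format="csr", random_state=42)
item_features.data[:] = 1.0

# WARP loss optimizes ranking; each item is represented as the sum of the latent
# vectors of its features, so a brand-new item with known features can still be scored.
model = LightFM(loss="warp", no_components=30)
model.fit(interactions, item_features=item_features, epochs=10, num_threads=4)

# Score every item for user 0 and take the top ten recommendations.
scores = model.predict(0, np.arange(n_items), item_features=item_features)
top_items = np.argsort(-scores)[:10]
```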

We also modulate the frequency and density of recommendations to match user engagement with the site. For example, if a user signs in and looks at three different styles of Wellington boots, we’ll serve up multiple images of similar boots during that browsing session and quickly decay the frequency in future sessions.
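
As a toy illustration of this kind of decay (the half-life and weighting scheme below are assumptions for the sake of the example, not Lyst’s actual parameters):

```python
def interest_weight(sessions_since_interaction: int, half_life: float = 2.0) -> float:
    """Exponentially decay an interest signal: full strength during the session
    in which the user browsed the items, halving every `half_life` sessions."""
    return 0.5 ** (sessions_since_interaction / half_life)

# Weight applied to "Wellington boot" recommendations over subsequent sessions.
for session in range(5):
    print(session, round(interest_weight(session), 3))  # 1.0, 0.707, 0.5, 0.354, 0.25
```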


What about recommendations for consumers who don’t have a registered Lyst account?

We deal with this often given traffic to our site from Google advertisements or search results. Demographic segmentation across brands gives us a good place to start: if a user lands on a Gucci bag, we have a general idea that he or she is interested in luxury products, so we won’t show Zara street wear. Ultimately, however, the value of this social, demographic segmentation is limited. Someone who likes Gucci bags may prefer Prada dresses and Jimmy Choo shoes – one key value at Lyst is that we can aggregate recommendations to curate taste across different brands.

What machine learning techniques do you use to recognize features in fashion product images?

One of the key challenges in image processing for fashion is recognizing duplicates of the same item: you don’t want to serve the same user multiple images of one product in different contexts (a Gucci dress on or off the model) or colors (a Uniqlo shirt in white, black, or orange). Old-school visual processing techniques analyzed images for areas of gradient change, which often correlated with product features. The algorithms converted these points of interest into a vector, and we compared vectors from different images to find matches that signaled duplicates. For example, we could match a picture of a woman wearing an earring with an image of that earring on a blank background. This approach worked for certain stark patterns or shapes, but didn’t perform well on features like the local texture of a pair of blue jeans. These techniques were also entirely unsupervised, so we couldn’t fine-tune them.
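
A rough sketch of that classical pipeline, using OpenCV’s ORB detector and descriptor matching as a stand-in for whatever gradient-based detector was actually in use; the image paths and match threshold are placeholders.

```python
import cv2

# Load two product photos (paths are placeholders).
img_a = cv2.imread("earring_on_model.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("earring_on_white.jpg", cv2.IMREAD_GRAYSCALE)

# Detect points of interest (areas of strong gradient change) and describe them.
orb = cv2.ORB_create(nfeatures=1000)
kp_a, desc_a = orb.detectAndCompute(img_a, None)
kp_b, desc_b = orb.detectAndCompute(img_b, None)

# Match descriptors and keep only distinctive matches (Lowe's ratio test).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(desc_a, desc_b, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Many good matches suggest the two photos show the same product.
is_duplicate = len(good) > 40
```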

That was the old-school approach. Have you started using deep learning methods for image analysis?

We’re actually in the process of training the Lyst convolutional neural network now, and we predict it will provide a massive boost in accuracy and performance on feature detection and duplicate identification. Using supervised learning, we’re training the deep representation model on 5 million labeled images and fine-tuning the network to boost accuracy. Our labeling team combines workers from Amazon’s Mechanical Turk, remote contract moderators, and internal full-time moderators. We have a consensus process whereby each moderator gets a score proportional to their agreement with the other moderators; the higher a moderator’s score, the more weight their labels carry. At Lyst, we have a lot of models that use this human-in-the-loop methodology. Ultimately, the consensus process becomes encoded as intelligence in the model, and we need fewer moderators to produce confident labels.
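
A simplified sketch of what agreement-weighted consensus labeling can look like; the scoring and update scheme here is invented for illustration and is not Lyst’s actual process.

```python
from collections import defaultdict

def consensus_label(votes, moderator_scores):
    """votes: list of (moderator_id, label) pairs; moderator_scores: dict of
    agreement scores. Returns the label with the highest weighted support."""
    totals = defaultdict(float)
    for moderator, label in votes:
        totals[label] += moderator_scores.get(moderator, 1.0)
    return max(totals, key=totals.get)

def update_scores(votes, winning_label, moderator_scores, lr=0.1):
    """Nudge each moderator's score toward 1.5 when they agree with the consensus
    and toward 0.5 when they disagree, so reliable moderators gain weight over time."""
    for moderator, label in votes:
        agreed = 1.0 if label == winning_label else 0.0
        current = moderator_scores.get(moderator, 1.0)
        moderator_scores[moderator] = (1 - lr) * current + lr * (0.5 + agreed)

scores = {"turk_1": 1.0, "contractor_2": 1.0, "staff_3": 1.5}
votes = [("turk_1", "wedge sandal"), ("contractor_2", "ballet flat"), ("staff_3", "wedge sandal")]
label = consensus_label(votes, scores)   # "wedge sandal"
update_scores(votes, label, scores)
```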

We are building a network that has one “deep” representation of each product, metaphorically the network’s master concept of a given product. Like a natural language concept, we need a many-to-one as opposed to a one-to-one relationship between images and representations. Take a Gucci dress as an example. The Lyst data set may have an image of the dress on the hanger in the store, on the runway, on a static model, on people on the streets, or even written product descriptions of varying length. We want to use all this data to create one representation: “Gucci dress.”
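
One simple way to realize that many-to-one relationship is to pool per-image embeddings into a single product vector. The sketch below uses made-up embeddings and mean pooling purely as an illustration; a production system might learn the pooling end-to-end instead.

```python
import numpy as np

def product_representation(image_embeddings: np.ndarray) -> np.ndarray:
    """Collapse several per-image embeddings (n_images x dim), e.g. the hanger shot,
    runway shot, and street-style shot of one dress, into a single L2-normalized
    product vector by mean pooling."""
    pooled = image_embeddings.mean(axis=0)
    return pooled / np.linalg.norm(pooled)

# Three hypothetical 128-dimensional embeddings of the same Gucci dress.
embeddings = np.random.randn(3, 128)
dress_vector = product_representation(embeddings)
```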

What’s one challenge you’ve encountered while building your deep learning model and how did you address it?  

One challenge is that we have different quantities of text and image data for different items. To address this, we sourced additional fashion data from the internet, and we’re playing with the idea of synthetically generating labeled data to train our network. This hack was inspired by talks I saw at ICLR 2015 (the International Conference on Learning Representations) in San Diego. Because it can be cost-prohibitive for small research groups to acquire data sets with millions of labeled images, people have started to synthetically increase the size of their data sets through multisampling (where each image is cropped in multiple ways, or flipped horizontally and vertically) or by introducing noise into the input every time a piece of data is shown to the network (which improves performance on rotated or zoomed-in images).
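
A minimal sketch of that kind of augmentation: random crops, flips, and pixel noise applied to a single labeled photo. The crop size and noise level are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray, crop: int = 200) -> np.ndarray:
    """Produce one synthetic variant of an H x W x 3 image: random crop,
    random horizontal flip, and a little additive Gaussian pixel noise."""
    h, w, _ = image.shape
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    out = image[top:top + crop, left:left + crop].astype(np.float32)
    if rng.random() < 0.5:
        out = out[:, ::-1]                    # horizontal flip
    out += rng.normal(0, 5.0, out.shape)      # pixel noise
    return np.clip(out, 0, 255).astype(np.uint8)

# Turn one labeled photo into ten training examples that share its label.
photo = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
variants = [augment(photo) for _ in range(10)]
```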

Another technique we’re using to improve duplicate detection is called a region-based convolutional neural network, or R-CNN. We try to estimate bounding boxes around items or features in images to isolate garments or parts of items instead of looking at the images globally.
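
As an illustration of the region-based idea, here is a sketch using torchvision’s off-the-shelf Faster R-CNN (a modern descendant of R-CNN, requiring torchvision 0.13+). This is not Lyst’s in-house model, the image path is a placeholder, and the COCO-pretrained classes are generic objects rather than garments; in practice the detector would be fine-tuned on fashion categories.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained detector; would be fine-tuned on garment classes in practice.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("street_style_photo.jpg").convert("RGB")  # placeholder path
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# Keep confident detections; each box isolates one object instead of the whole image.
keep = predictions["scores"] > 0.8
boxes = predictions["boxes"][keep]    # (N, 4) tensor of [x1, y1, x2, y2]
labels = predictions["labels"][keep]
```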

When we built Pictograph, our deep learning prototype, we chose to manage low confidence labels by leveraging semantic relationships in WordNet, which is the backbone for ImageNet. Are you employing anything similar to manage low confidence labels in your network?

Yes, we do use analogous techniques. We have a fashion ontology associated with the Lyst network. It forms a semantic tree with men and women at the top, item types (e.g. shirts, shoes, or jewelry) as branches, and specific items (e.g. leather wedge sandals) as leaves. The network is trained on each individual level as well as on the full path through the tree. So if there is low confidence in classifying an item as a “leather wedge sandal,” there may be higher confidence in saying it’s a ladies’ shoe rather than a men’s suit. Our human-in-the-loop consensus method also helps train the network along these semantic lines.
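
A toy sketch of backing off through such an ontology when leaf-level confidence is low; the tree, probabilities, and threshold are invented for illustration.

```python
# Hypothetical slice of a fashion ontology: child -> parent.
PARENT = {
    "leather wedge sandal": "women's shoes",
    "ballet flat": "women's shoes",
    "women's shoes": "women",
    "men's suit": "men",
}

def most_specific_confident_label(leaf_probs: dict, threshold: float = 0.7) -> str:
    """Aggregate leaf probabilities up the tree and return the deepest node
    whose accumulated confidence clears the threshold."""
    node_probs = dict(leaf_probs)
    for leaf, p in leaf_probs.items():
        node = PARENT.get(leaf)
        while node is not None:
            node_probs[node] = node_probs.get(node, 0.0) + p
            node = PARENT.get(node)

    def depth(node):
        d = 0
        while node in PARENT:
            node = PARENT[node]
            d += 1
        return d

    confident = [n for n, p in node_probs.items() if p >= threshold]
    return max(confident, key=depth) if confident else "unknown"

probs = {"leather wedge sandal": 0.45, "ballet flat": 0.35, "men's suit": 0.2}
print(most_specific_confident_label(probs))  # "women's shoes" (0.8 aggregated)
```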

What will the fashion industry look like in the future?

The key question is whether traditional retailers will incorporate technology into their operations and strategy, or derive value from data and technology through partnerships with companies like Lyst. Our partnerships team is working hard to educate high-end, traditional brands about the value modern data science techniques can provide to their business. A key obstacle is their fear of losing control over their branding strategy: a high-end retailer like Prada doesn’t want its image mixed up with H&M or Zara on a central platform like Lyst. But we’ve seen tremendous adoption by consumers, who like having the control to mix and match products and create their own look. With capabilities like deep learning really taking off, we’re excited to see what comes next!
