Last Thursday Hilary and I headed to Clarifai’s offices in the Flatiron District to ask CEO Matt Zeiler about using deep learning for image analysis. A few highlights from the interview:
1) The success of a deep learning project depends on the quality of the initial training data set. Deep learning algorithms start by scanning massive data sets to identify features (inputs) that can be correlated with categories (outputs) to make sense of the data. The neural nets behind deep learning are powerful because the individual nodes adjust over time, but they aren’t powerful enough to override the quality of the training data.
2) When analyzing images, deep learning algorithms can pick up not only features correlated with names of things (“this sharp pointed beak maps well to our category bird!”) but also features correlated with more abstract concepts like togetherness or romance. Clarifai’s API does not classify every image with two people next to one another as demonstrating “togetherness,” but only those images where the people are touching, embracing, or showing other signs of connection.
3) The best way to understand a neural network is to build one on your own, perhaps using an open source framework. As deep learning is currently a very active research field, there are lots of options, including Theano, Keras, Caffe, and Torch.
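To make point 3 concrete, here is a minimal from-scratch sketch of the idea those frameworks automate: a tiny two-layer network trained with backpropagation, so each node's weights adjust a little on every pass. This uses plain NumPy rather than any of the frameworks listed above, and the task (XOR), architecture, and hyperparameters are illustrative assumptions, not anything from the interview.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: a tiny task a single-layer net cannot solve, so it needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units; weights start small and random.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def forward(X):
    h = sigmoid(X @ W1)        # hidden-layer activations
    return h, sigmoid(h @ W2)  # network output

_, out0 = forward(X)
initial_error = np.abs(y - out0).mean()

lr = 1.0
for _ in range(20000):
    h, out = forward(X)
    # Backpropagation: push the output error back through each layer
    # and nudge the weights in the direction that reduces it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

_, out_final = forward(X)
final_error = np.abs(y - out_final).mean()
print(f"mean error before training: {initial_error:.3f}, after: {final_error:.3f}")
```

Running this, the error shrinks as the weights adjust, which is Zeiler's point in miniature: the nodes adapt to the training data, so the network can only ever be as good as the examples you feed it.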
Check out the interview recording and feel free to reach out with questions about whether deep learning is right for you!