Today’s post is inspired by a slow-motion recording we captured of a Stirling engine that Ryan, Fast Forward’s General Counsel, just so happened to have lying around our New York City offices. For the non-mechanics among us, a Stirling engine is a heat engine that operates by cyclic compression and expansion of air or another gas at different temperatures; the temperature differential converts heat into mechanical work. The slow motion in the video renders a hypnotic locomotive sound that differs greatly from the mechanical buzz of real time.
As we put finishing touches on our upcoming deep learning report, we thought we’d provide a few examples of how this technology works using the Stirling engine and some flowers in our offices.
Deep learning uses computational systems called “neural networks” to identify and recognize objects within images and video. Inspired by - but far from identical to - the behavior of neurons in our brains, neural networks are composed of individual, interconnected processing nodes that adapt to new input. Kris Hammond, Chief Scientist at Narrative Science, describes the learning process in Practical Artificial Intelligence for Dummies:
“Each of the connections between the nodes has a weight associated with it that is adjusted during learning. On the input side we might have all the pixel values of an image with output values that stand for a category like “cat” or “house.” If the output determined by the passing of values through these links is not the same as the output value set by the category, each node failing to match sends a signal back indicating that there was an error and the weights on the relevant links must change. Over time, these tiny changes steer the network toward the set of weights that enables the network to correctly assess that a new input is in the appropriate category.”
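The update loop Hammond describes can be sketched in a few lines of code. Below is a minimal, illustrative example (not from the post, and far simpler than a real deep network): a single artificial neuron whose connection weights are nudged by an error signal until its output matches the target category. The toy task (learning logical AND) and all names are our own assumptions for illustration.

```python
import math
import random

random.seed(0)

# Toy training data: inputs stand in for pixel values, the target
# (0 or 1) stands in for a category label like "cat" or "house".
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1) for _ in range(2)]  # connection weights
b = 0.0                                        # bias term
lr = 0.5                                       # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    # Pass the input values through the weighted links.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

for epoch in range(2000):
    for x, target in data:
        out = predict(x)
        # When the output doesn't match the category, an error signal
        # (here, the gradient of the squared error) tells each weight
        # how it must change.
        grad = (out - target) * out * (1 - out)
        for i in range(2):
            w[i] -= lr * grad * x[i]
        b -= lr * grad

print([round(predict(x)) for x, _ in data])  # learned AND: [0, 0, 0, 1]
```

Over many tiny adjustments, the weights settle on values that classify every input correctly; deep networks do the same thing across millions of weights and many stacked layers.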
Clarifai, one of the deep learning companies we survey in our report, includes a user-friendly demo on their website that illustrates how deep learning tags and classifies the objects in images. The demo suggested “motor,” “technology,” “vehicle,” and “equipment” as tags for our Stirling engine, picking up on the engine’s antique character by relating it to images of antique phones and candelabras.
For some objects, Clarifai’s algorithms suggested tags that denote not only what is depicted, but also related semantic fields and contexts (analytic philosophers may think of Frege’s morning and evening star). An image of our orchids yielded identification tags at varying levels of specificity and generality - “flower,” “orchids,” “bouquet” - as well as related terms like “elegant,” “wedding” and “still life.”
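Under the hood, a tagger like this typically scores every candidate tag independently and keeps the confident ones. Here is a hedged sketch of that final step; the scores below are invented for illustration (a real system would derive them from a trained network's output layer), and the threshold value is our own assumption.

```python
# Made-up per-tag confidence scores for the orchid image;
# a real tagger produces these from a neural network.
scores = {
    "flower": 0.97,
    "orchids": 0.91,
    "bouquet": 0.78,
    "elegant": 0.64,
    "wedding": 0.55,
    "still life": 0.52,
    "vehicle": 0.03,
}

THRESHOLD = 0.5  # keep only tags the model is reasonably confident in

tags = sorted(
    (tag for tag, s in scores.items() if s >= THRESHOLD),
    key=lambda t: scores[t],
    reverse=True,
)
print(tags)
```

Because each tag is scored on its own, the output naturally mixes specific labels (“orchids”) with general ones (“flower”) and contextual associations (“wedding”), just as in the demo.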
Image analysis like this is just the beginning of the applications for deep learning. We expect in the next few years to see things such as apps that take a photo of your meal and tell you how many calories are in it, better speech recognition, even machines that can diagnose disease and detect security issues. We’re excited to explore this in further detail in a webinar next month. Stay tuned for details!