Why Now? Some Preconditions for Technology Innovations

Aug 14 2015

We like to hold fast to the myth of the individual creative genius as the source of the world’s most impactful scientific revolutions and disruptive innovations. But it is humbling to recall how Isaac Newton deferred to his rival Robert Hooke: “If I have seen further it is by standing on the shoulders of giants.”

This is how the French painter Nicolas Poussin depicts Cedalion providing sight to the blind Orion, a mythological pair associated with each generation’s progress over its predecessors.

Fast Forward Labs clients frequently ask why the algorithms we explore in our reports have so suddenly become important. While the full answers are complex, we devote part of each report to the history of a given technique, suggesting what changes had to take place to render each innovation possible and practical.

Take deep learning. A basic form of the neural nets that underlie deep learning algorithms has been around since the 1950s. In 1958, Frank Rosenblatt told the New York Times that his Perceptron would be the beginning of computers that could walk, talk, see, write, reproduce themselves, and be conscious of their existence. As more than fifty years have passed without producing sentient machines - and despite fears of a so-called “intelligence explosion” that dominate discussions in Effective Altruism circles - we know these statements were exaggerated given the state of technology at the time. But three recent developments have caused resurgent interest in, and applicability of, neural nets.
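To make the connection concrete, here is a minimal sketch of the kind of model Rosenblatt proposed: a single linear threshold unit trained with the perceptron learning rule. The function names, learning rate, and OR-function example are illustrative choices, not details from the post.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Perceptron learning rule: predict 1 if w.x + b > 0, else 0,
    and nudge the weights only when a prediction is wrong."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

# Learn the linearly separable OR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # [0, 1, 1, 1]
```

The rule provably converges on linearly separable data like this, but famously fails on XOR - one reason perceptron research stalled until multi-layer networks and backpropagation revived it.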

First, there has been strong progress in our theoretical understanding of the networks themselves. Second, GPU (Graphics Processing Unit) computation has become affordable. GPUs were developed primarily for video gaming and similar applications, but they happen to be optimized for exactly the kind of processing that neural networks require. Finally, large image and video datasets are now available. This, more than anything, has motivated and enabled significant progress in both research and industry applications.
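Why are GPUs such a good fit? The core computation of a neural network layer is a dense matrix multiply followed by an elementwise nonlinearity - exactly the massively parallel arithmetic GPUs were built to do for graphics. The sketch below (illustrative layer sizes, not from the post) shows that computation with NumPy; swapping in a GPU array library accelerates the same operation.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, n_in, n_out = 64, 784, 128        # hypothetical layer sizes

X = rng.standard_normal((batch, n_in))   # a batch of input vectors
W = rng.standard_normal((n_in, n_out))   # layer weight matrix
b = np.zeros(n_out)                      # layer bias

# One layer's forward pass: ReLU(XW + b).
# The X @ W matrix multiply dominates the cost and parallelizes trivially.
hidden = np.maximum(0, X @ W + b)
print(hidden.shape)  # (64, 128)
```

Training repeats this multiply (and its transpose, for gradients) millions of times, which is why moving it onto GPU hardware changed what network sizes were practical.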

In a recent interview with Inverse, Fast Forward Labs Research Analyst Jessica Graves similarly described how shifts in the economics of data storage have made big data so important. She explains:

“Storage capabilities leaped over time. If you think in terms of the web, there’s a negligible hardware cost difference per user between storing data about 10 thousand users or 10 million users. I just read a piece about the development of tracking cookies in the ‘90s which made it easier to get a better sense of unique visitor information on websites. It’s cheaper than ever to store information about what consumers are doing on all parts of the web, down to how many seconds a user spent hovering over a candid celebrity photo.”

What’s your take? Why is deep learning gaining traction just now? And what material changes must occur to render the next big shift in AI possible?
