The resurrection of neural networks as a technique has helped propel the field of machine learning to the forefront of commercial applications. Today’s most popular applications focus on finding patterns in data and exploiting those patterns for very narrow tasks. But what if we want more from machine learning? Instead of trying to contort the methods we have today to achieve marginal gains in generalizability, what if we took another angle altogether?
Earlier this year, MIT launched MIT Intelligence Quest, a large multidisciplinary group of researchers and engineers with two branches: “The Core” and “The Bridge.” The Core seeks to advance our understanding of intelligence in both humans and machines, with the hope that work in each area can inform the other. The Bridge provides a clear path to application for industry and for academic labs both on and off MIT’s campus. The research community involved in MIT Quest does not just develop new algorithms and build tools to implement them; the group also studies the social, cultural, and ethical implications of AI research.
Some projects already coming out of The Quest are “moonshot” ideas with the potential to genuinely change the world. One such moonshot comes out of Josh Tenenbaum’s lab, which aims to build models with infant-like capabilities to learn, drawing on notions like “intuitive physics” and “intuitive psychology.” These models are built as Bayesian programs that can learn a new concept from a single example (one-shot learning) and then generalize from it. Tenenbaum’s moonshot is just one of many projects under way as part of The Quest.
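To give a flavor of the one-shot idea, here is a deliberately tiny sketch: each class is defined by a single exemplar, and a new example is assigned to whichever class best explains it under a simple Gaussian likelihood. This is only an illustrative toy, not the Bayesian program learning approach from Tenenbaum’s lab, which composes full generative programs; the function names and data here are invented for the example.

```python
import math

def log_likelihood(x, exemplar, sigma=1.0):
    """Log-density of x under an isotropic Gaussian centered on one exemplar."""
    return -sum((a - b) ** 2 for a, b in zip(x, exemplar)) / (2 * sigma ** 2)

def one_shot_classify(x, exemplars):
    """Pick the class whose lone training example best explains x."""
    return max(exemplars, key=lambda label: log_likelihood(x, exemplars[label]))

# One training example per class -- that's the "one-shot" part.
exemplars = {"circle": (0.9, 0.1), "line": (0.1, 0.9)}
print(one_shot_classify((0.8, 0.2), exemplars))  # → circle
```

The interesting part of the real research is what happens after that single example: the learned concept can be used generatively, to produce new instances, not just to classify.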
Moonshot (pun intended) Photo by Mike Petrucci on Unsplash
We’re excited to see where this research leads and are pleased to see the announcement this week that MIT received an unprecedented $1 billion to create the Stephen A. Schwarzman College of Computing. This new college will likely help fund many more AI moonshot projects to come!