Pioneers in artificial intelligence (AI) worked across multiple related fields, including computer science, neuroscience, and psychology - but as each of these research areas has grown in complexity and disciplinary boundaries have solidified, collaboration has become less common. In Neuroscience-Inspired Artificial Intelligence, Demis Hassabis, co-founder of Google DeepMind, alongside other renowned neuroscientists, argues for reviving these collaborative efforts.
The (human) brain is a living case in point that human-level general AI is possible, but building it is a daunting task. The search space is vast and sparsely populated; biological intelligence provides a guide. Neuroscience can also validate AI techniques that already exist: if known algorithms are found to be implemented in the brain, they are likely an integral component of general intelligence systems.
Neuroscience also provides a rich source of inspiration for new types of algorithms and architectures; a set of recent papers (Stachenfeld et al., Constantinescu et al.) suggests there are types of data representations flexible and abstract enough to support the remarkable human capacity of generalizing experiences to novel situations — a tough nut many AI researchers are looking to crack (i.e., transfer learning, one- or zero-shot learning) — and that a mechanism for constructing these abstract representations from sensory experience exists.
A Nobel Prize-winning story of the hippocampus
It’s Nobel season, and in 2014, Edvard and May-Britt Moser, alongside John O’Keefe, were awarded the Nobel Prize in Physiology or Medicine for their discovery of a set of cells in the hippocampus (a structure deep inside the mammalian brain) and the neighboring entorhinal cortex thought to help us orient and navigate in space. Drivers of London’s black cabs, required to memorize some 25,000 streets and thousands of landmarks, for example, have a larger than usual hippocampus; their brains have adapted to the unique demands of their jobs.
Stachenfeld and colleagues show that the hippocampus does more than encode current location in space: it encodes “successor representations,” information about likely future locations given your current location.
Successor representations in decision making
Think about how you choose your route to work (or your next move in a game of chess or Go). To make a smart decision now, you need to estimate the likely future reward of each option. This is tricky because the number of possible scenarios grows exponentially the further you look into the future. AlphaGo Zero, the champion Go program built by Google DeepMind, uses Monte Carlo tree search to simulate possible futures in order to make smart decisions in the present.
Rats - capable of strategic, reward-maximizing decisions - are unlikely to use such computationally expensive methods. Successor representations offer a computationally less expensive yet flexible mechanism. They are a kind of look-up table that contains information about likely future states (e.g., locations) given the current state (i.e., where you will be, given where you are now). Combined with information about (future) reward, successor representations enable reward-maximizing decisions without expensive simulation. They also enable quick adaptation to changes in reward (a novel food source, for example) - while adaptations to changes in space (e.g., a new obstacle) will be slower.
Stachenfeld and colleagues offer empirical evidence for the existence of successor representations in the rat’s hippocampus and for the existence of a low-dimensional decomposition of successor representations in the entorhinal cortex (the main interface between the hippocampus and neocortex). The authors show that these low-dimensional decompositions of the successor representations lend themselves to the discovery of subgoals, a hallmark of efficient planning and the foundation for hierarchical, increasingly abstract representations of tasks required for the generalization of knowledge to novel scenarios.
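A toy illustration of the subgoal idea (the graph and states are invented; an eigendecomposition of the SR stands in for the low-dimensional code): on a "two rooms" environment, the slowest non-constant eigenvector of the successor representation splits the states into the two rooms, and the state where it crosses zero, the doorway, emerges as a candidate subgoal.

```python
import numpy as np

# Hypothetical "two rooms" graph: states 0-2 form one room, 4-6 the
# other, and state 3 is the doorway (bottleneck) connecting them.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (5, 6), (4, 6)]
n = 7
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
T = A / A.sum(axis=1, keepdims=True)  # random-walk transition matrix

gamma = 0.95
M = np.linalg.inv(np.eye(n) - gamma * T)  # successor representation

# Low-dimensional decomposition: eigenvectors of the SR. Under a random
# walk these coincide with the transition matrix's eigenvectors.
vals, vecs = np.linalg.eig(M)
order = np.argsort(-vals.real)
second = vecs[:, order[1]].real  # slowest non-constant mode

# Its sign separates the two rooms; the state closest to zero is the
# bottleneck between them -- a natural subgoal for hierarchical plans.
subgoal = int(np.argmin(np.abs(second)))
```

Note that `np.linalg.eig` returns eigenvalues in no particular order, hence the explicit sort before picking the second-slowest mode.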
Comparing model predictions (B) to reality (A), i.e., the firing rates of cells recorded in the hippocampus of a rat. As the rat is trained to run in a preferred direction along a narrow track, initially symmetric place fields (red) begin to skew (blue), as predicted in theory (B) and demonstrated in practice (A).
From rats to humans, from spatial navigation to abstract reasoning
This isn’t isolated to rats: as Constantinescu and colleagues show, humans also use these decompositions of successor representations during strategic planning and decision making. What’s more, successor representations and their decompositions are used not only during spatial navigation but also during abstract reasoning; abstract reasoning capabilities piggyback on representations evolved for spatial reasoning tasks.
Taken together, successor representations and their decompositions provide us with a clue as to how the brain computes abstract representations from sensory inputs that allow us (human and non-human animals) to generalize our experiences to novel situations, thus showing that the collaboration between neuroscience, psychology, and AI could be a very fruitful one indeed.