Connecting hippocampal representations to predictive auxiliary tasks in deep RL
Ching Fang, Columbia University, United States; Kimberly Stachenfeld, DeepMind, United States
Session:
Posters 1B Poster
Presentation Time:
Thu, 24 Aug, 17:00 - 19:00 United Kingdom Time
Abstract:
The ability to predict upcoming events has been hypothesized to comprise a key aspect of human and animal cognition. Hippocampus is thought to be involved in this function, supporting memory-guided behaviors like navigation and model-based planning wherein past experiences are translated into future plans. However, little is known about how this hypothesized role in prediction constrains hippocampal inputs, or what influence interconnected regions performing competing tasks (like dopaminergic learning) might have on these populations. Interestingly, in deep reinforcement learning (RL), predictive auxiliary tasks have been found to improve task performance by improving the representational features learned in a deep RL model. We use this framework to study how regions with different computational objectives can jointly influence representation learning. We find that prediction, as an auxiliary representational objective, can benefit representation learning in tasks with sparse rewards, particularly in transfer learning settings. We also show how splitter cell phenomena can arise from this model. This work recalls classic theories of hippocampus as a system that records features of an episode that are not immediately relevant to the computation of reward but whose relevance may later be determined and consolidated.
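To make the "predictive auxiliary task" idea concrete, here is a minimal sketch (not the authors' actual model) of how a shared encoder can be trained by a reward-based value loss plus an auxiliary next-observation prediction loss. All names, the linear architecture, and the mixing weight `beta` are assumptions for illustration only.

```python
import numpy as np

# Assumed toy setup: a linear encoder whose features serve two heads at once,
# a value head (reward-driven objective) and a prediction head (auxiliary
# objective that forecasts the next observation).
rng = np.random.default_rng(0)
obs_dim, feat_dim = 8, 4

W = rng.normal(size=(feat_dim, obs_dim)) * 0.1    # shared encoder (hypothetical)
w_v = rng.normal(size=feat_dim) * 0.1             # value head
P = rng.normal(size=(obs_dim, feat_dim)) * 0.1    # next-observation prediction head

def losses(o_t, o_next, target_v):
    z = W @ o_t                                       # shared features
    value_loss = 0.5 * (w_v @ z - target_v) ** 2      # reward-based objective
    pred_loss = 0.5 * np.sum((P @ z - o_next) ** 2)   # predictive auxiliary objective
    return value_loss, pred_loss

o_t = rng.normal(size=obs_dim)
o_next = rng.normal(size=obs_dim)
value_loss, pred_loss = losses(o_t, o_next, target_v=1.0)

beta = 0.5                                  # auxiliary weight (assumed)
total = value_loss + beta * pred_loss       # one scalar loss shapes the shared encoder
```

Because both terms backpropagate through the same encoder `W`, the auxiliary prediction loss supplies a learning signal even when rewards are sparse and `value_loss` is uninformative, which is the mechanism the abstract appeals to.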