Dropout as a tool for understanding information distribution in human and machine visual systems
Jacob S. Prince, Harvard University, United States; Gabriel Fajardo, Boston College, United States; George A. Alvarez, Talia Konkle, Harvard University, United States
Session:
Contributed Talks 2 (Lecture)
Location:
South Schools / East Schools
Presentation Time:
Sat, 26 Aug, 17:00 - 17:15 United Kingdom Time
Abstract:
Deep neural networks are useful for operationalizing high-level visual representation spaces that are governed by specific architectural, input, and learning constraints. An underexplored but highly relevant pressure on representation formation is the mode of regularization applied during training. Here, we train a set of models with parametrically varying dropout proportion (p) to induce systematically varying degrees of distributed information while controlling all other inductive biases. We find that increasing dropout produces an increasingly smooth, low-dimensional representational space. Optimal robustness to lesioning is observed at around 70% dropout, after which both accuracy and robustness decline. Representational comparison to data from occipitotemporal cortex in the Natural Scenes Dataset reveals that this optimal degree of dropout is also associated with maximal emergent neural predictivity. These results suggest that varying dropout may reveal an optimal point of balance between the efficiency of high-dimensional codes and the robustness of low-dimensional codes in hierarchical vision systems.
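To make the central manipulation concrete, the sketch below shows one way the experiment could be set up: training models at parametrically varying dropout proportions p, then probing robustness by lesioning (zeroing) a random fraction of penultimate units at test time. This is a minimal illustrative PyTorch sketch, not the authors' code; the architecture (`SmallNet`), layer sizes, p values, and data loaders are all assumptions.

```python
# Minimal sketch (illustrative, not the authors' implementation):
# sweep dropout proportion p, then measure test-time robustness to
# lesioning a random fraction of penultimate-layer units.
import torch
import torch.nn as nn


class SmallNet(nn.Module):
    """Toy classifier with a configurable dropout proportion p."""

    def __init__(self, p: float, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.penultimate = nn.Sequential(
            nn.Linear(32 * 16, 256), nn.ReLU(),
            nn.Dropout(p),  # the regularization pressure being varied
        )
        self.head = nn.Linear(256, n_classes)

    def forward(self, x, lesion_frac: float = 0.0):
        h = self.penultimate(self.features(x))
        if lesion_frac > 0:
            # Test-time lesion: silence a random subset of units.
            mask = torch.rand(h.shape[-1], device=h.device) >= lesion_frac
            h = h * mask
        return self.head(h)


@torch.no_grad()
def lesioned_accuracy(model, loader, lesion_frac, device="cpu"):
    """Classification accuracy with a fraction of units lesioned."""
    model.eval()  # disables dropout so only the lesion perturbs the code
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        pred = model(x, lesion_frac=lesion_frac).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total


# Sweep: one model per dropout proportion, lesioned at test time.
# (Training loop omitted; `train_loader`/`test_loader` are assumed.)
for p in [0.0, 0.1, 0.3, 0.5, 0.7, 0.9]:
    model = SmallNet(p=p)
    # ... train `model` on train_loader ...
    # robustness = {f: lesioned_accuracy(model, test_loader, f)
    #               for f in (0.0, 0.25, 0.5)}
```

Under this framing, the abstract's result would correspond to the lesioned-accuracy curve degrading most gracefully for models trained near p = 0.7, with both clean accuracy and robustness falling off at higher dropout.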