High-dimensional Sampling in Random Neural Networks Competes With Deep Learning Models of Visual Cortex
Atlas Kazemian, Eric Elmoznino, Michael Bonner, Johns Hopkins University, United States
Session:
Posters 3B (Poster)
Presentation Time:
Sat, 26 Aug, 13:00 - 15:00 United Kingdom Time
Abstract:
The performance of convolutional neural networks (CNNs) as representational models of visual cortex is thought to be associated with their optimization on ethologically relevant tasks. Contrary to this view, we show that a surprisingly simple statistical principle based on high-dimensional sampling of random features is sufficient to induce brain-like representations in neural network models of visual cortex. Specifically, we constructed CNNs that perform random dimensionality expansion and found that fewer than a thousand features are needed to compete with standard supervised networks at predicting the feature-tuning preferences of primate visual cortex, avoiding the need for massive pre-training or task-specific optimization. Furthermore, we found that when random expansion is followed by dimensionality reduction, the dominant modes of variation correspond to brain-relevant dimensions. In fact, random-expansion CNNs remain competitive with standard pre-trained CNNs even when matching their dimensionalities. Remarkably, this means that brain-relevant dimensions are readily discoverable from the statistics of image activations in random convolutional architectures. These findings reveal the unexpected effectiveness of random expansion in neural network models of vision, and they point toward a simplifying statistical theory of cortical visual representation.
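The pipeline the abstract describes, random expansion of image activations into a high-dimensional feature space followed by dimensionality reduction, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the authors' method: it uses a single fixed random projection with a ReLU in place of their random convolutional architecture, synthetic Gaussian data in place of image activations, and PCA via SVD for the reduction step. The array sizes (200 samples, 64 inputs, 1000 random features, 50 retained components) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for image activations: 200 "images" x 64 input features.
# (The actual work uses activations from random convolutional networks.)
X = rng.standard_normal((200, 64))

# Random dimensionality expansion: a fixed random weight matrix projects the
# inputs into a much higher-dimensional space; a ReLU supplies the
# nonlinearity (assumed details, chosen for simplicity).
W = rng.standard_normal((64, 1000)) / np.sqrt(64)
H = np.maximum(X @ W, 0.0)  # expanded random features, shape (200, 1000)

# Dimensionality reduction: PCA via SVD recovers the dominant modes of
# variation of the expanded features -- the step at which, per the abstract,
# brain-relevant dimensions emerge.
Hc = H - H.mean(axis=0)          # center before PCA
U, S, Vt = np.linalg.svd(Hc, full_matrices=False)
k = 50
Z = Hc @ Vt[:k].T                # project onto the top-k principal components

print(Z.shape)  # (200, 50)
```

In a brain-prediction setting, the reduced features `Z` would then be regressed against neural responses; here the sketch stops at the representation itself.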