A high-dimensional view of computational neuroscience

Keynote Abstract

Brains and artificial neural networks represent information in population codes, defined by the activity patterns of many neurons. Understanding the statistical principles that govern these population codes is central to cognitive computational neuroscience. In this Keynote, we will present converging lines of work from neuroscience and machine learning that reveal the statistical underpinnings of neural populations from the perspective of their principal components. We will show that the population codes of visual cortex and neural network models of vision have surprisingly high-dimensional latent structure, which contrasts with the idea that vision compresses high-dimensional sensory inputs down to low-dimensional representations. We will also demonstrate that the latent dimensions of neural networks explain their learned representations better than the tuning properties of single neurons, which has implications for how we might illuminate the black box of deep networks. The statistical framework presented here identifies general principles of neural representation that abstract over lower-level details of biological and artificial systems, including the network architectures and task objectives that are often emphasized in deep learning approaches in neuroscience. Together, this work points toward a simplifying statistical theory of sensory representation in neural populations.
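
To make the abstract's central measurement concrete, the sketch below estimates the latent dimensionality of a population code from its principal component spectrum. It is a minimal illustration on simulated data, not an analysis from the talk; the participation ratio used here is one common effective-dimensionality statistic, and the variable names are hypothetical.

```python
# Minimal sketch: estimate the latent dimensionality of a population code
# from its principal component spectrum. Data are simulated; in practice,
# `responses` would hold recorded or model activations.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.standard_normal((1000, 500))  # hypothetical (stimuli x neurons)

X = responses - responses.mean(axis=0)        # center each neuron's responses
eigvals = np.linalg.svd(X, compute_uv=False) ** 2 / (X.shape[0] - 1)

# Participation ratio: large for flat eigenspectra (high-dimensional codes),
# small when a few components dominate (low-dimensional codes).
effective_dim = eigvals.sum() ** 2 / (eigvals ** 2).sum()
print(f"Effective dimensionality: {effective_dim:.1f} of {X.shape[1]} units")
```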

Tutorial Plan

In this tutorial, participants will learn key concepts and methods for understanding the latent statistical structure of high-dimensional representations. We will begin with an introduction to the statistical analysis of latent subspaces in high-dimensional systems, focusing on linear methods: principal component analysis (PCA), singular value decomposition (SVD), partial least squares (PLS), and Procrustes analysis. We will discuss the underlying relationships among these methods and connect them to intuitive geometric interpretations. As we will show, computational neuroscientists cannot simply rely on standard software packages for analyzing latent subspaces, because neuroscience datasets contain a mix of meaningful signal and random noise that must be separated. To address this, we will cover neuroscience-specific considerations and extensions of these methods, including cross-validation and cross-decomposition techniques for identifying shared subspaces across datasets. Participants will also learn how to investigate artificial neural networks by examining the statistical structure of their weight matrices. We will show how this approach can be used to uncover signatures of learning in neural networks, to probe a network's representations, and to assess similarities among different networks.
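
The sketch below is our own minimal illustration (not the tutorial's materials) of how these linear methods relate: the SVD underlies both PCA and orthogonal Procrustes alignment, and a simple split-half cross-validation separates signal components from noise. The matrices `X` and `Y` are simulated stand-ins for two (stimuli x units) response matrices, such as activations from two networks.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))                      # hypothetical system 1
Y = X @ rng.standard_normal((50, 50)) + 0.1 * rng.standard_normal((200, 50))

# PCA via SVD: the right singular vectors of the centered data matrix are the
# principal components, and squared singular values give the variance spectrum.
Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs, spectrum = Vt, S ** 2 / (Xc.shape[0] - 1)

# Orthogonal Procrustes: the rotation that best aligns Y to X also comes from
# an SVD, applied to the cross-covariance between the two systems.
U2, _, Vt2 = np.linalg.svd(Yc.T @ Xc, full_matrices=False)
R = U2 @ Vt2                                            # optimal rotation of Y
alignment_error = np.linalg.norm(Xc - Yc @ R) / np.linalg.norm(Xc)

# Split-half cross-validation: project held-out stimuli onto training PCs so
# that components fitting only noise capture little held-out variance.
train, test = Xc[:100], Xc[100:]
Vt_train = np.linalg.svd(train - train.mean(axis=0), full_matrices=False)[2]
cv_spectrum = ((test - test.mean(axis=0)) @ Vt_train.T).var(axis=0)

print(f"Procrustes alignment error: {alignment_error:.3f}")
```

Cross-decomposition methods such as PLS follow the same template, applying the SVD to the cross-covariance between two datasets rather than within a single one.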

Throughout the tutorial, we will lead a series of hands-on exercises in which these statistical methods are applied to fMRI data and artificial neural networks. The tools and methods, however, are general-purpose and can be applied to a wide range of datasets in computational neuroscience. Participants will learn how to use Python software tools to investigate latent statistical signals in high-dimensional data, how to compute key summary statistics for latent subspaces, and how to visualize the underlying structure of high-dimensional representations. The tutorial is designed to be accessible to researchers with basic familiarity with Python and some experience with representational modeling, such as representational similarity analysis and linear regression. All mathematical concepts will be covered at a high level, with the aim of keeping the tutorial accessible to participants who may be unfamiliar with them. By the end of the tutorial, participants will be prepared to apply these general-purpose statistical methods to their own datasets.
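
For a flavor of this visualization workflow, the sketch below plots a principal component spectrum on log-log axes. This is our illustration rather than the tutorial's actual notebook, and the input file name is a placeholder for whatever response matrix is under study.

```python
# Hypothetical visualization workflow: load a (stimuli x units) response
# matrix, compute its eigenspectrum, and plot it on log-log axes.
import numpy as np
import matplotlib.pyplot as plt

responses = np.load("responses.npy")          # placeholder input file

X = responses - responses.mean(axis=0)
eigvals = np.linalg.svd(X, compute_uv=False) ** 2 / (X.shape[0] - 1)

# On log-log axes, an approximately power-law eigenspectrum appears as a
# straight line, a common signature of high-dimensional population codes.
ranks = np.arange(1, eigvals.size + 1)
plt.loglog(ranks, eigvals, marker=".")
plt.xlabel("Principal component rank")
plt.ylabel("Variance (eigenvalue)")
plt.title("Eigenspectrum of the representation")
plt.tight_layout()
plt.show()
```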

Keynote Speakers

Michael F. Bonner

Johns Hopkins University, Department of Cognitive Science

Florentin Guth

École Normale Supérieure, Département d'Informatique

Tutorial Leaders

Atlas Kazemian

Johns Hopkins University, Department of Cognitive Science

Raj Magesh Gauthaman

Johns Hopkins University, Department of Cognitive Science

Zirui Chen

Johns Hopkins University, Department of Cognitive Science