CT-2.6

Beyond Geometry: Comparing the Temporal Structure of Computation in Neural Circuits with Dynamic Mode Representational Similarity Analysis

Mitchell Ostrow, Adam Eisen, Leo Kozachkov, Ila Fiete, Massachusetts Institute of Technology, United States

Session:
Contributed Talks 2 (Lecture)

Track:
Cognitive science

Location:
South Schools / East Schools

Presentation Time:
Sat, 26 Aug, 17:45 - 18:00 United Kingdom Time

Abstract:
How can we tell whether two neural networks are performing the same computation? This question has grown increasingly important as artificial neural network models and experimental methods for recording neural data have improved. Thus far, most attempts to answer it have compared neural networks, both artificial and biological, in terms of the spatial geometry of their neural activity. However, this approach does not account for how information is transformed over time, which is essential for understanding biological circuits. To address this issue, we developed a data-driven method called Dynamic Mode Representational Similarity Analysis (DMRSA). DMRSA uses a high-dimensional embedding to identify spatiotemporally coherent features of two nonlinear dynamical systems. These features, known as Koopman modes, capture each system's core dynamic patterns. A statistical shape analysis then compares the modes, quantifying the similarity between the systems' dynamics. Our results demonstrate that DMRSA identifies the dynamic structure of neural computations where standard geometric methods fall short. DMRSA is therefore especially relevant for neuroscience, where the underlying dynamics can only be inferred from measurements, and it opens the door to new data-driven analyses of the temporal structure of neural computation.
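
To give a concrete picture of the pipeline sketched in the abstract, the following is a minimal Python (NumPy-only) sketch, not the authors' implementation: a delay-style embedding and a least-squares DMD fit stand in for the Koopman-mode step, and a crude random search over orthogonal alignments stands in for the statistical shape analysis. All function names (delay_embed, fit_linear_dynamics, dynamics_dissimilarity) and parameter choices are illustrative assumptions.

```python
# Illustrative DMRSA-style sketch (assumptions only; not the authors' code).
import numpy as np


def delay_embed(X, n_delays):
    """Stack n_delays time-shifted copies of X (time x neurons) side by side."""
    T, _ = X.shape
    rows = T - n_delays + 1
    return np.hstack([X[d:d + rows] for d in range(n_delays)])


def fit_linear_dynamics(H, rank):
    """DMD-style fit: project onto the top principal components, then solve
    the least-squares problem Z[t+1] ~ Z[t] @ A in the reduced space."""
    H = H - H.mean(axis=0)
    _, _, Vt = np.linalg.svd(H, full_matrices=False)
    Z = H @ Vt[:rank].T                              # reduced coordinates
    A, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)
    return A                                         # (rank x rank) dynamics matrix


def random_orthogonal(dim, rng):
    """Random orthogonal matrix via QR of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.standard_normal((dim, dim)))
    return Q * np.sign(np.diag(R))


def dynamics_dissimilarity(A1, A2, n_restarts=500, seed=0):
    """Crude stand-in for the shape analysis: search over orthogonal C and
    score ||A1 - C A2 C^T||_F; smaller means more similar dynamics."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(n_restarts):
        C = random_orthogonal(A1.shape[0], rng)
        best = min(best, np.linalg.norm(A1 - C @ A2 @ C.T))
    return best


# Toy check: two noisy rotations that share dynamics up to a change of basis.
rng = np.random.default_rng(1)
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X1 = np.zeros((500, 2))
X1[0] = [1.0, 0.0]
for t in range(499):
    X1[t + 1] = X1[t] @ R.T + 0.01 * rng.standard_normal(2)
X2 = X1 @ random_orthogonal(2, rng)                  # same dynamics, rotated basis

A1 = fit_linear_dynamics(delay_embed(X1, 5), rank=2)
A2 = fit_linear_dynamics(delay_embed(X2, 5), rank=2)
print("dissimilarity:", dynamics_dissimilarity(A1, A2))
```

In this toy example the two trajectories differ in their measured coordinates but share the same underlying rotational dynamics, so the dissimilarity score should be small; a proper implementation would replace the random search with an optimization over the alignment transform.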

License:
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI:
10.32470/CCN.2023.1356-0
Publication:
2023 Conference on Cognitive Computational Neuroscience
Session CT-2
CT-2.1: Mental Imagery: Weak Vision or Compressed Vision?
Tiasha Saha Roy, Jesse Breedlove, Ghislain St-Yves, Kendrick Kay, Thomas Naselaris, University of Minnesota, United States
CT-2.2: Leveraging Artificial Neural Networks to Enhance Diagnostic Efficiency in Autism Spectrum Disorder: A Study on Facial Emotion Recognition
Kushin Mukherjee, University of Wisconsin-Madison, United States; Na Yeon Kim, California Institute of Technology, United States; Shirin Taghian Alamooti, York University, Canada; Ralph Adolphs, California Institute of Technology, United States; Kohitij Kar, York University, Canada
CT-2.3: Dropout as a tool for understanding information distribution in human and machine visual systems
Jacob S. Prince, Harvard University, United States; Gabriel Fajardo, Boston College, United States; George A. Alvarez, Talia Konkle, Harvard University, United States
CT-2.4: Humans and 3D neural field models make similar 3D shape judgements
Thomas O'Connell, MIT, United States; Tyler Bonnen, Stanford University, United States; Yoni Friedman, Ayush Tewari, Josh Tenenbaum, Vincent Sitzmann, Nancy Kanwisher, MIT, United States
CT-2.5: Humans and CNNs see differently: Action affordances are represented in scene-selective visual cortex but not CNNs
Clemens G. Bartnik, Iris I.A. Groen, University of Amsterdam, Netherlands
CT-2.6: Beyond Geometry: Comparing the Temporal Structure of Computation in Neural Circuits with Dynamic Mode Representational Similarity Analysis
Mitchell Ostrow, Adam Eisen, Leo Kozachkov, Ila Fiete, Massachusetts Institute of Technology, United States