CT-4.2

Teasing apart the representational spaces of ANN language models to discover key axes of model-to-brain alignment

Eghbal Hosseini, Noga Zaslavsky, Colton Casto, Evelina Fedorenko, Massachusetts Institute of Technology, United States

Session:
Contributed Talks 4 (Lecture)

Track:
Cognitive science

Location:
South Schools / East Schools

Presentation Time:
Sun, 27 Aug, 13:45 - 14:00 United Kingdom Time

Abstract:
A central goal of neuroscience is to uncover neural representations that underlie sensorimotor and cognitive processes. Artificial neural networks (ANNs) can provide hypotheses about the nature of neural representations. However, in the domain of language, multiple ANN models provide a good match to human neural responses. To dissociate these models, we devised an optimization procedure to select stimuli for which model representations are maximally distinct. Surprisingly, we found that all models struggle to predict brain responses (fMRI) to such stimuli. We further a) confirmed that these sentences are not outliers in terms of linguistic properties and that neural responses to these sentences are as reliable as those to random sentences, and b) replicated this finding in another, previously collected, dataset. Stimuli for which model representations differ can be used to uncover dimensions of ANN-to-brain alignment, and can serve to guide the development of more brain-like computational models of language.
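The core idea of selecting stimuli on which models' representational spaces disagree can be sketched as follows. This is an illustrative sketch only, not the authors' actual optimization procedure: the scoring rule (per-stimulus disagreement between the two models' representational dissimilarity profiles) and all function names are assumptions.

```python
# Hypothetical sketch of "controversial" stimulus selection: rank candidate
# sentences by how differently two models place them within their respective
# representational geometries. Not the authors' actual method.
import numpy as np

def rdm(X):
    """Stimulus-by-stimulus correlation-distance matrix over rows of X."""
    return 1.0 - np.corrcoef(X)

def disagreement_scores(emb_a, emb_b):
    """Per-stimulus disagreement: 1 - correlation between the stimulus's
    dissimilarity profile under model A vs. model B (self-distance dropped)."""
    ra, rb = rdm(emb_a), rdm(emb_b)
    n = ra.shape[0]
    scores = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        scores[i] = 1.0 - np.corrcoef(ra[i, mask], rb[i, mask])[0, 1]
    return scores

def select_distinct_stimuli(emb_a, emb_b, k=10):
    """Indices of the k stimuli the two models represent most differently."""
    return np.argsort(disagreement_scores(emb_a, emb_b))[::-1][:k]

# Toy example: 100 candidate sentences embedded by two models of different
# dimensionality (random data standing in for real sentence embeddings).
rng = np.random.default_rng(0)
emb_a = rng.standard_normal((100, 64))
emb_b = rng.standard_normal((100, 32))
picked = select_distinct_stimuli(emb_a, emb_b, k=10)
```

Comparing dissimilarity profiles rather than raw embeddings sidesteps the fact that the two models live in spaces of different dimensionality, which is one standard way (via representational similarity analysis) to compare models without learning a mapping between them.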

License:
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI:
10.32470/CCN.2023.1547-0
Publication:
2023 Conference on Cognitive Computational Neuroscience