CT-4.4

Hyper-HMM: simultaneous temporal and spatial pattern alignment for brains and stimuli

Caroline Lee, Columbia University, Dartmouth College, United States; Jane Han, Ma Feilong, Guo Jiahui, James Haxby, Dartmouth College, United States; Christopher Baldassano, Columbia University, United States

Session: Contributed Talks 4 Lecture
Track: Cognitive science
Location: South Schools / East Schools
Presentation Time: Sun, 27 Aug, 14:15 - 14:30 United Kingdom Time

Abstract:
Naturalistic stimuli evoke complex neural responses that can differ across people in their spatial and temporal properties. Current alignment methods focus on either spatial hyperalignment (assuming exact temporal correspondence) or temporal alignment into corresponding events using a Hidden Markov Model (assuming exact spatial correspondence). Here, we propose a hybrid model, the Hyper-HMM, that simultaneously aligns both temporal and spatial features across brains. The model learns a linear projection from voxels to a reduced-dimension latent space, in which timecourses are segmented into corresponding temporal events. This approach allows us to track each individual's mental trajectory through an event sequence and to align neural responses with other feature spaces, such as stimulus content. Using an fMRI dataset of angular gyrus responses to lecture videos, we demonstrate that the Hyper-HMM can map all participants and the semantic content of the videos into a common low-dimensional space, and that these mappings generalize to a held-out video.
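
To make the two-stage idea concrete, below is a minimal NumPy sketch of an alternating scheme in the spirit of the Hyper-HMM: each iteration estimates shared latent event patterns, refits each participant's voxel-to-latent projection by least squares, and then re-segments the group latent timecourse into ordered events. The function names (fit_hyper_hmm, segment_events), the least-squares update, and the dynamic-programming segmentation (a stand-in for the HMM's forward-backward pass) are illustrative assumptions, not the authors' published implementation.

# Minimal sketch of a Hyper-HMM-style alternating alignment.
# All names and update rules here are illustrative assumptions,
# not the implementation described in the paper.
import numpy as np


def segment_events(latent, patterns):
    # Stand-in for the HMM's temporal step: a dynamic program that assigns
    # each timepoint to one of K ordered, contiguous events so that each
    # timepoint lies close to its event's latent pattern.
    T, K = latent.shape[0], patterns.shape[0]
    ll = -((latent[:, None, :] - patterns[None, :, :]) ** 2).sum(-1)  # (T, K)
    score = np.full((T, K), -np.inf)
    back = np.zeros((T, K), dtype=int)
    score[0, 0] = ll[0, 0]
    for t in range(1, T):
        for k in range(K):
            stay = score[t - 1, k]
            move = score[t - 1, k - 1] if k > 0 else -np.inf
            score[t, k] = ll[t, k] + max(stay, move)
            back[t, k] = k if stay >= move else k - 1
    # Backtrace to recover the event label of every timepoint.
    events = np.zeros(T, dtype=int)
    events[-1] = K - 1
    for t in range(T - 2, -1, -1):
        events[t] = back[t + 1, events[t + 1]]
    return np.searchsorted(events, np.arange(K + 1))  # K + 1 event boundaries


def fit_hyper_hmm(data, n_events, n_dims, n_iter=10, seed=0):
    # data: list of (T, n_voxels_s) arrays, one per participant; voxel counts
    # may differ across participants, but the T timepoints must match.
    rng = np.random.default_rng(seed)
    T = data[0].shape[0]
    W = [rng.standard_normal((d.shape[1], n_dims)) * 0.01 for d in data]
    bounds = np.linspace(0, T, n_events + 1).astype(int)
    for _ in range(n_iter):
        # Shared event patterns: average each event over time and participants.
        latents = [d @ w for d, w in zip(data, W)]
        patterns = np.stack([
            np.mean([lat[bounds[k]:bounds[k + 1]].mean(0) for lat in latents], 0)
            for k in range(n_events)
        ])
        # Spatial step: refit each participant's voxel-to-latent projection so
        # that projected timepoints match their event's shared pattern.
        targets = np.concatenate([
            np.tile(patterns[k], (bounds[k + 1] - bounds[k], 1))
            for k in range(n_events)
        ])
        W = [np.linalg.lstsq(d, targets, rcond=None)[0] for d in data]
        # Temporal step: re-segment the group-average latent timecourse.
        mean_latent = np.mean([d @ w for d, w in zip(data, W)], 0)
        bounds = segment_events(mean_latent, patterns)
    return W, patterns, bounds

Under this sketch, another timecourse (for example, stimulus features from a held-out video) could be aligned to the same space by fitting its own projection against the learned event patterns while keeping those patterns fixed.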

License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: 10.32470/CCN.2023.1355-0
Publication: 2023 Conference on Cognitive Computational Neuroscience