Feature-disentangled reconstruction of perception from multi-unit recordings
Thirza Dado, Radboud University, Netherlands; Paolo Papale, Antonio Lozano, Netherlands Institute for Neuroscience, Netherlands; Lynn Le, Marcel van Gerven, Radboud University, Netherlands; Pieter Roelfsema, Netherlands Institute for Neuroscience, Netherlands; Yağmur Güçlütürk, Umut Güçlü, Radboud University, Netherlands
Session: Posters 2B (Poster)
Presentation Time: Fri, 25 Aug, 13:00 - 15:00 (United Kingdom Time)
Abstract:
Here, we aimed to explain neural representations of perception by analyzing the relationship between multi-unit activity (MUA) recorded from the primate brain and various feature representations of the visual stimuli. Our encoding analysis revealed that the $w$-latent representations of feature-disentangled generative adversarial networks (GANs) were the most effective candidates for predicting neural responses to images. Importantly, the use of synthesized yet photorealistic images afforded superior experimental control, as the underlying latent representations were known a priori rather than approximated post hoc. We leveraged this property to reconstruct the perceived images from neural activity. Together with the fact that the (unsupervised) generative models themselves were never optimized on neural data, these results highlight feature disentanglement and unsupervised training as driving factors in shaping neural representations.
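To make the encoding analysis concrete: it can be approximated as a regularized linear mapping from the $w$-latent of each stimulus to the recorded MUA, scored per recording site. The sketch below is illustrative only; the array names, shapes, synthetic data, and the choice of ridge regression are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from scipy.stats import pearsonr

# Hypothetical data (assumptions, not from the abstract):
#   W   - (n_stimuli, 512) w-latents that generated each stimulus image
#   MUA - (n_stimuli, n_sites) multi-unit activity per recording site
rng = np.random.default_rng(0)
W = rng.standard_normal((1000, 512))
MUA = rng.standard_normal((1000, 128))

# Split stimuli into train/test sets.
W_tr, W_te = W[:800], W[800:]
Y_tr, Y_te = MUA[:800], MUA[800:]

# Linear encoding model: predict each site's response from the w-latent.
enc = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(W_tr, Y_tr)
Y_hat = enc.predict(W_te)

# Encoding performance: Pearson correlation per recording site.
r = np.array([pearsonr(Y_te[:, i], Y_hat[:, i])[0]
              for i in range(Y_te.shape[1])])
print(f"median encoding correlation across sites: {np.median(r):.3f}")
```

In this framing, "most effective candidate" means the feature space (here, $w$-latents, versus alternatives such as intermediate CNN activations) whose linear encoding model yields the highest held-out correlations.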
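Reconstruction runs the same mapping in the opposite direction: decode the $w$-latent from MUA, then push the decoded latent through the frozen, pretrained generator. This too is a hedged sketch; the `synthesize` placeholder stands in for a real StyleGAN synthesis network (e.g., `G.synthesis(ws)` in NVIDIA's stylegan2-ada-pytorch, an assumed interface), and the ridge decoder is likewise an assumption.

```python
import numpy as np
import torch
from sklearn.linear_model import RidgeCV

# Reusing the hypothetical arrays from the encoding sketch above.
rng = np.random.default_rng(0)
W = rng.standard_normal((1000, 512))
MUA = rng.standard_normal((1000, 128))

# Decoding model: map MUA back to the w-latent of the perceived image.
dec = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(MUA[:800], W[:800])
w_hat = torch.from_numpy(dec.predict(MUA[800:])).float()

def synthesize(w: torch.Tensor) -> torch.Tensor:
    """Placeholder for a frozen, pretrained StyleGAN generator (assumed
    interface: G.synthesis(ws) with ws of shape (batch, num_ws, 512))."""
    num_ws = 14                                  # broadcast one w to all layers
    ws = w.unsqueeze(1).repeat(1, num_ws, 1)
    # return G.synthesis(ws)                     # real call with loaded weights
    return torch.zeros(ws.shape[0], 3, 256, 256) # dummy stand-in

recons = synthesize(w_hat)                       # reconstructed images (N, 3, H, W)
print(recons.shape)
```

Note that because the stimuli were themselves synthesized from known latents, the decoder's training targets are exact rather than approximated post hoc, which is the source of control the abstract emphasizes.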