Multimodal units extract comodulated information
Marcus Ghosh, Sorbonne Université, France; Gabriel Bena, Nicolas Perez-Nieves, Imperial College London, United Kingdom; Volker Bormuth, Sorbonne Université, France; Dan F. M. Goodman, Imperial College London, United Kingdom
Session:
Posters 3B (Poster)
Presentation Time:
Sat, 26 Aug, 13:00 - 15:00 United Kingdom Time
Abstract:
We continuously detect sensory data, like sights and sounds, and use this information to guide our behaviour. However, rather than relying on single sensory channels, which are noisy and can be ambiguous alone, we merge information across our senses and leverage this combined signal. In biological networks, this process (multisensory integration) is implemented by multimodal neurons, which are thought to receive the information accumulated by unimodal areas and to fuse it across channels, an algorithm we term accumulate-then-fuse. However, is implementing this algorithm their main function? Here, we explore this question by developing novel multimodal tasks and deploying probabilistic and spiking neural network models. Using these models, we demonstrate that multimodal units are not necessary for accuracy, or for balancing speed and accuracy, in classical multimodal tasks, but are critical in a novel set of tasks in which we comodulate signals across channels. We show that these comodulation tasks require multimodal units to implement an alternative fuse-then-accumulate algorithm, and we demonstrate that this algorithm excels in naturalistic settings such as predator-prey interactions. Ultimately, our work suggests that multimodal neurons are critical for extracting comodulated information, and it provides novel tasks and models for exploring this in biological systems.
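The abstract contrasts the two integration schemes only conceptually. The sketch below illustrates the distinction on a toy two-channel task in which the class label is carried solely by comodulation (shared moment-to-moment fluctuations across channels) while per-channel means are uninformative. The task construction, the multiplicative fusion rule, and the threshold readout are illustrative assumptions made here for clarity; they are not the probabilistic or spiking network models, nor the exact task definitions, used in the work.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_trial(comodulated, T=200):
    """Toy trial: two noisy channels whose label is carried only by whether
    their moment-to-moment fluctuations are shared (comodulated).
    Per-channel statistics are matched across classes, so integrating each
    channel on its own carries no label information. (Illustrative only.)"""
    shared = rng.standard_normal(T)
    if comodulated:
        a = shared + 0.5 * rng.standard_normal(T)
        b = shared + 0.5 * rng.standard_normal(T)
    else:
        a = rng.standard_normal(T)
        b = rng.standard_normal(T)
    return a, b

def accumulate_then_fuse(a, b):
    # Each channel is integrated first; fusion only ever sees the totals.
    return a.sum() + b.sum()

def fuse_then_accumulate(a, b):
    # Channels are combined at every time step (here multiplicatively),
    # so co-fluctuations survive; the fused signal is then integrated.
    return (a * b).sum()

for name, readout in [("accumulate-then-fuse", accumulate_then_fuse),
                      ("fuse-then-accumulate", fuse_then_accumulate)]:
    pos = np.array([readout(*make_trial(True)) for _ in range(500)])
    neg = np.array([readout(*make_trial(False)) for _ in range(500)])
    # Simple threshold readout at the midpoint of the two class means.
    thr = 0.5 * (pos.mean() + neg.mean())
    acc = 0.5 * ((pos > thr).mean() + (neg <= thr).mean())
    print(f"{name}: accuracy ~ {acc:.2f}")
```

On this toy task, accumulating each channel before fusing averages away the co-fluctuations and performs near chance, whereas fusing at each time step before accumulating separates the two classes almost perfectly, which is the intuition behind the comodulation tasks described above.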