Keynote Talks

Catherine Hartley

Developing behavioral flexibility

Throughout our lives, we rapidly acquire knowledge through experience. This knowledge is structured — it reflects regularities in our environments such as sequential relations between events, contingencies between actions and outcomes, and similarities across contexts. Across development, we exploit this structure to support the flexible pursuit of valued outcomes. In this talk, I will present studies examining at the cognitive, neural, and computational levels how the learning, memory, and decision-making processes that support or constrain adaptive behavioral flexibility change over the course of development from childhood to adulthood. I will show that development confers marked changes in the cognitive representations engaged during learning and discuss how these changes may optimize behavior for an individual’s developmental stage.

Helen Barron

Building cognitive maps during periods of rest and sleep

Every day we make decisions critical for adaptation and survival. We repeat actions with known consequences. But we can also link loosely related events to infer and imagine the outcomes of entirely novel choices. In the first part of the talk, I will show that during successful inference, the mammalian brain uses a hippocampal prospective code to forecast temporally structured learned associations. Moreover, during periods of rest, co-activation of hippocampal cells in sharp-wave/ripples represents inferred relationships that include reward, thereby “joining the dots” between events that have not been observed together but lead to profitable outcomes. Computing mnemonic links in this manner may provide an important mechanism for inferring new relationships. In the second part of the talk, I will test this hypothesis and show that, in human participants, performance on an inference task improves when memory reactivation is facilitated during periods of rest. Finally, I will discuss how neural activity across different brain regions may coordinate during periods of rest and sleep to build a cognitive map that extends beyond direct experience.

James McClelland, Stanford University and Google DeepMind

Capturing Advanced Human Cognitive Abilities in Deep Neural Networks

How can artificial neural networks capture the advanced cognitive abilities of pioneering scientists? In light of the achievements of current large language models, I will argue that these systems show some promise and are quite human-like in some ways. That said, they fall short in several respects, and there are clear differences between these systems and biological brains. What innovations will be needed to go further? Beyond better multi-modal grounding and the ability to produce actions as well as language, I will argue that artificial networks will benefit from extended efforts to encourage them to exploit human-invented tools of thought and human-like ways of using them. I will also argue that they will benefit from engaging in explicit goal-directed problem solving, as exemplified in the activities of scientists and mathematicians and as taught in advanced educational settings. I will end by pointing toward ways of understanding how models more consistent with the properties of the biological neural networks in human brains might someday capture the same advanced human cognitive abilities.

Tim Kietzmann

Reports from the Neuroconnectionism frontier: topographies and semantics

Originating from the connectionist movement of cognitive science, deep neural networks (DNNs) have had tremendous influence on artificial intelligence, operating at the core of today’s most powerful applications. At the same time, cognitive computational neuroscientists have recognised their promise to act as “Goldilocks” models of brain function: DNNs are grounded in sensory data, can be trained to perform complex tasks in a distributed fashion, are fully configurable and accessible to the experimenter, and can be mapped to brain function across various levels of explanation. This has led to a fruitful research cycle in which biological aspects are integrated into network design, and the corresponding networks are then tested for their ability to predict neural and behavioural data. This talk will present this emerging approach, which we call neuroconnectionism, as a cohesive large-scale research programme centered around artificial neural networks as a computational language for expressing falsifiable theories about brain computation. I will describe the core of the programme, as well as the underlying rationale and tools, before focusing on two recent streams of investigation that my lab is involved in. First, I will discuss a collaborative effort that uses linguistic deep neural network embeddings as models of visual processing. We show, based on recurrent neural network models, that transforming visual inputs into semantic scene descriptions may be a defining characteristic of the visual system. Second, I will describe our development of end-to-end topographic neural networks that outperform convolutional architectures as models of both cortical map formation and spatial biases in human visual behaviour. Together with the many exciting developments by the community, these results indicate that the neuroconnectionism research programme is highly progressive, generating new and otherwise unreachable insights into the inner workings of the brain.

Leslie Pack Kaelbling

Doing for our robots what nature did for us

We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in "the factory" (that is, at engineering time) and in "the wild" (that is, when the robot is delivered to a customer). I will share some general thoughts about strategies for robot software design that combine machine learning with insights from natural intelligence and from classical engineering design. I will describe several research projects, both in the design of an overall architecture for an intelligent robot and in strategies for learning to integrate new skills into the repertoire of an already competent robot.

Stanislas Dehaene

Understanding the neural code for human symbols and languages: A challenge for cognitive neuroscience

Human cognition differs markedly from that of other animals. A unique feature of our species is natural language, but in this talk, I will argue that a competence for symbols and languages drives many other cognitive domains, such as our unique abilities for geometry, mathematics, or music. Even the mere perception of a square or a zig-zag is driven by minimal description length (MDL) and thus involves a search for the shortest “mental program” that captures the observed data in an internal “language of geometry”. Behavioral and brain-imaging experiments indicate that the perception of geometric shapes is poorly captured by current convolutional neural network models of the ventral visual pathway, but instead involves a symbolic geometrical description within the dorsal parieto-prefrontal network. I will argue that existing connectionist models do not suffice to account for even elementary human perceptual data, and that the neural codes for symbols and syntax remain to be discovered.