Understanding Learning Trajectories With Infinite Hidden Markov Models
Sebastian Bruijns, Max-Planck-Institute Tübingen, Germany; International Brain Laboratory, Germany; Peter Dayan, Max-Planck-Institute Tübingen, Germany
Session:
Posters 2B Poster
Presentation Time:
Fri, 25 Aug, 13:00 - 15:00 United Kingdom Time
Abstract:
Learning the contingencies of a new experiment is not an easy task for animals. Individuals learn in an idiosyncratic manner, revising their strategies multiple times as they are shaped, or shape themselves. Long-run learning is therefore a tantalizing target for the sort of quantitatively individualized characterization that sophisticated modelling can provide. However, any such model requires a highly flexible and extensible structure that can capture radically new behaviours as well as slow adaptations in existing ones. Here, we suggest a dynamic input-output infinite hidden Markov model whose latent states are associated with specific, slowly adapting behavioural patterns. The model includes a countably infinite number of potential states, so it can describe new behaviour by introducing additional states, while its dynamics allow it to capture adaptations of existing behaviours. We fit this model to the choices of mice as they learn a contrast detection task over around 10,000 trials each, spread across multiple sessions. We identify three types of behavioural state that demarcate essential steps in the learning of our task for virtually all mice. Our approach provides in-depth insight into the process of animal learning and offers potentially valuable predictors for analyzing neural data.
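To make the model class concrete, below is a minimal generative sketch of a truncated input-output hidden Markov model with a stick-breaking prior over states, written in Python/NumPy. It illustrates the general construction only and is not the authors' implementation; the truncation level, concentration parameters, and the logistic per-state choice policy are assumptions made purely for the example.

```python
# Minimal generative sketch of an input-output HDP-HMM with truncated stick-breaking.
# Illustrative only: NOT the authors' implementation; K_MAX, GAMMA, ALPHA and the
# logistic choice policy are assumptions for this example.
import numpy as np

rng = np.random.default_rng(0)

K_MAX = 20                 # truncation of the "countably infinite" state set (assumption)
GAMMA, ALPHA = 3.0, 5.0    # concentration parameters of the hierarchical prior (assumed)
N_TRIALS = 500

# Global stick-breaking weights over states: beta ~ GEM(GAMMA), truncated to K_MAX sticks
sticks = rng.beta(1.0, GAMMA, size=K_MAX)
beta = sticks * np.concatenate(([1.0], np.cumprod(1.0 - sticks[:-1])))

# Each row of the transition matrix is a Dirichlet draw centred on the shared beta
pi = np.array([rng.dirichlet(ALPHA * beta + 1e-6) for _ in range(K_MAX)])

# Each state carries its own choice policy: P(rightward choice | signed contrast),
# modelled here as a logistic curve with a state-specific bias and slope (assumption)
bias = rng.normal(0.0, 1.0, size=K_MAX)
slope = rng.normal(2.0, 0.5, size=K_MAX)

contrasts = rng.choice([-1.0, -0.25, 0.0, 0.25, 1.0], size=N_TRIALS)  # task inputs
states = np.empty(N_TRIALS, dtype=int)
choices = np.empty(N_TRIALS, dtype=int)

z = 0
for t in range(N_TRIALS):
    z = rng.choice(K_MAX, p=pi[z])                       # latent behavioural state
    p_right = 1.0 / (1.0 + np.exp(-(bias[z] + slope[z] * contrasts[t])))
    states[t], choices[t] = z, rng.binomial(1, p_right)  # observed choice

print("states visited:", np.unique(states))
```

In the full infinite model the truncation is removed, so new states can be instantiated on demand during inference, and the per-state policies drift slowly across sessions rather than staying fixed as they do in this sketch.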