Dimensions That Matter – Interpretable Object Dimensions in Humans and Deep Neural Networks
Florian Mahner, Max Planck Institute for Human Cognitive and Brain Sciences, Germany; Lukas Muttenthaler, TU Berlin, Germany; Umut Güçlü, Donders Institute for Brain, Cognition and Behaviour, Netherlands; Martin Hebart, Max Planck Institute for Human Cognitive and Brain Sciences, Germany
Session:
Posters 2B (Poster)
Presentation Time:
Fri, 25 Aug, 13:00 - 15:00 United Kingdom Time
Abstract:
How do minds and machines represent objects? This question has sparked continued interest in the interconnected fields of cognitive neuroscience and artificial intelligence. Here we address it by introducing a novel approach that allows us to compare human and deep neural network (DNN) representations through an interpretable embedding. We achieve this by treating the DNN as an in-silico human observer and asking it to rate the similarities between objects in a triplet task. We (i) find that DNN representations capture meaningful object properties, (ii) demonstrate with multiple in-silico tests that the DNN contains conceptual and perceptual representations, including shape, and (iii) identify similarities and differences in the representational content of humans and DNNs.
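To illustrate the triplet task described above, here is a minimal sketch of how an odd-one-out judgment might be read out from DNN features. The dot-product similarity, the function name, and the assumption that features arrive as a (3, d) array are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def odd_one_out(features: np.ndarray) -> int:
    """Pick the odd one out of a triplet of objects.

    features: (3, d) array, one DNN feature vector per object.
    The two objects with the highest pairwise similarity form a pair;
    the remaining object is returned as the odd one out.
    Note: dot-product similarity is an illustrative assumption here.
    """
    sims = features @ features.T          # 3x3 pairwise similarities
    pairs = [(0, 1), (0, 2), (1, 2)]
    i, j = max(pairs, key=lambda p: sims[p])  # most similar pair
    return ({0, 1, 2} - {i, j}).pop()         # the excluded object

# Hypothetical usage: three feature vectors extracted from a DNN
triplet = np.random.default_rng(0).normal(size=(3, 512))
print(odd_one_out(triplet))
```

Repeating such judgments over many triplets yields a similarity structure that can be embedded into interpretable dimensions, analogous to collecting odd-one-out choices from human observers.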