Latent space decomposition supports efficient less-than-one-shot learning
Ilia Sucholutsky, Thomas L. Griffiths, Princeton University, United States
Session: Posters 2B (Poster)
Presentation Time: Fri, 25 Aug, 13:00 - 15:00 (United Kingdom Time)
Abstract:
Recent evidence suggests that humans may use soft labels, which describe how an object relates to every category, to learn categories from less than one example per class. Prior work also suggests that humans learn specialized features or categories for recognizing stimuli such as faces and written words. But if all visual stimuli were always represented using dense soft labels in a shared space, then we would need to store a value for every specialized category for every stimulus, even when those categories are irrelevant for recognizing that stimulus. How, then, can people remain highly efficient when representing concepts that require different specialized features in a shared space? We propose that decomposing the latent space into lower-dimensional subspaces, each corresponding to a cluster of categories, can greatly reduce the number of parameters required to learn or represent a set of categories. Our theoretical and simulation results with this type of latent space decomposition offer a way to resolve this tension between efficiency and specialization.
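As a rough illustration of the parameter savings the abstract alludes to, here is a minimal counting sketch (not the authors' code). It assumes equal-sized clusters of categories and that, under the proposed decomposition, a stimulus stores soft-label values only for the categories in its own cluster plus one cluster-assignment index; the cluster sizes and counts below are made-up numbers for illustration.

```python
# Parameter-counting sketch: dense soft labels in one shared space vs. a
# latent space decomposed into per-cluster subspaces. All quantities here
# are illustrative assumptions, not results from the paper.

def dense_params(n_stimuli: int, n_categories: int) -> int:
    """Shared space: every stimulus stores a soft-label value for every
    category, including specialized categories irrelevant to it."""
    return n_stimuli * n_categories

def decomposed_params(n_stimuli: int, n_categories: int, n_clusters: int) -> int:
    """Decomposed space: each stimulus stores one cluster assignment plus
    soft-label values only for the categories inside its cluster
    (clusters assumed equal-sized)."""
    categories_per_cluster = n_categories // n_clusters
    return n_stimuli * (1 + categories_per_cluster)

if __name__ == "__main__":
    n_stimuli, n_categories, n_clusters = 10_000, 1_000, 50
    print("dense soft labels: ", dense_params(n_stimuli, n_categories))        # 10,000,000
    print("decomposed labels: ", decomposed_params(n_stimuli, n_categories,
                                                   n_clusters))                # 210,000
```

In this toy setting the decomposition cuts storage by roughly a factor of the cluster size, which is the kind of efficiency gain the abstract argues can coexist with specialized category clusters.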