Transfer of abstract structural knowledge aids new concept learning in humans and artificial neural networks
Robert Mok, Danyal Akarca, Alexander Anwyl-Irvine, John Duncan, University of Cambridge, United Kingdom; Bradley Love, University College London, United Kingdom
Session:
Posters 1B (Poster)
Presentation Time:
Thu, 24 Aug, 17:00 - 19:00 (United Kingdom Time)
Abstract:
Can humans transfer purely abstract structural knowledge of existing concepts to aid learning of new concepts? We tested participants on two concept-learning tasks that either shared abstract structure (rule-plus-exception in both tasks) or did not (exclusive-or followed by rule-plus-exception), with zero overlap in sensory content (object vs. room stimuli). The experimental group discovered the abstract structure, which sped up learning in the second task relative to controls. We then explored whether neural networks could provide insight into the mechanisms of abstract transfer. Networks with three hidden layers, but not those with one or two, showed transfer: they applied representations learned in the first task to the second, speeding up learning. Specifically, when tasks shared structure, networks exhibited more similar unit activation patterns (assessed with representational similarity analysis) and more strongly correlated input-to-hidden-layer node communicability (a graph-theoretic measure), suggesting reuse of weights and therefore of knowledge. Although it is far from obvious that purely structural transfer to new concept learning is possible in neural networks or in humans, our results suggest that this human ability can arise through standard error-driven learning, with neural networks naturally applying previously learned internal representations to aid new learning.
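
As a rough illustration of the two network analyses named in the abstract, the sketch below computes (i) a representational-similarity correlation between hidden-unit activation patterns from the two tasks and (ii) input-to-hidden-layer node communicability from a weight matrix. This is a minimal NumPy/SciPy sketch under assumed conventions (Pearson-distance dissimilarity matrices compared by Spearman rank correlation; degree-normalised communicability via the matrix exponential, following Crofts & Higham, 2009), not the authors' exact pipeline; all function and variable names are hypothetical.

import numpy as np
from scipy.linalg import expm
from scipy.stats import spearmanr


def rsa_correlation(acts_task1, acts_task2):
    """Correlate representational geometry across two tasks.

    acts_task1, acts_task2: (n_stimuli, n_units) hidden-unit activations,
    with stimulus sets assumed matched in size across tasks.
    """
    # Representational dissimilarity matrices (1 - Pearson r over units)
    rdm1 = 1.0 - np.corrcoef(acts_task1)
    rdm2 = 1.0 - np.corrcoef(acts_task2)
    # Compare the upper triangles with a rank correlation
    iu = np.triu_indices_from(rdm1, k=1)
    rho, _ = spearmanr(rdm1[iu], rdm2[iu])
    return rho


def input_hidden_communicability(w_ih):
    """Communicability between input and first-hidden-layer nodes.

    w_ih: (n_inputs, n_hidden) weight matrix. The layer is treated as a
    bipartite graph; communicability is the matrix exponential of its
    degree-normalised adjacency matrix.
    """
    n_in, n_hid = w_ih.shape
    w = np.abs(w_ih)  # weight magnitudes as connection strengths
    # Bipartite adjacency over input + hidden nodes
    adj = np.zeros((n_in + n_hid, n_in + n_hid))
    adj[:n_in, n_in:] = w
    adj[n_in:, :n_in] = w.T
    # Degree normalisation keeps strong weights from dominating
    deg = adj.sum(axis=1)
    deg[deg == 0] = 1.0
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    comm = expm(d_inv_sqrt @ adj @ d_inv_sqrt)
    return comm[:n_in, n_in:]  # input-to-hidden block


# Toy usage on random (hypothetical) activations and weights
rng = np.random.default_rng(0)
acts1, acts2 = rng.normal(size=(8, 20)), rng.normal(size=(8, 20))
w1, w2 = rng.normal(size=(6, 20)), rng.normal(size=(6, 20))

print("RSA correlation:", rsa_correlation(acts1, acts2))
comm_corr, _ = spearmanr(input_hidden_communicability(w1).ravel(),
                         input_hidden_communicability(w2).ravel())
print("Communicability correlation:", comm_corr)

On this reading, higher values of both measures for structure-sharing task pairs would indicate that the network is reusing its learned weights, and hence its abstract knowledge, across tasks.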