Feedforward Neural Networks can capture Human-like Perceptual and Behavioral Signatures of Contour Integration
Fenil R. Doshi, Talia Konkle, George A. Alvarez, Harvard University, United States
Session: Posters 1B (Poster)
Presentation Time: Thu, 24 Aug, 17:00 - 19:00 (United Kingdom Time)
Abstract:
Contour integration is the process of linking local edge elements into a unified perceptual representation of a complete contour, and may thus serve as a critical precursor representation needed to extract the global shape information supporting object recognition. Many mechanisms have been proposed for such a feature-linking process (Field, Hayes & Hess, 1993; Kellman & Shipley, 1991), including long-range lateral interactions (Bosking et al., 1997), temporally synchronized cortical oscillations (Engel, Konig & Singer, 1991), and top-down feedback connections (Kim et al., 2019). In this study, we test the alternative possibility that feedforward, nonlinear convolutional neural networks can perform contour integration without lateral connections, recurrence, or top-down feedback. We find that such a feedforward system exhibits sensitivities to global and local contour curvature comparable to humans, but it requires two critical inductive biases to do so: visual experience with relatively straight, smooth contours, and an architectural constraint of increasing receptive field size. Through this approach, we provide computational support for the hypothesis that a hierarchical feedforward visual processor can develop and leverage Gestalt-like laws of "good continuation" to detect extended contours in a manner consistent with human perception.
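The abstract does not specify the network architecture, but the "increasing receptive field size" constraint can be illustrated with a standard receptive-field calculation for a stack of convolutional layers. The sketch below is a generic illustration, not the authors' model: it shows how the region of the input visible to a single unit grows with depth, and how downsampling (stride) accelerates that growth so that deeper units can span an extended contour.

```python
# Minimal sketch: receptive-field growth in a feedforward convolutional stack.
# Hypothetical layer specs; the actual model in the study is not given here.
def receptive_fields(layers):
    """layers: list of (kernel_size, stride) pairs, input-to-output order.
    Returns the receptive field size (in input pixels) after each layer."""
    rf, jump, sizes = 1, 1, []
    for kernel, stride in layers:
        rf += (kernel - 1) * jump  # each layer widens the RF by (k-1) input steps
        jump *= stride             # stride compounds the spacing between units
        sizes.append(rf)
    return sizes

# Three 3x3 convolutions: stride 1 grows the receptive field linearly,
# while stride 2 grows it roughly geometrically with depth.
print(receptive_fields([(3, 1), (3, 1), (3, 1)]))  # [3, 5, 7]
print(receptive_fields([(3, 2), (3, 2), (3, 2)]))  # [3, 7, 15]
```

Under this view, only the deeper layers of a strided stack have receptive fields large enough to cover an extended contour, which is one way a purely feedforward hierarchy could integrate local edge elements into a global percept.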