Stable, individuating patterns of visual attention in abstract conceptual feature space revealed using natural language model
Amanda J. Haskins, Katherine O. Packard, Caroline E. Robertson, Dartmouth College, United States
Session:
Posters 1B Poster
Presentation Time:
Thu, 24 Aug, 17:00 - 19:00 United Kingdom Time
Abstract:
No two individuals’ gaze patterns during naturalistic viewing are identical: each of us prioritizes different features when exploring a real-world environment. Yet a key question remains unanswered: what representational space structures these individual differences? We hypothesized that conceptual-level features (e.g., “flirting”, “for sale”), rather than object/categorical-level features (e.g., “face”, “chair”), explain gaze differences, and that conceptual priorities are trait-like rather than state-like. Here, we developed a novel approach for contrasting the abstract conceptual and object/categorical information present in real-world scenes, combining an eyetracking analysis with computational language and vision models. We measured participants’ naturalistic attention while they actively explored real-world environments in virtual reality (N = 62; N = 29 repeat participants). For each participant, we modeled the relationship between their attentional patterns and two feature spaces: 1) an abstract conceptual space and 2) a visual categorical space. In brief, we find evidence for stable, individuating patterns of attention (i.e., “attentional fingerprints”) in the conceptual feature space. Critically, however, gaze patterns do not simply reflect object/categorical-level priorities: gaze patterns in the feature space of a vision model cannot be used to individuate participants.
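The individuation logic described above, matching each repeat session to its owner by similarity of per-participant feature weights, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' actual pipeline: the weight vectors, noise level, and correlation-based matching are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one feature-weight vector per participant per session.
# Rows = participants; columns = weights over conceptual features (e.g., from
# regressing gaze behavior onto language-model embeddings of scene content).
n_participants, n_features = 29, 50
session1 = rng.normal(size=(n_participants, n_features))
session2 = session1 + rng.normal(scale=0.5, size=session1.shape)  # noisy repeat

def identification_accuracy(a, b):
    """Match each session-b vector to its most correlated session-a vector."""
    a_z = (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)
    b_z = (b - b.mean(axis=1, keepdims=True)) / b.std(axis=1, keepdims=True)
    corr = b_z @ a_z.T / a.shape[1]           # pairwise Pearson correlations
    predicted = corr.argmax(axis=1)           # best-matching session-a row
    return (predicted == np.arange(len(b))).mean()

acc = identification_accuracy(session1, session2)
print(f"identification accuracy: {acc:.2f}")  # chance level = 1/29 ≈ 0.03
```

On the abstract's account, weights derived from a conceptual (language-model) feature space would yield above-chance identification, while weights derived from a vision-model feature space would not.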