Illusions of Confidence in Artificial Systems
Clara Colombatto, Steve Fleming, University College London, United Kingdom
Session:
Posters 3B (Poster)
Presentation Time:
Sat, 26 Aug, 13:00 - 15:00 United Kingdom Time
Abstract:
To communicate and collaborate effectively with others, it is important that we monitor not just others’ cognitive states (e.g., what someone believes), but also their metacognitive states (e.g., how confident they are in that belief). These inferences are central not just in interactions with other humans but also when working with artificial agents. Here we explore how humans infer the confidence of other humans and machines. Participants observed another agent make a series of choices of varying difficulty and later reported how confident they thought the agent was in each choice. Across several experiments, inferences of confidence were sensitive to variables such as task difficulty, observed accuracy and response time. Strikingly, participants inferred higher confidence in the decisions of artificial agents compared to other humans, even though their behaviour was in fact identical. These effects generalised across behavioural profiles, agent descriptions and decision domains. Overall, these results uncover a rich capacity for metacognitive inference and reveal systematic illusions of confidence in machine decisions.