Abstract
Robots are becoming part of our social landscape. Because nonverbal cues make social interactions between humans and robots more efficient, expressive robot motions should be intuitive to understand. This study measures mental effort to investigate which factors influence the intuitive understanding of expressive nonverbal robot motions. Fifty participants watched eighteen short video clips of three different robot types performing expressive behaviors while their pupil response and gaze were measured with an eye tracker. Our findings indicate that the robot's appearance, the viewing angle, and the expression shown all influence cognitive load and may therefore influence the intuitive understanding of expressive robot behavior. Furthermore, we found differences in fixation time on different features of the different robots. With these insights, we identify possible directions for making interactions between humans and robots more efficient and intuitive.
Original language | English
---|---
Pages (from-to) | 474-484
Journal | IEEE Transactions on Cognitive and Developmental Systems
Volume | 16
Issue number | 2
Publication status | Published - 1 Apr 2024
Externally published | Yes
Funding
This work was supported in part by the Predictive and Intuitive Robot Companion (PIRC) project through The Research Council of Norway (RCN) under Grant 312333; in part by the Vulnerability in the Robot Society (VIROS) Project under Grant 288285; and in part by RITMO through its Centre of Excellence scheme under Project 262762.
Funders | Funder number
---|---
Norges forskningsråd | 312333
VIROS | 288285
RITMO | 262762