Mobile Service Robot State Revealing Through Expressive Lights: Formalism, Design, and Evaluation

Research output: Contribution to JournalArticleAcademicpeer-review

Abstract

© 2017, Springer Science+Business Media B.V.

We consider mobile service robots that carry out tasks with, for, and around humans in their environments. Speech combined with on-screen display is a common mechanism for autonomous robots to communicate with humans, but such communication modalities may fail for mobile robots due to spatio-temporal limitations. To enable a better human understanding of the robot given its mobility and autonomous task performance, we introduce the use of lights to reveal the dynamic robot state. We contribute expressive lights as a primary modality for the robot to communicate useful robot state information to humans. Such lights are persistent, non-invasive, and visible at a distance, unlike other existing modalities. Current programmable light arrays provide a very large animation space, which we address by introducing a finite set of parametrized signal shapes while still maintaining the needed animation design flexibility. We present a formalism for light animation control and an architecture to map the representation of robot state to the parametrized light animation space. The mapping generalizes to multiple light strips and even other expression modalities. We demonstrate our approach on CoBot, a mobile multi-floor service robot, and evaluate its validity through several user studies. Our results show that carefully designed expressive lights on a mobile robot help humans better understand robot states and actions and can have a desirable impact on collaborative human–robot behavior.
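The core idea of a finite set of parametrized signal shapes mapped from robot states can be sketched as follows. This is a minimal illustration, not the paper's actual formalism: the shape names, parameter sets, and robot states below are assumptions chosen for the example.

```python
import math

def signal(shape, t, period=1.0):
    """Return brightness in [0, 1] for a periodic signal shape at time t.

    The shape vocabulary here (blink, pulse, solid) is illustrative; the
    paper defines its own finite set of parametrized signal shapes.
    """
    phase = (t % period) / period
    if shape == "blink":   # square wave: on for the first half of the period
        return 1.0 if phase < 0.5 else 0.0
    if shape == "pulse":   # smooth sinusoidal "breathing"
        return 0.5 * (1.0 - math.cos(2 * math.pi * phase))
    if shape == "solid":   # constant on
        return 1.0
    raise ValueError(f"unknown shape: {shape}")

# Hypothetical mapping from robot states to animation parameters; the actual
# state representation and mapping architecture are defined in the paper.
STATE_TO_ANIMATION = {
    "waiting_for_door": {"shape": "blink", "period": 1.0, "color": (255, 255, 0)},
    "navigating":       {"shape": "pulse", "period": 2.0, "color": (0, 0, 255)},
    "task_complete":    {"shape": "solid", "period": 1.0, "color": (0, 255, 0)},
}

def frame(state, t):
    """Compute the RGB color a light strip would display for a state at time t."""
    anim = STATE_TO_ANIMATION[state]
    brightness = signal(anim["shape"], t, anim["period"])
    return tuple(int(brightness * c) for c in anim["color"])
```

Because each animation is fully described by a small parameter tuple (shape, period, color), the same mapping can drive multiple light strips, or other expression modalities, by reinterpreting the parameters per output device.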
Original language: English
Pages (from-to): 65–92
Journal: International Journal of Social Robotics
Volume: 10
Issue number: 1
Publication status: Published - 1 Jan 2018
Externally published: Yes

Funding

This research was partially supported by the FCT INSIDE ERI grant, FLT Grant Number 2015-143894, NSF Grant Number IIS-1012733, and ONR Grant N00014-09-1-1031. The views and conclusions contained in this document are those of the authors only. The authors declare that they have no conflict of interest. We would like to thank Ana Paiva and Stephanie Rosenthal for their guidance on the user studies, as well as Joydeep Biswas and Richard Wang for their development and maintenance of the autonomous CoBot robots.

Funders and funder numbers:
National Science Foundation: IIS-1012733
Office of Naval Research: N00014-09-1-1031
Fundação para a Ciência e a Tecnologia: 2015-143894
Fundació Catalana de Trasplantament
