How Should an AI Trust its Human Teammates? Exploring Possible Cues of Artificial Trust

Carolina Centeio Jorge, Catholijn M. Jonker, Myrthe L. Tielman

Research output: Contribution to Journal › Article › Academic › peer-review

Abstract

In teams composed of humans, we use trust in others to make decisions, such as what to do next, whom to help, and whom to ask for help. When a team member is artificial, it should also be able to assess whether a human teammate is trustworthy for a certain task. We see trustworthiness as the combination of (1) whether someone will do a task and (2) whether they can do it. With building beliefs about trustworthiness as the ultimate goal, we explore which internal factors (krypta) of the human may play a role in determining trustworthiness (e.g., ability, benevolence, and integrity), according to the existing literature. Furthermore, we investigate which observable metrics (manifesta) an agent may take into account as cues for the human teammate's krypta in an online 2D grid-world experiment (n = 54). Results suggest that cues of ability, benevolence, and integrity influence trustworthiness. However, we observed that trustworthiness is mainly influenced by the human's playing strategy and cost-benefit analysis, which deserves further investigation. This is a first step towards building informed beliefs about human trustworthiness in human-AI teamwork.
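The abstract's central mechanism, inferring hidden krypta (ability, benevolence, integrity) from observable manifesta and combining them into a trustworthiness belief, can be illustrated with a minimal sketch. This is not the authors' actual model: the cue names, the exponential-average update, and the "will do × can do" combination rule are all illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical observable cues (manifesta) an agent might log in a
# grid-world task; names and semantics are illustrative, not from the paper.
@dataclass
class Observation:
    task_success: float      # in [0, 1]: how well the task went (ability cue)
    helped_when_asked: bool  # responded to a help request (benevolence cue)
    kept_commitment: bool    # did what they said they would (integrity cue)

@dataclass
class TrustworthinessBelief:
    """Running estimates of hidden krypta, updated from observed manifesta."""
    ability: float = 0.5
    benevolence: float = 0.5
    integrity: float = 0.5
    lr: float = 0.2  # learning rate for the (assumed) exponential update

    def update(self, obs: Observation) -> None:
        # Move each krypta estimate toward the latest matching cue.
        self.ability += self.lr * (obs.task_success - self.ability)
        self.benevolence += self.lr * (float(obs.helped_when_asked) - self.benevolence)
        self.integrity += self.lr * (float(obs.kept_commitment) - self.integrity)

    def trustworthiness(self) -> float:
        # "Will do" ~ benevolence and integrity; "can do" ~ ability.
        # Multiplying assumes both components must be high for high trust.
        will_do = (self.benevolence + self.integrity) / 2
        return will_do * self.ability

if __name__ == "__main__":
    belief = TrustworthinessBelief()
    for obs in [Observation(0.9, True, True),
                Observation(0.7, True, False),
                Observation(0.8, False, True)]:
        belief.update(obs)
    print(f"trustworthiness estimate: {belief.trustworthiness():.2f}")
```

A task-specific variant would weight the krypta differently per task, which is closer to the paper's framing of trustworthiness "for a certain task".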
Original language: English
Article number: 5
Pages (from-to): 1-26
Number of pages: 26
Journal: ACM Transactions on Interactive Intelligent Systems
Volume: 14
Issue number: 1
Early online date: 9 Jan 2024
DOIs
Publication status: Published - Mar 2024
Externally published: Yes

Funding

This material is supported by the Delft AI Initiative and by the TAILOR Connectivity Fund. It is also based upon work supported by the National Science Foundation (NWO) under Grant No. 1136993, and by the European Commission funded project "Humane AI: Toward AI Systems That Augment and Empower Humans by Understanding Us, our Society and the World Around Us" (grant 820437). The support is gratefully acknowledged. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of these institutions. Thanks to all colleagues who helped us find the right methods and analysis for this experiment, and of course to those who proofread the manuscript.

Funders (funder number):
Nederlandse Organisatie voor Wetenschappelijk Onderzoek
European Commission
TAILOR Connectivity Fund
National Science Foundation (1136993)
Horizon 2020 Framework Programme (820437)
