Modeling and Validation of Biased Human Trust

M. Hoogendoorn, S.W. Jaffry, P. van Maanen, J. Treur

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Abstract

When intelligent agents interact with humans, having an estimate of the human's trust levels, for example in other agents or services, can be of great importance. Most existing models of human trust are based on some rationality assumption and do not represent biased behavior, whereas a vast literature in the Cognitive and Social Sciences indicates that humans often exhibit non-rational, biased behavior with respect to trust. This paper reports how several variations of biased human trust models have been designed, analyzed, and validated against empirical data. The results show that such biased trust models predict human trust significantly better. © 2011 IEEE.
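The paper itself does not spell out its model equations in this abstract. As a purely illustrative sketch of the general idea, the following shows a generic experience-based trust update rule with a bias parameter; the update rule, the bias function, and all parameter names and values are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch only: a generic trust-update rule in which a bias
# parameter skews how raw experiences are perceived before trust is updated.
# All formulas and parameters here are hypothetical, not the paper's model.

def update_trust(trust, experience, gamma=0.2, bias=0.5):
    """One discrete-time trust update.

    trust and experience lie in [0, 1]; gamma is the learning rate.
    bias = 0.5 is neutral; bias > 0.5 inflates positive experiences
    (an optimistic bias), bias < 0.5 deflates them (a pessimistic bias).
    """
    # Hypothetical bias function: warp the experience toward 1 or 0.
    perceived = experience ** ((1.0 - bias) / bias) if bias > 0 else 0.0
    # Exponential-smoothing-style update toward the perceived experience.
    return trust + gamma * (perceived - trust)

# Example: under repeated positive experiences, an optimistically biased
# observer's trust rises above an unbiased observer's.
t_biased, t_unbiased = 0.5, 0.5
for _ in range(10):
    t_biased = update_trust(t_biased, 0.8, bias=0.7)
    t_unbiased = update_trust(t_unbiased, 0.8, bias=0.5)
print(t_biased, t_unbiased)
```

Validating such a model against empirical data, as the paper reports, would then amount to fitting parameters like `gamma` and `bias` per participant and comparing the prediction error of biased versus unbiased variants.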
Original language: English
Title of host publication: Proceedings of the 11th IEEE/WIC/ACM International Conference on Intelligent Agent Technology
Editors: O. Boissier, et al.
Publisher: IEEE Computer Society Press
Pages: 256-263
DOIs
Publication status: Published - 2011
Event: IEEE/WIC/ACM International Conference on Intelligent Agent Technology
Duration: 22 Jul 2011 - 27 Jul 2011

Conference

Conference: IEEE/WIC/ACM International Conference on Intelligent Agent Technology
Period: 22/07/11 - 27/07/11

