How Counterfactual Fairness Modelling in Algorithms Can Promote Ethical Decision-Making

Leander De Schutter, David De Cremer

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Organizational decision-makers often need to make difficult decisions. A popular approach today is to improve those decisions by using information and recommendations provided by data-driven algorithms (i.e., AI advisors). Advice is especially important when decisions involve conflicts of interest, such as ethical dilemmas. A defining characteristic of ethical decision-making is that it often involves a thought process of exploring and imagining what would, could, and should happen under alternative conditions (i.e., what-if scenarios). Such imaginative “counterfactual thinking,” however, is not performed by AI advisors unless they are pre-programmed to do so. Drawing on Fairness Theory, we identify key counterfactual scenarios that programmers can incorporate into the code of AI advisors to improve fairness perceptions. We conducted an experimental study to test our predictions, and the results showed that explanations that included counterfactual scenarios were perceived as fairer by recipients. Taken together, we believe that counterfactual modelling will improve ethical decision-making by actively modelling what-if scenarios valued by recipients. We further discuss additional benefits of counterfactual modelling, such as inspiring decision-makers to engage in counterfactual thinking within their own decision-making process.
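To make the idea concrete, the following is a minimal illustrative sketch (not taken from the article) of how an AI advisor might attach Fairness Theory's three counterfactual components to a recommendation: a "would" scenario (how the outcome would differ under an alternative), a "could" scenario (whether an alternative action was feasible), and a "should" scenario (what norms require). All names and the example content are hypothetical.

```python
# Illustrative sketch only: an AI advisor explanation augmented with
# Fairness Theory's "would / could / should" counterfactual scenarios.
# The class and function names are hypothetical, not from the article.
from dataclasses import dataclass


@dataclass
class Counterfactuals:
    would: str   # how the outcome would differ under an alternative decision
    could: str   # whether the decision-maker could have acted otherwise
    should: str  # whether an alternative was normatively required


def explain(recommendation: str, cf: Counterfactuals) -> str:
    """Combine a recommendation with its what-if scenarios into one explanation."""
    return (
        f"Recommendation: {recommendation}\n"
        f"- Would: {cf.would}\n"
        f"- Could: {cf.could}\n"
        f"- Should: {cf.should}"
    )


advice = explain(
    "Allocate the bonus pool proportionally to documented contributions.",
    Counterfactuals(
        would="An equal split would leave top contributors under-rewarded.",
        could="An equal split was feasible but was not selected.",
        should="Company policy ties rewards to measured contribution.",
    ),
)
print(advice)
```

The point of the sketch is simply that the counterfactual content is generated and shown alongside the recommendation, rather than left to the recipient's imagination.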
Original language: English
Pages (from-to): 33-44
Number of pages: 12
Journal: International Journal of Human-Computer Interaction
Volume: 40
Issue number: 1
DOIs
Publication status: Published - 2024

Bibliographical note

Publisher Copyright: © 2023 The Author(s). Published with license by Taylor & Francis Group, LLC.

