Governance by glass-box: Implementing transparent moral bounds for AI behaviour

Andrea Aler Tubella, Andreas Theodorou, Frank Dignum, Virginia Dignum

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains that directly affect human well-being. However, if AI is to improve people's lives, then people must be able to trust it, by being able to understand what the system is doing and why. Although transparency is often seen as the solution here, realistically it might not always be possible, whereas the need to ensure that the system operates within set moral bounds remains. In this paper, we present an approach to evaluate the moral bounds of an AI system based on the monitoring of its inputs and outputs. We place a 'Glass-Box' around the system by mapping moral values into explicit verifiable norms that constrain inputs and outputs, in such a way that if these remain within the box we can guarantee that the system adheres to the value. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems, from deep neural networks to agent-based systems. The explicit transformation of abstract moral values into concrete norms brings great benefits in terms of explainability: stakeholders know exactly how the system is interpreting and employing the relevant abstract moral human values, and can calibrate their trust accordingly. Moreover, by operating at this higher level we can check the compliance of the system with different interpretations of the same value.
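The monitoring idea described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): a value such as "fairness" is interpreted as a concrete, verifiable norm, expressed as a predicate over the system's inputs and outputs, and a wrapper checks compliance while treating the system itself as opaque. All names (`GlassBox`, `opaque_scorer`, the counterfactual norm) are assumptions introduced for illustration.

```python
from typing import Any, Callable, Dict, List, Tuple

# A norm is a verifiable predicate over (inputs, output) pairs.
Norm = Callable[[Dict[str, Any], Any], bool]


class GlassBox:
    """Wraps an opaque system and checks its I/O against explicit norms."""

    def __init__(self, system: Callable[[Dict[str, Any]], Any],
                 norms: Dict[str, Norm]) -> None:
        self.system = system
        self.norms = norms

    def run(self, inputs: Dict[str, Any]) -> Tuple[Any, List[str]]:
        # The system's internals are never inspected, only its I/O.
        output = self.system(inputs)
        violations = [name for name, norm in self.norms.items()
                      if not norm(inputs, output)]
        return output, violations


# Hypothetical opaque system: a loan-approval scorer.
def opaque_scorer(inputs: Dict[str, Any]) -> bool:
    return inputs["income"] > 30000


# One possible interpretation of "fairness": the decision must not change
# when the protected attribute is flipped (a counterfactual check).
def ignores_protected(inputs: Dict[str, Any], output: Any) -> bool:
    flipped = dict(inputs, gender="F" if inputs["gender"] == "M" else "M")
    return opaque_scorer(flipped) == output


box = GlassBox(opaque_scorer, {"fairness:counterfactual": ignores_protected})
decision, violated = box.run({"income": 45000, "gender": "M"})
# Here the scorer ignores gender, so no norm is violated.
```

Because the norms are explicit predicates rather than properties of the model's internals, the same checks apply unchanged whether the wrapped system is a neural network or an agent-based system, and swapping in a different interpretation of the same value only requires replacing the predicate.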
Original language: English
Title of host publication: Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019
Editors: S. Kraus
Publisher: International Joint Conferences on Artificial Intelligence
Pages: 5787-5793
ISBN (Electronic): 9780999241141
DOIs
Publication status: Published - 2019
Externally published: Yes
Event: 28th International Joint Conference on Artificial Intelligence, IJCAI 2019 - Macao, China
Duration: 10 Aug 2019 - 16 Aug 2019

Publication series

Name: IJCAI International Joint Conference on Artificial Intelligence
ISSN (Print): 1045-0823

Conference

Conference: 28th International Joint Conference on Artificial Intelligence, IJCAI 2019
Country/Territory: China
City: Macao
Period: 10/08/19 - 16/08/19

Funding

This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, and by the European Union's Horizon 2020 research and innovation programme under grant agreement No 825619.

Funders / Funder number
Wallenberg AI, Autonomous Systems and Software Program (WASP)
European Union's Horizon 2020 research and innovation programme
Horizon 2020 Framework Programme: 952026, 825619
IEEE Standards Association
United Nations
Engineering and Physical Sciences Research Council
Knut och Alice Wallenbergs Stiftelse
