Abstract
Artificial intelligence (AI) offers opportunities but also challenges for biomedical research and healthcare. This position paper shares the results of the international conference “Fair medicine and AI” (online, 3–5 March 2021), at which scholars from science and technology studies (STS), gender studies, and the ethics of science and technology formulated opportunities, challenges, and research and development desiderata for AI in healthcare. AI systems and solutions, which are being rapidly developed and applied, may have undesirable and unintended consequences, including the risk of perpetuating health inequalities for marginalized groups. The socially robust development and implementation of AI in healthcare therefore require urgent investigation. There is a particular dearth of studies on human–AI interaction and on how it may best be configured to dependably deliver safe, effective, and equitable healthcare. To address these challenges, we need to establish diverse and interdisciplinary teams equipped to develop and apply medical AI in a fair, accountable, and transparent manner. We emphasize the importance of including social science perspectives in the development of intersectionally beneficent and equitable AI for biomedical research and healthcare, in part by strengthening AI health evaluation.
Original language | English |
---|---|
Article number | 102658 |
Pages (from-to) | 1-9 |
Number of pages | 9 |
Journal | Artificial Intelligence in Medicine |
Volume | 144 |
Early online date | 4 Sept 2023 |
DOIs | |
Publication status | Published - Oct 2023 |
Bibliographical note
Funding Information: This work was supported by the Wellcome Trust [grant number 219875/Z/19/Z]; the BMBF [grant number FKZ 01GP1791]; acatech NATIONAL ACADEMY OF SCIENCE AND ENGINEERING and the Körber Stiftung; the FWF [project P-32554, “A reference model of explainable Artificial Intelligence for the Medical Domain”]; and United Kingdom Research and Innovation: Trusted Autonomous Systems Programme [grant number EP/V026607/1]. EFV acknowledges that this collaborative paper is part of the Safe and Sound project, which has received funding from the European Union's Horizon-ERC programme under Grant Agreement No. 101076929. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
Publisher Copyright:
© 2023
Funding
Funders | Funder number |
---|---|
European Union's Horizon-ERC | 101076929 |
Trusted Autonomous Systems Programme | EP/V026607/1 |
Wellcome Trust | 219875/Z/19/Z |
Bundesministerium für Bildung und Forschung | FKZ 01GP1791 |
Austrian Science Fund | P-32554 |
Körber-Stiftung | |
Keywords
- Bias
- Discrimination
- Health equity
- Inequalities
- Medicine