Machine medical ethics when a human is delusive but the android has its wits about him

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review


When androids take care of delusive patients, ethico-epistemic concerns crop up about an agency's good intent and about why we would follow its advice. Robots are not human but may deliver correct medical information, whereas Alzheimer's patients are human but may be mistaken. If humanness is not the issue, do we then base our trust on truth? Truth is what can be logically verified given certain principles, which you must already adhere to in the first place; in other words, truth comes full circle. Does it come from empirical validation, then? That is a hard one too, because we access the world through biased sense perceptions and flawed measurement tools: we see what we think we see. Probably, the attribution of ethical qualities comes from pragmatics: if an agency affords delivering the goods, it is a "good" agency. If that happens regularly and in a predictable manner, the agency becomes trustworthy. Computers can be made more predictable than Alzheimer's patients and, in that sense, may be considered morally "better" than delusive humans. That is, if we ignore the existence of graded liabilities. That is why I developed a responsibility self-test that can be used to navigate the moral minefield of ethical positions that arises from differently weighing or prioritizing the principles of autonomy, non-maleficence, beneficence, and justice.
Original language: English
Title of host publication: Machine medical ethics
Editors: S.P. van Rysewyk, M.A. Pontier
Place of publication: Berlin, Heidelberg
ISBN (Print): 9783319081083
Publication status: Published - 2015

Publication series

Name: Intelligent Systems, Control and Automation: Science and Engineering

