If artificial entities such as autonomous robots become sentient beings, will it be necessary to grant robots and AI entities a legal capacity comparable to legal personhood in a society that interacts with robotics and AI appliances? Must they understand the legal consequences of their actions? In this chapter, this question is considered by analyzing the future capacities and functions of robots and AI systems, alongside the rights and duties of existing legal subjects: natural persons and (artificial) legal persons such as corporations and states. The question is posed whether AI will have the capacity to be sentient like natural persons and, perhaps, other living beings, or whether AI will always remain comparable to the subject in the Chinese room experiment. The relevance of free will, intelligence, and consciousness for natural persons acquiring legal personhood is therefore analyzed and compared with that of other beings, animals, and future sentient AI entities. The hesitance to grant legal personhood to AI is also influenced by the human conviction that doing so would increase the risk of losing control and of a "robot uprising." Man, as always, is afraid of technology getting out of hand, is convinced of his own superiority, and therefore wants to stay in control. The question is whether there must always be a natural person in the loop. In that light, the need for a certain legal personhood in a future legal framework, covering civil and even criminal liability, is discussed, including considerations proposed in a resolution of the European Parliament that may eventually lead to proposals in European policy and law.
Title of host publication: Artificial Intelligence in Medical Imaging
Subtitle of host publication: Opportunities, Applications and Risks
Editors: Erik R. Ranschaert, Sergey Morozov, Paul R. Algra
Publisher: Springer International Publishing AG
Number of pages: 34
Publication status: Published - 2019