Robot ethical training with dynamic ethical preference logic

Shuai Wang*

*Corresponding author for this work

Research output: Chapter in Book / Report / Conference proceeding (Conference contribution, Academic, peer-reviewed)

Abstract

The ethical training of robots has rarely been studied formally. We address this need by introducing Dynamic Ethical Preference Logic (DEPL), an extension of deontic logic, and present ethical training as model updating with respect to contrary-to-duty (CTD) obligations.
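
As a quick illustration of a contrary-to-duty obligation (the example below is the standard Chisholm scenario from the deontic-logic literature, not taken from this paper), consider the norm set

  O(h)          He ought to go to the assistance of his neighbours.
  O(h → t)      It ought to be that if he goes, he tells them he is coming.
  ¬h → O(¬t)    If he does not go, he ought not to tell them he is coming.
  ¬h            In fact, he does not go.

In Standard Deontic Logic the first two formulas yield O(t) via the K axiom, while the last two yield O(¬t) by modus ponens; together with the D axiom O(φ) → ¬O(¬φ) the set is inconsistent. Preference-based deontic logics avoid this by ranking possible worlds rather than deriving a flat set of obligations, which is roughly the intuition behind treating ethical training as updates to a robot's ethical preference ordering.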

Original language: English
Title of host publication: Advances in Cooperative Robotics
Subtitle of host publication: Proceedings of the 19th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines, CLAWAR 2016
Publisher: World Scientific Publishing Co. Pte Ltd
Pages: 611-618
Number of pages: 8
ISBN (Print): 9789813149120
Publication status: Published - 1 Jan 2016
Externally published: Yes
Event: 19th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines, CLAWAR 2016 - London, United Kingdom
Duration: 12 Sep 2016 – 14 Sep 2016

Publication series

Name: Advances in Cooperative Robotics: Proceedings of the 19th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines, CLAWAR 2016

Conference

Conference: 19th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines, CLAWAR 2016
Country: United Kingdom
City: London
Period: 12/09/16 – 14/09/16

Keywords

  • Deontic logic
  • Game semantics
  • Model checking
  • Robot ethics
