Bias, journalistic endeavours, and the risks of artificial intelligence

Mark Leiser

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review

Abstract

Artificial intelligence is increasingly used throughout all processes of the news cycle. AI also has untapped corrective potential: by learning to point readers to diverse, high-quality, and/or legitimate news after exposure to ‘fake news’, ‘false narratives’, and disinformation, AI can play a powerful role in cleaning up the information ecosystem. Yet AI systems often ‘learn’ from training data that contains historical inaccuracies and biases, with the result that discriminatory attitudes and behaviours become embedded in their outputs. Because this training data often does not contain personal information, the regulation of AI in the news production cycle is largely overlooked by legal commentators. Accordingly, this chapter lays out the risks and challenges that AI poses in both journalistic content creation and moderation, especially through machine learning in the post-truth world. It also assesses the media’s rights and responsibilities when using AI in journalistic endeavours in light of the legislative process surrounding the EU’s draft AI regulation.
Original language: English
Title of host publication: Artificial Intelligence and the Media
Subtitle of host publication: Reconsidering Rights and Responsibilities
Editors: Taina Pihlajarinne, Anette Alén-Savikko
Publisher: Edward Elgar Publishing Ltd.
Chapter: 1
Pages: 8-32
Number of pages: 25
ISBN (Electronic): 9781839109973
ISBN (Print): 9781839109966
DOIs
Publication status: Published - 2022
Externally published: Yes
