Striving for Responsible AI: Governing AI Implementation through a Technological Platform

Tamara Thuis, Natalia Levina

Research output: Contribution to Journal › Meeting Abstract › Academic

Abstract

The societal, technical, and organizational risks posed by artificial intelligence (AI) increasingly motivate organizations to invest in governance efforts to promote responsible AI development and use. Over three years, we studied a European telecommunications company to investigate the micro-processes of AI governance. Beyond creating new roles (e.g., governance officer) and governance bodies (e.g., ethical council), the organization treated an AI development platform as a key part of governing AI. Our findings revealed a frequent misalignment between the platform's intended use and its actual adoption in practice. Instead of relying on the platform, organizational actors often turned to simpler shadow tools to perform AI governance activities. Establishing, encoding, and enforcing rules around responsible AI unfolded through a series of discursive practices, including frequent model review meetings, community workshops, and self-regulating standards. These practices socialized values and norms around AI and helped bridge the gap between aspiration-driven policy and day-to-day practice. Our findings highlight the role of discourse in governing through technological tools as a means of overcoming the limitations of bureaucratic controls.
Original language: English
Journal: Academy of Management Proceedings
Volume: 2025
Issue number: 1
DOIs
Publication status: Published - 17 Jun 2025

