TY - JOUR
T1 - Helpful, harmless, honest? Sociotechnical limits of AI alignment and safety through Reinforcement Learning from Human Feedback
AU - Dahlgren Lindström, Adam
AU - Methnani, Leila
AU - Krause, Lea
AU - Ericson, Petter
AU - de Rituerto de Troya, Íñigo Martínez
AU - Coelho Mollo, Dimitri
AU - Dobbe, Roel
N1 - Publisher Copyright:
© The Author(s) 2025.
PY - 2025/6
Y1 - 2025/6
AB - This paper critically evaluates attempts to align Artificial Intelligence (AI) systems, especially Large Language Models (LLMs), with human values and intentions through Reinforcement Learning from Feedback methods, involving either human feedback (RLHF) or AI feedback (RLAIF). Specifically, we show the shortcomings of the broadly pursued alignment goals of honesty, harmlessness, and helpfulness. Through a multidisciplinary sociotechnical critique, we examine both the theoretical underpinnings and practical implementations of RLHF techniques, revealing significant limitations in how they capture the complexities of human ethics and contribute to AI safety. We highlight tensions inherent in the goals of RLHF, as captured in the HHH principle (helpful, harmless, and honest). In addition, we discuss ethically relevant issues that tend to be neglected in discussions about alignment and RLHF, among which are the trade-offs between user-friendliness and deception, flexibility and interpretability, and system safety. We offer an alternative vision for AI safety and ethics that positions RLHF approaches within a broader context of comprehensive design across institutions, processes, and technological systems, and we suggest establishing AI safety as a sociotechnical discipline that is open to the normative and political dimensions of artificial intelligence.
KW - AI ethics
KW - AI safety
KW - Artificial intelligence
KW - Human feedback
KW - Large language models
KW - Reinforcement learning
UR - https://www.scopus.com/pages/publications/105007225963
UR - https://www.scopus.com/inward/citedby.url?scp=105007225963&partnerID=8YFLogxK
DO - 10.1007/s10676-025-09837-2
M3 - Article
AN - SCOPUS:105007225963
SN - 1388-1957
VL - 27
SP - 1
EP - 13
JO - Ethics and Information Technology
JF - Ethics and Information Technology
IS - 2
M1 - 28
ER -