TY - CHAP
T1 - Infringements of AI on Epistemic Autonomy
T2 - A Graded Approach
AU - Bracker, Daniel
AU - van Woudenberg, René
PY - 2026
Y1 - 2026
AB - This chapter examines the tension between two epistemic goods: having instant access to accurate information through AI, and thinking for ourselves. If you can get any answer immediately from AI, is that copying rather than genuine understanding? We begin with Kant’s call to “use your own reason” and ask what this means—and whether we should even want it. After rejecting both extreme and weak epistemic egoism, we develop a graded, non-egoistic account of autonomy. We suggest that epistemic autonomy consists of stereotypical features that come in degrees: figuring things out yourself, trusting others with good reason, forming beliefs without bypassing your cognitive competences, and believing conscientiously. Using Alvarado’s framework of AI as epistemic technology, we then examine concrete cases. When nine-year-old Agnes uses Grammarly, her autonomy is minimal; when professional editor Agnes uses it, her autonomy remains high. The same AI system can enhance or diminish autonomy depending on the user’s competences and AI literacy. We identify “autonomy self-deception”—maintaining illusions of competence while developing genuine dependency. The opacity of AI systems poses particular challenges, requiring users to develop calibrated trust to preserve autonomy while benefiting from AI assistance.
UR - https://www.scopus.com/pages/publications/105026433981
UR - https://www.scopus.com/inward/citedby.url?scp=105026433981&partnerID=8YFLogxK
UR - https://www.routledge.com/Digital-Development-Technology-Ethics-and-Governance/Chen-Farina-Yu/p/book/9781032937809
U2 - 10.4324/9781003567622-6
DO - 10.4324/9781003567622-6
M3 - Chapter
AN - SCOPUS:105026433981
SN - 9781032937809
T3 - Routledge Studies in Contemporary Philosophy
SP - 77
EP - 91
BT - Digital Development
A2 - Farina, Mirko
A2 - Yu, Xiao
A2 - Chen, Jin
PB - Routledge
ER -