Governing AI, Protecting the Human: Digital Humanism and the Ethics of Scholarly Communication
Anthony Le Duc
leduc.anthony@asianresearchcenter.org
2026
Abstract
The rapid integration of generative AI tools into academic writing, peer review, and editorial workflows has prompted major scholarly publishers to issue policies governing their use. Yet little is known about the values and assumptions that underpin these emerging policies or about how they shape global knowledge production. This study conducts a qualitative comparative analysis of the AI policies of five major publishers (Elsevier, Taylor & Francis, SAGE Publishing, Springer Nature, and Cambridge University Press) to examine how they articulate and regulate human agency, creativity, and responsibility in an era of accelerating automation. Drawing on the framework of digital humanism, the article argues that despite differences in restrictiveness, procedural detail, and disclosure requirements, the five publishers converge on the insistence that AI must enhance rather than replace human judgment. At the same time, divergences in policy depth, enforcement mechanisms, and institutional capacity reflect the challenges of translating humanist principles into concrete practices. The analysis also considers the implications of these policies for scholars in non-English-speaking and under-resourced contexts, where AI tools can both mitigate and exacerbate structural inequities. The article concludes by calling for collaborative, context-sensitive approaches to AI governance in scholarly communication, and it affirms the need for ongoing dialogue among publishers, researchers, technologists, and other relevant stakeholders to ensure that digital innovation and use remain guided by human-centered values.
Keywords: AI policies, academic writing, academic publishing, peer review, digital humanism, AI ethics