Publishing with purpose: Using AI to enhance scientific discourse

Abstract

The introduction of generative AI into scientific publishing presents both opportunities and risks for the research ecosystem. While AI could enhance knowledge creation and streamline research processes, it may also amplify existing problems within the “research industrial complex” – a system that prioritises publication metrics over meaningful scientific progress. In this viewpoint article I suggest that generative AI is likely to reinforce harmful processes unless scientific journals and editors use these technologies to transform themselves into vibrant knowledge communities that facilitate meaningful discourse and collaborative learning. I describe how AI could support this transformation by surfacing connections between researchers’ work, making peer review more dialogic, enhancing post-publication discourse, and enabling multimodal knowledge translation. However, implementing this vision faces significant challenges, deeply rooted in the entrenched incentives of the current academic publishing system. Universities evaluate faculty based largely on publication counts, funding bodies rely on traditional metrics for grant decisions, and publishers benefit from maintaining existing models. Making meaningful change therefore requires coordinated action across multiple stakeholders who must be willing to accept short-term costs for long-term systemic benefits. The key to success lies in consistently returning to journals’ core purpose: advancing scientific knowledge through thoughtful research and professional dialogue. By reimagining journals as AI-supported communities rather than metrics-driven repositories, we can better serve both the scientific community and the broader society it aims to benefit.

Metadata

  • Status: Working Paper
  • Last updated: 27 January 2025
  • Keywords: editor, generative AI, journal, research industrial complex, scientific publication
  • Peer-reviewed: No
  • Preprint: Open Science Framework (OSF)
  • Preprint DOI: 10.31219/osf.io/d7hwf_v1

Introduction

The introduction of generative AI into the research publication process presents both unprecedented opportunities and significant risks. While this technology could enhance the process of knowledge creation1, generate novel hypotheses2, and streamline the research process3, it may also reinforce and amplify existing problems in academic publishing4. One of the most fundamental of these problems is the embedding of the academic publication ecosystem within a “research industrial complex” (RIC) with a clear conflict of interest between publishers and their journals, universities, and funding organisations. This system prioritises metrics like publication counts and impact factors over meaningful contributions to knowledge creation, creating powerful incentives for researchers to focus on quantity over quality4.

These incentives drive researchers to publish at an increasing pace, at the expense of thoughtful inquiry and real-world impact. In the existing system the metrics that matter – the h-index, citation counts, and their variants – reward a publication for its use by other academics rather than for its use by anyone working in practice4. This dynamic is especially concerning in clinical fields, where research should ultimately translate into better patient care rather than a drive towards higher – and potentially meaningless – citation metrics. With the introduction of AI tools that can dramatically accelerate article production (as opposed to knowledge creation), we risk intensifying these accelerationist incentives unless we fundamentally rethink the role of journals in our scientific communities.

This viewpoint article argues that academic journals should begin a process of transformation, shifting away from their current role as publication platforms, towards learning communities that facilitate meaningful discourse and collaborative engagement. Rather than serving primarily as credentialing mechanisms for the RIC, journals should become spaces where clinicians, researchers, and educators can collectively advance knowledge at the cutting edge of our field. By reimagining journals as communities, we can harness AI’s potential to enhance scientific inquiry, ultimately producing research that better serves our patients and profession.

Research industrial complex

The concept of an “industrial complex” – first articulated by President Eisenhower in describing the military-industrial complex – helps illustrate the problematic dynamics in the academic publishing ecosystem. Just as the military-industrial complex creates self-perpetuating cycles that prioritise institutional interests over the public good, today’s research publishing system has developed its own self-reinforcing patterns that work against the interests of patients and practitioners5. While the academic publishing ecosystem may not exactly mirror other industrial complexes, the analogy usefully highlights how interconnected institutional incentives can create self-perpetuating cycles that prioritise metrics over their purported missions. Just as defence contractors benefit from continued military spending regardless of strategic outcomes, publishers, universities, and funding bodies have developed mutually reinforcing systems that reward publication volume and profit over meaningful scientific progress6.

In the scientific community, this manifests in several important ways. The pressure to publish frequently leads researchers to slice their work into minimum publishable units, presenting the same data in different ways7. A single rehabilitation intervention trial might be parsed into methodology papers, preliminary results, sub-group analyses, and follow-up studies – each counting as a publication but making it harder for clinicians to synthesise the practical implications of the original work8. Researchers under pressure to increase their output will use generative AI to accelerate this process of re-presenting data as multiple articles. In addition, the emphasis on novel findings can discourage crucial replication studies or research on routine clinical challenges that matter deeply to practitioners but may not generate exciting headlines. Again, the use of generative AI to articulate faux novelty in persuasive narratives8 will influence which kinds of papers are published, as well as raising questions about which agendas are being privileged. And finally, the focus on quantitative publication metrics has blurred the boundary between clinically meaningful research and research that serves, among other things, a journal’s impact factor9.

These dynamics are sustained by mutually reinforcing incentives across the research ecosystem. Universities evaluate faculty performance based largely on publication counts and journal impact factors. Funding bodies look to publication metrics when making grant decisions. Individual researchers need publications for career advancement. And journal publishers benefit from a steady stream of submissions. Each actor in this system is responding rationally to their incentives, but the collective result is a publishing culture that fails to serve the core mission of advancing scientific knowledge10. Recent developments suggest that AI may already be intensifying problematic incentives in academic publishing. For example, publishers have entered into business partnerships with companies developing frontier generative AI models, allowing models to train on their content11. These deals between publishers and AI companies represent a troubling new dynamic where the pressure to produce content quickly is driven not just by traditional academic metrics, but by commercial imperatives to generate training data for AI systems. This risks further divorcing the publication process from its core purpose of advancing scientific knowledge, creating additional incentives for quantity over quality.

While there are limitations to the ‘industrial complex’ analogy (for example, publishers are not intentionally maintaining social and healthcare problems for continued profits), it nonetheless serves as a useful framework to explore the introduction of generative AI into the scientific process. There is already evidence that generative AI will enable researchers to produce papers at unprecedented speed – conducting analyses, drafting manuscripts, and responding to reviewer comments far faster than human researchers can do on their own12. While this might seem like progress, it risks further divorcing research from careful reflection and clinical relevance. Without thoughtful guardrails, AI could amplify the volume-over-value tendency that the research industrial complex has triggered.

Reimagining journals as AI-supported communities

Scientific journals originally emerged as public records of conversations between scholars working on the most difficult problems of their time, serving as forums for ongoing dialogue13. This original vision of journals as spaces for collaborative discourse stands in stark contrast to their current role as credentialing mechanisms within the research industrial complex. We might instead aim to recapture the original spirit of journals as platforms for scientific discourse, leveraging AI’s capabilities to support conversation and dialogue, rather than using it to produce more papers with less value. The future of scientific publishing lies not in becoming more efficient article-processing platforms, but in returning to the original purpose of journals: facilitating learning communities and discourse at the cutting edge of practice.

First, generative AI could help surface meaningful connections between different researchers’ work that might otherwise go unnoticed. Instead of accelerating article production, AI might be used by journals and editors to analyse a submitted manuscript’s methodology, findings, and theoretical framework to identify relevant ongoing discussions in the journal’s community14. For example, when a researcher submits a paper on a novel healthcare intervention, an AI system could not only flag related published papers, but also highlight active discussions in other communities where clinicians are debating similar approaches, or connect them with practitioners who have documented case studies in related areas. This shifts the focus from simply adding more to the literature toward joining an existing scholarly conversation, even when that conversation is happening elsewhere. It might also encourage authors and researchers to proactively connect to those existing discussions, embedding their work more effectively within those contexts.
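
As a purely illustrative sketch of how such connection-surfacing might begin, the snippet below ranks a journal’s ongoing discussion threads by textual similarity to a newly submitted abstract. The function name, the use of TF-IDF, and the assumption that discussions are stored as short text summaries are all mine rather than a description of any existing journal system; a production implementation would more plausibly use neural embeddings and richer metadata.

```python
# A minimal, illustrative sketch: rank a journal's ongoing discussion threads by
# textual similarity to a newly submitted abstract. TF-IDF is used only because it
# needs no external models; a real system would likely use neural embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def surface_related_discussions(manuscript_abstract: str,
                                discussion_summaries: list[str],
                                top_k: int = 3) -> list[tuple[str, float]]:
    """Return the top_k discussion summaries most similar to the submitted abstract."""
    corpus = [manuscript_abstract] + discussion_summaries
    vectors = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    # Compare the manuscript (row 0) against every discussion summary (rows 1..n).
    scores = cosine_similarity(vectors[0:1], vectors[1:]).flatten()
    ranked = sorted(zip(discussion_summaries, scores),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]
```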

Second, AI could make peer review more dialogic rather than merely evaluative (or in some cases, combative). Current large language models excel at asking probing follow-up questions and identifying unstated assumptions, which might help support all parties in a review process. This capability could be used to facilitate structured discussions between authors and reviewers, where the AI helps articulate points of confusion, surfaces potential counterarguments, and suggests areas where additional practitioner perspectives might be valuable, as part of a multi-turn, long-context dialogue15. Instead of reviewers (and more likely, AI-based review systems) simply judging if a paper meets the publication criteria16, the review process could facilitate a genuine dialogue about strengthening the work’s contribution to practice, while being supported by a critical yet constructive language model fine-tuned for peer review.
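
To make the idea of an AI facilitator more concrete, here is a minimal sketch of how a review dialogue might be represented and turned into a facilitation prompt. The class design, roles, and prompt wording are hypothetical, and the call to a language model is deliberately omitted because provider APIs differ.

```python
# An illustrative representation of a multi-turn review dialogue and the prompt an
# AI facilitator might receive. Roles, wording, and class design are assumptions;
# the call to a language model is deliberately omitted.
from dataclasses import dataclass, field

@dataclass
class ReviewDialogue:
    manuscript_summary: str
    turns: list = field(default_factory=list)  # each turn: {"role": ..., "text": ...}

    def add_turn(self, role: str, text: str) -> None:
        self.turns.append({"role": role, "text": text})

    def facilitation_prompt(self) -> str:
        """Build the instruction a facilitator model would be given at each turn."""
        history = "\n".join(f"{t['role']}: {t['text']}" for t in self.turns)
        return (
            "You are facilitating a peer-review dialogue. Summarise points of confusion, "
            "surface unstated assumptions, and suggest questions that would strengthen "
            "the work's relevance to practice.\n\n"
            f"Manuscript summary: {self.manuscript_summary}\n\n"
            f"Dialogue so far:\n{history}"
        )

# Hypothetical usage:
dialogue = ReviewDialogue("A trial of a tele-rehabilitation programme after knee surgery.")
dialogue.add_turn("reviewer", "The dropout rate seems high; was adherence measured?")
dialogue.add_turn("author", "Adherence was logged by the platform but not reported.")
print(dialogue.facilitation_prompt())
```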

Third, AI could help make post-publication discourse more focused and productive. Rather than having fragmented discussions across unrelated journals, comment sections, social media, and separate response papers, AI could help organise and synthesise ongoing scholarly conversations about published work, in a special section of the journal17. It could identify key themes in practitioner responses, highlight emerging consensus or disagreements, and help authors engage systematically with how their work is being interpreted and applied. This maintains the paper as a living document within an active community rather than a static artifact, with the AI-generated synthesis regenerated and updated in response to new publications and discussions from across the internet.
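
One simple way to begin organising post-publication responses is to cluster their text into candidate themes, which an editor or a language model could then label and summarise. The sketch below is an assumption-laden illustration (TF-IDF features, k-means, a fixed number of themes) rather than a recommended pipeline.

```python
# An illustrative grouping of post-publication responses into candidate themes with
# unsupervised clustering. Assumes at least n_themes responses are available.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_responses(responses: list[str], n_themes: int = 4) -> dict[int, list[str]]:
    """Group free-text responses to a published paper into n_themes clusters."""
    features = TfidfVectorizer(stop_words="english").fit_transform(responses)
    labels = KMeans(n_clusters=n_themes, n_init=10, random_state=0).fit_predict(features)
    themes: dict[int, list[str]] = {}
    for response, label in zip(responses, labels):
        themes.setdefault(int(label), []).append(response)
    return themes
```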

And finally, generative AI’s multimodal capabilities offer unique opportunities to make research more accessible and engaging across different contexts and audiences. Rather than limiting research dissemination to traditional PDF formats, journals could use AI to automatically generate alternative presentations of published work – from simplified explanations for practitioners and public audiences18, to translations in multiple languages19, to audio versions for listening while otherwise occupied20, and eventually, video summaries that visualise key concepts and findings21. This multimodal approach would help break down barriers to accessing scholarly work while enabling researchers to engage with the literature in ways that best suit their needs and preferences. However, the focus should remain on meaningful knowledge translation rather than simply multiplying formats – each alternative presentation should be thoughtfully designed to serve the journal’s community and advance scholarly discourse. This reimagining of how research is shared could help journals fulfil their core purpose of facilitating learning and dialogue.
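
At its simplest, this kind of knowledge translation could amount to maintaining a set of prompt templates, one per audience or format, applied to each newly published abstract. The format names and wording below are illustrative placeholders of my own, and the generative model call is again left abstract.

```python
# Illustrative prompt templates for turning one published abstract into several
# alternative presentations. Format names and wording are placeholders, and the
# generative model call itself is left abstract.
FORMATS = {
    "plain_language": (
        "Rewrite this abstract for a general audience, avoiding jargon: {abstract}"
    ),
    "clinician_summary": (
        "Summarise the practical implications of this study for clinicians: {abstract}"
    ),
    "audio_script": (
        "Write a two-minute spoken summary of this study for an audio digest: {abstract}"
    ),
    "translation_es": (
        "Translate this abstract into Spanish, preserving technical terms: {abstract}"
    ),
}

def build_translation_prompts(abstract: str) -> dict[str, str]:
    """Return one generation prompt per alternative presentation of the abstract."""
    return {name: template.format(abstract=abstract) for name, template in FORMATS.items()}
```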

The key to these examples is to think of AI as a support for human-to-human scientific discourse and engagement, rather than a means of producing more articles, more quickly. This reimagining requires us to change our perspective of what a “journal” is. Rather than maintaining their gatekeeping and repository functions, scientific journals could become more dynamic spaces that foster ongoing dialogue between researchers, clinicians, educators, and society. These spaces would support multiple formats for sharing knowledge, with the aim of enhancing thoughtful human connection and understanding. But most importantly, they would redefine success through evidence of real-world impact rather than meaningless citation counts. The value of AI is that it can help process and connect information at scale, enabling researchers to better embed their findings within broader contexts. The role of generative AI in scientific publishing should therefore be to support meaningful scholarly dialogue rather than obscure it with an avalanche of AI-generated content that serves only to feed the research industrial complex.

Implementing the community model

The transformation of scientific journals from gatekeepers and content repositories into vibrant knowledge communities requires a fundamental reimagining of editorial roles and processes. This transformation represents not merely an adaptation to technological change, but rather a return to journals’ original purpose of facilitating learning and discourse in practice. In this section I propose several interconnected changes that journal editors might consider implementing as part of the journal transformation process.

First, editors might reconceptualise peer review as an ongoing scholarly conversation, which takes as its starting point the practice of post-publication peer review22. The traditional model of sequential rounds of peer review could be replaced with dynamic community dialogue, where standing review communities include generative AI agents, researchers, and practitioners, who engage with manuscripts from submission through publication and beyond15. Artificial intelligence can also support this transformation by identifying relevant reviewers based on expertise and interests, surfacing meaningful connections between submitted work and existing discussions in the community14, and facilitating structured conversations by articulating points of confusion and potential counterarguments. This approach transforms peer review from a primarily evaluative process into a collaborative endeavour where AI acts as a mediator and guide, working alongside human reviewers to strengthen both the immediate work and its relevance to the broader scientific discourse.

The scope of publication formats must likewise expand beyond traditional research articles to accommodate a wider range of scholarly communication. Editors can create dedicated spaces for pre-publication discussion of protocols and methodologies, implementation experiences and case studies, practitioner perspectives, and ongoing discussions of published work. Again, AI can support this diversification by generating different versions of content tailored to various audiences while maintaining consistency, creating regularly updated topic-specific synthesis pages, and facilitating knowledge translation across languages and formats. These expanded formats acknowledge that scientific knowledge creation occurs through multiple channels and that different audiences may require different presentations of the same underlying research. This would also move us away from the idea that researchers should be rewarded per unit of publication, and towards recognising that engagement with transformed research findings may be more meaningful than simplistic publication metrics.

The role of editorial boards should also evolve away from primarily managing manuscript flow – which could be managed by dedicated AI agents – to actively cultivating the community. Board members could organise virtual journal clubs and themed discussions, connect researchers with relevant practitioners, educators, students, and other stakeholders, thereby creating opportunities for collaboration and sustained engagement. AI systems can assist by identifying emerging themes within the community that require deeper exploration, generating discussion summaries, connecting participants with shared interests, and tracking ongoing conversations. This shift positions editorial board members as facilitators of scientific discourse rather than merely arbiters of publication decisions and ‘finders of peer-reviewers’.

Fundamental to this transformation is the development of new impact metrics that better reflect journals’ expanded role in the scientific community. Rather than relying primarily on citation counts and other quantitative metrics unrelated to real-world impact, journals should work with new stakeholders to create systems that track more authentic forms of impact, for example, research implementation, changes in practice, and community engagement across multiple platforms. Instead of treating citation by other academics as the principal measure of impact, editors could use AI to analyse a wide range of other signals of real-world engagement, tracking how research findings are influencing practice and identifying patterns in community engagement and implementation. These new metrics would provide a more nuanced and meaningful assessment of journal – and researcher – contributions to scientific progress.
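
As an illustration of what aggregating such signals might look like, the sketch below combines several hypothetical engagement measures into a single indicative score. Every field name and weight is an assumption made for the sake of the example; in practice these signals and their relative importance would need to be negotiated by the journal’s community and validated against real-world outcomes.

```python
# A purely hypothetical sketch of combining alternative engagement signals into a
# single indicative score. Every field name and weight is an assumption made for
# illustration, not a recommended metric.
from dataclasses import dataclass

@dataclass
class EngagementRecord:
    guideline_mentions: int   # e.g. citations in clinical guidelines or policy documents
    practice_reports: int     # practitioner-submitted implementation reports
    community_threads: int    # active discussion threads in the journal community
    lay_summary_views: int    # views of plain-language, audio, or translated versions

def engagement_score(record: EngagementRecord,
                     weights: tuple = (3.0, 2.0, 1.0, 0.25)) -> float:
    """Weighted sum of engagement signals; weights are placeholders, not recommendations."""
    signals = (record.guideline_mentions, record.practice_reports,
               record.community_threads, record.lay_summary_views)
    return sum(w * s for w, s in zip(weights, signals))
```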

Success in this transformation requires sustained commitment from the existing stakeholders in the scientific publishing ecosystem, as well as new stakeholders, including patients, service users, and members of society. Journal editors can lead the way in implementing these changes, but authors, reviewers, and practitioners must also adapt their approach to scientific communication. Authors and researchers should view publication not as the endpoint of research but as the beginning of a dialogue with their professional communities. Reviewers will need to embrace their role as contributors to ongoing scientific conversations rather than simply being called on to judge the quality of submitted manuscripts. And practitioners should actively engage in discussions about research priorities, methodology, and implementation, bringing their practical experience to bear on both the questions being asked and the ways that findings are translated into practice.

The integration of AI tools throughout this transformation must be guided by a clear understanding of their proper role: to enhance human networks and understanding rather than to reinforce the acceleration of article publication. By thoughtfully implementing these changes, journals and editors can better serve their communities while advancing scientific knowledge and improving clinical outcomes. This evolution represents not just an adaptation to oncoming technological change, but a return to the fundamental purpose of scientific journals as platforms for scientific discourse and professional development.

Navigating challenges in the transition

The transformation of scientific journals into knowledge communities faces significant systemic challenges, deeply rooted in what I have been calling the research industrial complex. This interconnected system of incentives, metrics, and institutional practices and norms has created a self-perpetuating cycle that prioritises publication volume over other, potentially more impactful, practices and activities. And the introduction of generative AI into this ecosystem will only reinforce an accelerationist agenda that will quickly overwhelm scientific journals and their existing workflows and processes. Understanding and addressing these systemic constraints is important for any meaningful transformation of scientific publishing.

The most fundamental challenge lies in the deeply entrenched nature of current publication metrics within the broader academic ecosystem, and the use of those metrics to inform promotion decisions, grant application success, and institutional reputation. Universities, funding bodies, and promotion committees rely heavily on quantitative metrics like publication count and journal impact factors, creating powerful institutional resistance to change23. While individual journal editors might embrace the community-centred approaches suggested here, researchers – especially early-career academics – face strong systemic pressures to prioritise traditional publication metrics over meaningful engagement with their scientific communities. This path, reinforced by the introduction of generative AI, will further entrench misaligned incentives that work against the substantive reform of scientific communication and publication.

Within the current metrics-driven system, authors will be pressured into using AI to accelerate article production and processing, further feeding the “publish or perish” mentality that already distorts academic incentives. Journal editors might respond to these AI advances by simply adapting their workflows to process more papers faster, seeing increased throughput as a competitive advantage relative to other journals. Given other challenges, like finding peer reviewers and managing the increased frequency of submissions, this risks creating an AI-driven paper mill24 that serves career advancement metrics and publisher revenues while further degrading the quality of scientific discourse. The challenge for journal editors is to resist this pressure and instead harness AI’s capabilities to deepen, rather than hasten, professional dialogue.

The current publishing system generates significant revenue through traditional article processing charges and subscription models, creating financial disincentives for fundamental change. Maintaining active knowledge communities requires more sustained investment than simply processing manuscripts, yet traditional funding models do not currently support this shift. While AI could help manage some aspects of community engagement more efficiently, a transition period would require significant investment in new infrastructure and processes, and the incentives within the research industrial complex are not aligned with this new vision of scientific publication. Individual researchers or journals attempting to break free from traditional metrics-driven publishing face significant risks. However, there is hope. Forward-thinking editors and publishers could better harness the intellectual work of scientific publication that is currently freely given by engaged academics. Instead of volunteering their time for peer review in a model that serves mainly to increase the profit margins of already wealthy publishers, academics may be more likely to seek out opportunities to grow and develop as part of these new communities of learning centred around scientific journals.

This transformation is further complicated by the collective action problem – while most stakeholders might agree that the current system is sub-optimal (or actively harmful), no individual researcher, journal, or institution can safely step away from traditional metrics and publication practices while everyone else continues to use them. A researcher who focuses on ‘building community’ rather than increasing their publication rate risks being denied promotion. A journal that slows down to emphasise thoughtful discourse might see submissions decline. And universities that ignore traditional metrics could find themselves dropping in institutional rankings. Making meaningful change therefore requires coordinated action across multiple stakeholders, who must be willing to accept short-term costs for long-term systemic benefits. This coordination challenge helps explain why many previous attempts at reform have struggled to gain traction despite widespread recognition of the system’s flaws. And while the system remains largely unchanged, I see the potential for generative AI to act as a forcing function – a new variable to be considered. The first step may require an acknowledgement that traditional academic impact metrics are meaningless relative to the real-world impact of improving society.

Throughout all these challenges, the central question remains: how does the scientific community ensure that any reforms influenced by the integration of generative AI serve to enhance rather than merely accelerate scientific discourse? The answer lies in consistently returning to the core purpose of journals, which is to advance scientific knowledge and improve practice through thoughtful research and professional dialogue. This requires actively resisting the pressures of the research industrial complex while building new, AI-supported systems that better serve the needs of scientific communities and the broader areas of society they serve.

Conclusion

The rise of generative AI marks a critical juncture for scientific publishing. While AI could accelerate existing problems within the research industrial complex by enabling ever-faster article production, it also presents an opportunity to fundamentally reimagine scientific journals and the communities they serve. Rather than using AI to simply process more papers, more quickly, we can use these technologies to transform journals from metrics-driven repositories of content into vibrant communities that facilitate meaningful discourse and collaborative learning. This transformation will require coordinated action across the scientific ecosystem, with authors, journals, institutions, and funding bodies working together to prioritise genuine scholarly dialogue over publication counts. The future of scientific publishing lies not in using AI to accelerate publication, but in using it to support the kind of thoughtful, community-driven discourse that advances both knowledge and practice.


References

  1. Khalifa M, Albadawy M. Using artificial intelligence in academic writing and research: An essential productivity tool. Comput Methods Programs Biomed Update. 2024;5:100145. doi:10.1016/j.cmpbup.2024.100145
  2. Si C, Yang D, Hashimoto T. Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers. Published online September 6, 2024. Accessed September 10, 2024. http://arxiv.org/abs/2409.04109
  3. Bogin B, Yang K, Gupta S, et al. SUPER: Evaluating Agents on Setting Up and Executing Tasks from Research Repositories. Published online September 11, 2024. Accessed October 4, 2024. http://arxiv.org/abs/2409.07440
  4. Chu JSG, Evans JA. Slowed canonical progress in large fields of science. Proc Natl Acad Sci. 2021;118(41). doi:10/gm2qmh
  5. Ezell JM. The Health Disparities Research Industrial Complex. Soc Sci Med. 2024;351:116251. doi:10.1016/j.socscimed.2023.116251
  6. Buranyi S. Is the staggeringly profitable business of scientific publishing bad for science? The Guardian. https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science. June 27, 2017. Accessed November 17, 2021.
  7. Norman G. Data dredging, salami-slicing, and other successful strategies to ensure rejection: twelve tips on how to not get your paper published. Adv Health Sci Educ. 2014;19(1):1-5. doi:10.1007/s10459-014-9494-8
  8. Büttner F, Ardern CL, Blazey P, et al. Counting publications and citations is not just irrelevant: it is an incentive that subverts the impact of clinical research. Br J Sports Med. 2021;55(12):647-648. doi:10/ghq4z3
  9. Callaway E. Beat it, impact factor! Publishing elite turns against controversial metric. Nature. 2016;535(7611):210-211. doi:10/b8wb
  10. Edwards MA, Roy S. Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition. Environ Eng Sci. 2017;34(1):51-61. doi:10.1089/ees.2016.0223
  11. Jack P. Academic backlash as publisher lets Microsoft train AI on papers. Times Higher Education (THE). July 30, 2024. Accessed December 18, 2024. https://www.timeshighereducation.com/news/academic-backlash-publisher-lets-microsoft-train-ai-papers
  12. Noy S, Zhang W. Experimental evidence on the productivity effects of generative artificial intelligence. Science. 2023;381(6654):187-192. doi:10.1126/science.adh2586
  13. Baldwin M. Peer Review. Encycl Hist Sci. 2020;4(1). doi:10.34758/srde-jw27
  14. Liu F, Xue S, Wu J, et al. Deep Learning for Community Detection: Progress, Challenges and Opportunities. In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization; 2020:4981-4987. doi:10.24963/ijcai.2020/693
  15. Tan C, Lyu D, Li S, et al. Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions. Published online June 9, 2024. doi:10.48550/arXiv.2406.05688
  16. Checco A, Bracciale L, Loreti P, Pinfield S, Bianchi G. AI-assisted peer review. Humanit Soc Sci Commun. 2021;8(1):1-11. doi:10.1057/s41599-020-00703-8
  17. Jiang Y, Shao Y, Ma D, Semnani SJ, Lam MS. Into the Unknown Unknowns: Engaged Human Learning through Participation in Language Model Agent Conversations. Published online October 17, 2024. doi:10.48550/arXiv.2408.15232
  18. Ayre J, Mac O, McCaffery K, et al. New Frontiers in Health Literacy: Using ChatGPT to Simplify Health Information for People in the Community. J Gen Intern Med. 2024;39(4):573-577. doi:10.1007/s11606-023-08469-w
  19. Lion KC, Lin YH, Kim T. Artificial Intelligence for Language Translation: The Equity Is in the Details. JAMA. 2024;332(17):1427-1428. doi:10.1001/jama.2024.15296
  20. Dihan QA, Nihalani BR, Tooley AA, Elhusseiny AM. Eyes on Google’s NotebookLM: using generative AI to create ophthalmology podcasts with a single click. Eye. Published online November 20, 2024:1-2. doi:10.1038/s41433-024-03481-8
  21. Liu Y, Zhang K, Li Y, et al. Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models. Published online April 17, 2024. doi:10.48550/arXiv.2402.17177
  22. Hunter J. Post-Publication Peer Review: Opening Up Scientific Conversation. Front Comput Neurosci. 2012;6. doi:10.3389/fncom.2012.00063
  23. Lindner MD, Torralba KD, Khan NA. Scientific productivity: An exploratory study of metrics and incentives. PLOS ONE. 2018;13(4):e0195321. doi:10.1371/journal.pone.0195321
  24. Eaton SE, Soulière M. Artificial intelligence (AI) and fake papers. COPE: Committee on Publication Ethics. March 23, 2023. Accessed December 19, 2024. https://publicationethics.org/topic-discussions/artificial-intelligence-ai-and-fake-papers

Appendix: AI writing disclosure


AI assistance disclosure: This article was written with the assistance of Claude, a large language model developed by Anthropic. The writing process involved multiple rounds of dialogue with the model, where initial ideas and arguments were iteratively refined through discussion. The model was used to help structure arguments, expand on key points, and improve clarity of expression. Some portions of the text were initially generated by Claude and then critically reviewed, edited, and integrated into the final manuscript. This collaborative process aimed to maintain scholarly rigour while leveraging AI capabilities to enhance the articulation of complex ideas. The author maintained full editorial control and responsibility for the final content, arguments, and conclusions presented in this article, and is ultimately accountable for what is written. This disclosure aligns with what I believe are emerging best practices for transparency in AI-assisted academic writing.
