Tuesday, 28 May 2024 11:10

Generative AI cannot be an author but... can it be a reviewer?

Written by Kate Mc Intyre


Interactive generative artificial intelligence (AI) tools fuelled by large language models (LLMs), such as ChatGPT, have rapidly been adopted by scientific researchers worldwide. Given the surprising facility of the newest generations of these tools for generating and refining text, a number of scientific journals, publishers and related organizations have now established rules that generative AI tools cannot be listed as co-authors because they cannot be held accountable for the content. Beyond that restriction, however, many publishers and journals have taken a pragmatic approach in which authors may use these tools as long as they transparently acknowledge their use and account for how they used them (whether for writing, summarizing, editing, coding, etc.).

Dr Vasiliki (Vicky) Mollaki’s online presentation to UniSIG ‘Generative AI cannot be an author but... can it be a reviewer? Beyond publishing policies on AI’ on 26 April 2024 covered the evolving ethical landscape around another potential use of generative AI tools – peer review. The capacity of these tools to rapidly summarize and even critique text makes them tempting helpmates for reviewers, but their use raises ethical and legal questions about the integrity of the resulting review and about whether a reviewer’s use of such a tool could violate data security or privacy laws, author confidentiality or proprietary rights.

At the core of Vicky’s talk was her research exploring the rules that have been established so far and what needs to happen in the future. Her work was inspired by a triggering event at a journal where she is an editor: an author brought it to the editorial team’s attention that one review appeared to have been written using a generative AI tool. This was both a concern for the author and a challenge for the editors because, at the time, there was neither a protocol for handling this scenario nor a clear source of guidance.

Vicky’s subsequent research into existing guidelines found that only two of the ten largest scientific publishers had stated policies on generative AI use by reviewers. The Committee on Publication Ethics (COPE) provided guidelines only for use by authors, while the World Association of Medical Editors (WAME) had updated its guidelines to cover use by editors and reviewers (see links below).

As Vicky carefully laid out in her presentation, this raises many questions. While rules for reviewers’ use of these tools could follow the full disclosure model applied to authors, the privacy and legality issues might even make it necessary to ban the use of generative AI by reviewers. Either scenario raises the very difficult question of how journal editors or authors would detect and prove violations of the rules. It also requires that there be ways to enforce the rules and consequences for violators. This sounds draconian, but as Vicky highlighted, failing to address the issue could jeopardize reviewer autonomy, the trust relationship between reviewers and editors, and authors’ trust in the peer review process.

Vicky’s talk was delivered online from a warm but dusty Athens to an avid audience of blanket- and scarf-clad participants spread across northern Italy, the Netherlands, Germany and Finland. The subsequent discussion was lively, touching on the challenge of identifying whether a reviewer has used generative AI, the policies that funding agencies have developed on its use in review, and even a possible future in which smaller journal-based LLM tools are built into the review process so that reviewers can access their benefits while still respecting authors’ rights.

Dr Vasiliki (Vicky) Mollaki is a scientific officer at the National Commission for Bioethics and Technoethics in Athens. She has degrees in genetics from Cardiff and Sheffield universities. Dr Mollaki is on the editorial board of the journal Bioethica and has been an external ethics expert for the European Commission since 2016.

Links

Mollaki, V. (2024). ‘Death of a reviewer or death of peer review integrity? The challenges of using AI tools in peer reviewing and the need to go beyond publishing policies’. Research Ethics, 20(2), 239–250. https://doi.org/10.1177/17470161231224552

World Association of Medical Editors statement on ‘Chatbots, Generative AI, and Scholarly Manuscripts – WAME Recommendations on Chatbots and Generative Artificial Intelligence in Relation to Scholarly Publications’. https://wame.org/page3.php?id=106

Committee on Publication Ethics statement on ‘Authorship and AI tools’. https://publicationethics.org/cope-position-statements/ai-author

The Dutch Research Council on ‘NWO’s preliminary position on generative AI in the application and review process’. https://www.nwo.nl/en/nwos-preliminary-position-on-generative-ai-in-the-application-and-review-process

Blog post by: Kate Mc Intyre

Website: kate-mcintyre

Twitter (X): McintyreGenEd

 
