Towards the end of September, Peer Review Week brought together a wide range of content on peer review, its processes, and editorial policies, highlighting recent developments as well as current threats to the system that underpins academic publishing.
Editage, in partnership with EASE, organised a discussion entitled “Envisioning a Hybrid Model of Peer Review: Integrating AI with reviewers, publishers, & authors”, led by Chris Leonard (Director of Strategy and Innovation, Cactus Communications). He was joined by Serge P.J.M. Horbach (Institute for Science in Society, Radboud University, Netherlands), Haseeb Irfanullah (independent consultant on environment, climate change, and research systems), and Marie E. McVeigh (Lead, Peer Review Operations and Publication Integrity, Mary Ann Liebert Inc).
The discussion opened by highlighting problems with peer review today, noting that we are still working with a mid-20th-century system of “gatekeeping” that has since developed into something broader: a means of setting standards for research, a space to develop work and ideas, a place where communities can collaborate, and a forum for discussing what “good” standards mean. A major issue with the peer-review process, as Serge highlighted, is finding quality reviewers – the number of invitations required has risen as subjects become more specialised and interdisciplinary fields increasingly overlap. The sheer volume of articles exacerbates this issue, and small communities of interest can no longer support growing areas of academic research on their own. The result can be exclusivity, with the network of reviewers within those communities steadily diminishing.
Haseeb raised the question of whether peer review has become overrated. If the decision on a manuscript can rest on the outcome of just two reviewer reports, does this undermine the research as a whole? Because peer review is not necessarily valued by the publisher in a financial sense, the value of its contribution is easily lost. However, it is important to understand that the communication around the research does not end with the peer-review process; it begins at publication.
As the discussion progressed, the focus turned towards a hybrid approach to peer review and what that means. A hybrid approach could see a generative AI and a human reviewer contributing to a reviewer report, with the Journal Editor providing a final commentary. This assistance would amount to “free labour”, and generative AI is a good bibliographical research tool. Marie suggested that, in cases such as this, it is best to let machines do what machines can do and let humans engage with the outcomes. For example, AI could be well suited to screening manuscripts, analysing citations, and identifying peer groups – routine, rule-based tasks that can be done quickly and efficiently. Evaluation can then be carried out by humans, who are able to assess whether the article adds to the scholarly record. Since human reviewers cannot be located quickly enough, this dual approach might be the quickest and most cost-efficient way to support peer-review processes going forwards.
As journals look to lean on AI technologies, we need to understand what a journal is and what it does. Is it still simply a vehicle for sharing and disseminating work and ideas, or is it becoming more than that – a place where communities engage and develop their insights? By involving AI, do we consider it a peer? If community is at the core of journal publishing, surely humans will be required to keep that sense of togetherness alive. Without it, it’s just computers talking to each other.