Academic Publishing and the Rise of AI

May 15, 2023 | Scholarly publishing

It seems like everybody’s talking about AI these days. No longer is it just the stuff of sci-fi movies; it’s fast becoming part of our day-to-day lives – writing, painting, talking; you name it, there’s an AI app available to do the hard work for you. But what impact is this having on the world of scholarly publishing?


Chatbots as co-authors

We are already seeing instances of chatbots, such as ChatGPT, being listed as co-authors on academic papers submitted to journals. This is naturally problematic – can an AI really qualify as an author?

The general consensus within the publishing world, for the time being at least, is that no, it can’t. We Managing Editors are now often required to check for chatbot authorship at submission so we can ask that the chatbot’s “contribution” to the work be listed in the Acknowledgement section rather than being given co-author status.

More information on authorship can be found in our previous post on the authorship question.


Chatbots as ghostwriters

Instances where chatbots are listed as co-authors are one thing – at least we are being told that AI was involved in writing the paper. Far more troubling are cases where papers are predominantly or entirely written by AI and submitted as if they had been produced by humans.

The Committee on Publication Ethics (COPE) says that “This has significant implications for research integrity, and the need for improved means and tools to detect fraudulent research. The advent of fake papers and the systematic manipulation of peer review by individuals and organisations has led editors and publishers to create measures to identify and address several of these fraudulent behaviours. However, the detection of fake papers remains difficult as tactics and tools continue to evolve on both sides.”

One of the problems with AI is that it doesn’t have a moral or ethical code, so it has no qualms about falsifying data and then convincingly analysing it. This is a huge concern when it comes to the next generation of “paper mills” – groups that produce academic-looking papers purely for profit. In their hands, AI could be incredibly damaging to research, as AI-generated text is not always easy to spot.

For more from COPE, see their recent discussion on this topic.


So, what’s the future?

With AI becoming more and more a part of our lives, it is quite plausible that academia will embrace a little electronic help when it comes to writing papers – academics and researchers are busy people, so if AI helps to reduce their workload, why wouldn’t they take advantage of that?

The real question is where we draw the line, and, as this technology is so new to most of us, that is a difficult question to answer. At the very least, it should be clearly stated when AI has been used to generate any of the text, and how involved AI was in generating the data on which the manuscript is based.

This is something the publishing world is monitoring closely, with many discussions being held about the implications for research integrity. The industry is working to produce tools to help us detect when a seemingly ordinary paper, apparently produced by human hands, may actually be nothing of the sort…