COPE Forum – can peer review weather the storm?

On 7 September 2023, the COPE Forum took place, discussing peer-review models and examining the current threats to the system and the challenges faced by all parties involved.

Peer review has long been the cornerstone of scholarly publishing, serving as a quality control mechanism to ensure the accuracy and integrity of research. This communal effort involves authors, editors, publishers, and reviewers working together to uphold the standards of academic discourse. However, the peer-review process is facing unprecedented challenges that threaten its effectiveness. Here, we will discuss the importance of peer review, the emerging challenges it faces, and potential solutions to fortify this vital system.


Peer Review

Peer review plays a pivotal role in maintaining the credibility and trustworthiness of scholarly publications. Its benefits include:

  1. Quality Assurance: Peer review helps identify errors, flaws, and biases in research, ensuring that only high-quality and reliable studies are published.
  2. Validation of Findings: It serves as a validation mechanism, confirming the authenticity and significance of research findings.
  3. Feedback for Improvement: Reviewer feedback provides authors with valuable insights for improving their work.
  4. Conflict Resolution: Peer review resolves conflicts and disputes regarding research claims and methodology.


Challenges Facing Peer Review

Despite its essential role, the peer-review process is facing several challenges:

  1. Shortage of Skilled Reviewers: There is a growing scarcity of qualified reviewers willing to dedicate their time and expertise to the peer review process. This can lead to overburdened reviewers and delays in publishing.
  2. Fraud and Misconduct: Organized fraud, such as peer-review rings, fake papers, and manipulated results, threatens the integrity of peer review, undermining trust in scholarly publishing.
  3. AI and Large Language Models: The advent of AI tools and large language models has introduced new challenges, including the generation of convincing but false research papers and the potential automation of the peer-review process.


Solutions for Strengthening Peer Review

To address these challenges and preserve the integrity of peer review, several strategies can be considered:

  1. Reviewer Recognition and Training: Acknowledging and rewarding reviewers for their contributions can help motivate and retain skilled reviewers. Providing training and guidelines for reviewers can enhance the quality of their assessments.
  2. Transparency and Accountability: Journals can adopt transparent peer-review practices, such as open peer review or preprint reviews, to increase accountability and trust in the process.
  3. Technology and AI: Utilize AI tools not only to detect fraud but also to assist in the peer-review process. AI can help identify potential conflicts of interest, plagiarism, and statistical errors.
  4. Diversifying Reviewer Pools: Encourage diversity among reviewers in terms of gender, ethnicity, and geographical location to ensure a broader range of perspectives.
  5. Collaboration Among Stakeholders: Authors, editors, publishers, and reviewers should work together to establish and maintain best practices for peer review.


The peer-review process is at a critical juncture, facing challenges that threaten its efficacy and credibility. However, with concerted efforts from all stakeholders, including researchers, journals, and the broader academic community, it is possible to fortify peer review, adapt to the changing landscape, and ensure that scholarly publishing continues to uphold the highest standards of research integrity. Only through collective action can we safeguard the trust that underpins the dissemination of knowledge in academia.

What Does a Journal Administrator Actually Do?

Managing Editors, Administrators, Journal Staff, Editorial Assistants – whatever you want to call us, we play an integral role in getting your manuscript through peer review. But you may wonder what it is we actually do.

You see, we Managing Editors wear many hats. The honest answer to What It Is We Do is really that we do whatever our particular editor, publisher, and journal workflow needs us to do. But there are some tasks that are common for most of us, so here’s a quick TEH Blog rundown.

System support

Most journals these days make use of an online submission system. These systems are absolutely invaluable to the smooth running of a busy global journal (more on that here), but we are all too aware that they can be confusing and frustrating if you aren’t used to using them.

Your friendly neighbourhood Managing Editors are therefore on hand to answer any questions, resolve any upload problems, and generally support authors, reviewers, and editors in successfully navigating their way through all the buttons, links, and questions.

Administrator checks

Once you’ve submitted your manuscript (whether you needed our help to do so or not), the first thing that will happen is that somebody will check it over to make sure that nothing is missing, and that it’s suitable for peer review. And just who might that “somebody” be? You’ve guessed it: the Managing Editor.

The checks we’re asked to perform vary from journal to journal. Sometimes it is literally a case of making sure the manuscript text hasn’t been missed out by mistake, and sometimes it’s an in-depth analysis of your referencing format. Whatever the checks are, it’ll be us who get in touch to guide you through making any changes, and it’s us who will approve the manuscript for review.

Status updates

It might feel like you submitted your manuscript aaaages ago and the status in your author centre has been saying the same thing for a really long time… When the waiting game finally gets too much and you fire off an email to the journal’s Editorial Office, it’s one of us who will respond to give you some idea of what’s happening.

Unfortunately delays do happen – editors and reviewers are, after all, busy people and inevitably deadlines get missed periodically – but we are always working to keep them to a minimum, and are always happy to give you an update. You can find out more about what goes on behind the scenes here.

Point of contact

It’s not just status updates for authors that we handle, however. Been asked to review a paper but need an extension on the deadline? Drop us an email. Need to return your conflict of interest form for your accepted paper? Send it over to us. Somehow wound up with multiple accounts on the submission system that are causing you login problems? We can help with that, too.

In fact, pretty much anything you need as an author, reviewer or editor can be sent to us. If we’re unable to help you ourselves then we will know who to forward the message on to. We Managing Editors are your one-stop shop for all your peer review needs.


One of the many benefits of peer review being handled through a submission system is that we can gather data on the number of submissions, how many of those are accepted, and even where in the world the research originated.

When you’re down in the trenches working away at getting the papers assigned to you through peer review it’s not always easy to see the bigger picture, so being able to get actual figures on how many submissions are coming into your journal (and, crucially, how that compares to how many submissions you’ve received in previous years) is absolutely invaluable.

And it’s we Managing Editors who can not only get you this data, but organise it into a report that makes sense of it all.

Academic Publishing and the Rise of AI

It seems like everybody’s talking about AI these days. No longer is it just the stuff of Sci-Fi movies, it’s fast becoming a part of our day-to-day lives – writing, painting, talking; you name it, there’s an AI app available to do the hard work for you. But what impact is this having on the world of scholarly publishing?


Chatbots as co-authors

We are already seeing instances of chatbots, such as ChatGPT, being listed as co-authors on academic papers submitted to journals. This is naturally problematic – can an AI really qualify as an author?

The general consensus amongst the publishing world, for the time being at least, is that no, they can’t. We Managing Editors are now often required to check for chatbot authorship at submission so we can ask that their “contribution” to the work be listed in the Acknowledgement section rather than being given co-author status.

More information on authorship can be found in our previous post on the authorship question.


Chatbots as ghost writers

The instances where chatbots are being listed as co-authors are one thing – we are at least being told that there was AI involvement in the writing of the paper. Far more troubling are instances where papers are being predominantly or entirely written by AI and being submitted as if they had been produced by humans.

The Committee on Publication Ethics (COPE) says that “This has significant implications for research integrity, and the need for improved means and tools to detect fraudulent research. The advent of fake papers and the systematic manipulation of peer review by individuals and organisations has led editors and publishers to create measures to identify and address several of these fraudulent behaviours. However, the detection of fake papers remains difficult as tactics and tools continue to evolve on both sides.”

One of the problems with AI is that it has no moral or ethical code, so it has no qualms about falsifying data and then convincingly analysing it. This is a huge concern when it comes to the next generation of “paper mills” – groups who produce academic-looking papers purely for profit. In their hands, AI could be incredibly damaging to research, as AI-generated content is not always easy to spot.

For more from COPE, see their recent discussion on this topic.


So, what’s the future?

With AI becoming more and more part of our lives, it is quite plausible that academia will embrace a little electronic help when it comes to writing papers – academics and researchers are busy people so if AI helps to reduce their workload, then why would they not take advantage of that?

The question is really where do we draw the line, and, as this technology is so new to most of us, this is a very difficult question to answer. Certainly it needs to be clearly stated when AI has been used to generate some of the text, and how involved AI has been in generating the data on which the manuscript is based.

This is something that the publishing world is monitoring closely, with many discussions being held on the implications for research integrity. The industry is working to produce tools to help us detect when a seemingly regular paper produced by human hands may actually be nothing of the sort…

Peer Review as We Know It

The peer-review process is a funny old beast. It’s an imperfect system that varies from journal to journal and everyone has an opinion on the best way to manage it: the authors should/shouldn’t be anonymous, the reviewers should/shouldn’t be rewarded, there should be a maximum of two reviewers, there should be a minimum of three… the list goes on.


But where does the concept of peer review come from – and just how long have we been deciding whether or not to publish new research in this way?


Just how old is it?

According to some sources, the concept can be traced back to ancient Greece; however, it is more popularly attributed to Henry Oldenburg, the first editor of Philosophical Transactions of the Royal Society of London, which launched in 1665 (fun fact – it’s still in print!). It would be roughly three centuries before peer review really took off, however, with academic editors in the meantime making the judgement call themselves on whether or not to publish a paper. There is a famous story of Einstein being mortally offended when, in 1936, an academic editor had the audacity to consult external reviewers on a paper he’d submitted, without first obtaining his permission to share it prior to publication.


Why should deciding internally have been the norm for so long, however? Surely getting an independent set of eyes or two on new research makes sense – especially since the concept had been around for so long? Well, it may have made sense, but the problem wasn’t just cultural, it was practical.


It wasn’t so long ago that papers would have to be written on a typewriter, or even by hand. In order to be distributed, they would need to be copied out by hand. The reviewers would then need to be contacted by post, and there was the danger of manuscripts or reviews being lost, meaning the whole process of copying and sending had to start all over again. In the majority of cases, it simply wasn’t feasible.


So what changed?

Distribution of papers amongst experts became a somewhat easier task (albeit still dependent on snail mail) with the invention of the Xerox machine. This was just as well: the expansion of scientific endeavour during the 20th century, with new fields developing at an alarming rate, meant that it became increasingly difficult for academic editors to maintain enough of an overview of their fields to continue making judgement calls without seeking second opinions.


By the 1970s, external review was becoming the standard procedure, and the phrase “peer review” seems to have been coined at around this time. With the arrival of the internet – and, more importantly, email – the whole process became a far more streamlined proposition as we were now able to quickly and easily send files out to experts anywhere in the world without being at the mercy of the postage system.


More recently this has been taken one step further with most journals now running their peer review via an online submission system such as ScholarOne Manuscripts or Editorial Manager, much to the relief of those of us who remember running journals from an Excel spreadsheet. Although naturally a vast improvement on snail mail and filing cabinets, the spreadsheets/email system was not without its problems (but more on that here).


What’s next for peer review?

The interesting thing about the review process – be it external or internal – is that it’s always evolving to meet the needs of the scientific community, with new ideas being incorporated and new technologies being employed as and when they become available. So it’s hard to predict where it will go next – but we’re excited to find out!


Further reading

Peer Review – A Historical Perspective

A brief history of peer review

The Rise of Peer Review: Melinda Baldwin on the History of Refereeing at Scientific Journals and Funding Bodies

Features of Four Submission Systems

There are lots of submission systems available, all of which look wildly different but essentially do the same job – keeping all the information pertaining to your submission in one place where the editorial team can access it regardless of where they are in the world. For more information on why we use online systems to handle peer review, see our earlier post here.

At The Editorial Hub, our team predominantly works with online peer-review systems so they’re something we’re very familiar with. Here’s a quick introduction to some of our favourites!


ScholarOne Manuscripts (Clarivate)

ScholarOne (formerly Manuscript Central) is currently used by over 7,000 journals worldwide. If you’re involved in scholarly publishing in any way – be it as an author, reviewer, or editor – chances are you’ll have used ScholarOne at some point.

You know where you are with a ScholarOne system. The interface for authors and reviewers is fairly user-friendly with customisable instructions, and all the information that the editorial office needs is easily accessible. Generally speaking, ScholarOne is solid, dependable, and predictable – all good traits in a tool designed specifically to make life easier!


Editorial Manager (Aries Systems)

Also used by thousands of journals across the globe, Editorial Manager is a highly configurable system “optimized to streamline editorial processes and communication”.

Editorial Manager has a lot of functionality and is very customisable. It also has some great menus that give you overviews of the manuscripts in progress grouped in various ways – e.g., by editor or by status – at the click of a button.


EJPress (eJournalPress)

As with the previous two, EJPress also has a lot of functionality and is “fully configurable”.

All manuscripts in progress are sorted into folders which are preceded by a big red arrow when they contain papers that are awaiting action. It has a folder containing all chasers – reminder emails for authors, reviewers, and editors – which means it’s easy for the administrator to keep an eye on all papers with overdue tasks, regardless of what stage of peer review the paper’s reached.


ReView (River Valley Technologies)

ReView is a relatively new system designed to be as user-friendly as possible, with an intuitive interface that only shows users the information they need to carry out the task at hand.

One big plus for ReView is its native handling of LaTeX files, something which other systems can struggle with. It’s extremely customisable, so you can tailor it to your team and their preferred workflow, and the reporting function is simple to use and provides real-time data on anything you need to know.

Impact Factors

If you work in academia, you’re bound to be familiar with Impact Factors. You’ll probably know that “good journals” have an Impact Factor, and you may know that “really good journals” have a high Impact Factor. But do you know how Impact Factors are calculated? Or how journals are ranked? In short, do you know what an Impact Factor actually is?


If not, don’t worry. Consider this your Impact Factor 101.

An Impact Factor (IF) is, in essence, a fairly simple sum. A journal’s 2021 IF is calculated using citations received by that journal in 2021 for articles published in the previous two years (2019 + 2020), divided by the number of articles the journal published in those two years.

So, if in 2021 a journal received 13 citations to articles published in 2019 and 17 citations to articles published in 2020 (a total of 30), and it published 35 articles over those two years, it would have a 2021 IF of 30 ÷ 35 = 0.857.
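The calculation above is simple enough to sketch in a few lines of Python (the function name and variable names here are just for illustration):

```python
def impact_factor(citations_to_prev_two_years, articles_in_prev_two_years):
    """A journal's IF for year Y: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of articles
    the journal published in those two years."""
    return citations_to_prev_two_years / articles_in_prev_two_years

# Worked example from the text: 13 + 17 = 30 citations, 35 articles published.
if_2021 = impact_factor(13 + 17, 35)
print(round(if_2021, 3))  # 0.857
```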

The reason that only the previous two years are taken into consideration is simply to level the playing field. If citations to articles from a journal’s full history were counted, the measure would be heavily biased towards older journals, which naturally have a far larger number of articles available to be cited.

At the end of each year, the citations for each journal are counted and put into the Journal Citation Reports (JCR) which are published the following summer.


Which articles count towards an IF?

Say you have written an article that was published this year in a journal included in the JCR. Any citations your article receives next year or the year after will contribute towards the journal’s IF for those years.

Not everything published by a journal counts as “Article Content” – only Research Articles, Review Articles, Short Reports, and the like. Editorials, Book Reviews, Letters to the Editor, etc., are not classed as Article Content, so they won’t be counted in the number of articles published. Any citations they receive in the two years following publication will, however, be included in the citation count.
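This numerator/denominator asymmetry is worth seeing with numbers. A small sketch, using entirely made-up counts (none of these figures come from a real journal):

```python
# Hypothetical counts for illustration: citations received this year to items
# published in the prior two years, and items of each type published in those years.
citations = {"research_articles": 28, "editorials": 2}
published = {"research_articles": 35, "editorials": 6}

# Every citation counts toward the numerator, including those to editorials...
numerator = sum(citations.values())            # 30

# ...but only "Article Content" counts in the denominator.
denominator = published["research_articles"]   # 35

print(round(numerator / denominator, 3))       # 0.857
```

Note how the six editorials are invisible to the denominator, while the two citations they attracted still boost the IF.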


Alright, so what is a “good” or “high” Impact Factor?

The simple answer to that is the bigger the IF (i.e. the higher the number), the better it is. But, of course, it’s not quite as simple as that.

Different disciplines are likely to receive different levels of citations. For example, a scientific journal publishing up-to-the-minute research in a fast-moving field is likely to receive more citations within the “IF window” (the two years after an article is published) than a humanities journal in a field which moves somewhat more slowly. An IF of 1.000 might be brilliant in one discipline, but pretty poor in another.

To allow for this, journals are split into categories within the JCR and ranked within those. Therefore, rather than looking at their IF alone, to find the best journals within your field you should find the most relevant categories and see which journals rank highest within those.

For more information on the Journal Citation Reports (and Impact Factors, naturally), check out Clarivate’s website here.