What Does a Journal Administrator Actually Do?

Managing Editors, Administrators, Journal Staff, Editorial Assistants – whatever you want to call us, we play an integral role in getting your manuscript through peer review. But you may wonder what it is we actually do.

You see, we Managing Editors wear many hats. The honest answer to What It Is We Do is really that we do whatever our particular editor, publisher, and journal workflow needs us to do. But there are some tasks that are common for most of us, so here’s a quick TEH Blog rundown.

System support

Most journals these days make use of an online submission system. These systems are absolutely invaluable to the smooth running of a busy global journal (more on that here), but we are all too aware that they can be confusing and frustrating if you aren’t used to using them.

Your friendly neighbourhood Managing Editors are therefore on hand to answer any questions, resolve any upload problems, and generally support authors, reviewers, and editors in successfully navigating their way through all the buttons, links, and questions.

Administrator checks

Once you’ve submitted your manuscript (whether you needed our help to do so or not), the first thing that will happen is that somebody will check it over to make sure that nothing is missing, and that it’s suitable for peer review. And just who might that “somebody” be? You’ve guessed it: the Managing Editor.

The checks we’re asked to perform vary from journal to journal. Sometimes it is literally a case of making sure the manuscript text hasn’t been missed out by mistake, and sometimes it’s an in-depth analysis of your referencing format. Whatever the checks are, it’ll be us who get in touch to guide you through making any changes, and us who will approve it for review.

Status updates

It might feel like you submitted your manuscript aaaages ago and the status in your author centre has been saying the same thing for a really long time… When the waiting game finally gets too much and you fire off an email to the journal’s Editorial Office, it’s one of us who will respond to give you some idea of what’s happening.

Unfortunately, delays do happen – editors and reviewers are, after all, busy people, and deadlines do occasionally get missed – but we are always working to keep them to a minimum, and are always happy to give you an update. You can find out more about what goes on behind the scenes here.

Point of contact

It’s not just status updates for authors that we handle, however. Been asked to review a paper but need an extension on the deadline? Drop us an email. Need to return your conflict of interest form for your accepted paper? Send it over to us. Somehow wound up with multiple accounts on the submission system that are causing you login problems? We can help with that, too.

In fact, pretty much anything you need as an author, reviewer or editor can be sent to us. If we’re unable to help you ourselves then we will know who to forward the message on to. We Managing Editors are your one-stop shop for all your peer review needs.

Reporting

One of the many benefits of peer review being handled through a submission system is that we can gather data on the number of submissions, how many of those get accepted, and even where in the world the research originated.

When you’re down in the trenches working away at getting the papers assigned to you through peer review it’s not always easy to see the bigger picture, so being able to get actual figures on how many submissions are coming into your journal (and, crucially, how that compares to how many submissions you’ve received in previous years) is absolutely invaluable.

And it’s we Managing Editors who can not only get you this data, but organise it into a report that makes sense of it all.
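As a rough illustration of the kind of number-crunching involved (this is only a toy sketch in Python, with made-up records standing in for a real system export), a few lines of code can turn a list of submissions into a year-by-year summary:

from collections import Counter

# Made-up submission records standing in for a real system export.
submissions = [
    {"year": 2021, "decision": "accept", "country": "UK"},
    {"year": 2021, "decision": "reject", "country": "Japan"},
    {"year": 2022, "decision": "accept", "country": "Brazil"},
    {"year": 2022, "decision": "reject", "country": "UK"},
    {"year": 2022, "decision": "accept", "country": "Germany"},
]

# Total submissions per year and per country of origin.
per_year = Counter(s["year"] for s in submissions)
per_country = Counter(s["country"] for s in submissions)

# Acceptance rate per year, for comparison against previous years.
for year in sorted(per_year):
    accepted = sum(1 for s in submissions
                   if s["year"] == year and s["decision"] == "accept")
    print(f"{year}: {per_year[year]} submitted, "
          f"{100 * accepted / per_year[year]:.0f}% accepted")

print("By country:", dict(per_country))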

Academic Publishing and the Rise of AI

It seems like everybody’s talking about AI these days. No longer is it just the stuff of sci-fi movies; it’s fast becoming a part of our day-to-day lives – writing, painting, talking; you name it, there’s an AI app available to do the hard work for you. But what impact is this having on the world of scholarly publishing?

 

Chatbots as co-authors

We are already seeing instances of chatbots, such as ChatGPT, being listed as co-authors on academic papers submitted to journals. This is naturally problematic – can an AI really qualify as an author?

The general consensus across the publishing world, for the time being at least, is that no, they can’t. We Managing Editors are now often required to check for chatbot authorship at submission so we can ask that their “contribution” to the work be listed in the Acknowledgements section rather than being given co-author status.

More information on authorship can be found in our previous post on the authorship question.

 

Chatbots as ghost writers

The instances where chatbots are being listed as co-authors are one thing – we are at least being told that there was AI involvement in the writing of the paper. Far more troubling are instances where papers are being predominantly or entirely written by AI and being submitted as if they had been produced by humans.

The Committee on Publication Ethics (COPE) says that “This has significant implications for research integrity, and the need for improved means and tools to detect fraudulent research. The advent of fake papers and the systematic manipulation of peer review by individuals and organisations has led editors and publishers to create measures to identify and address several of these fraudulent behaviours. However, the detection of fake papers remains difficult as tactics and tools continue to evolve on both sides.”

One of the problems with AI is that it doesn’t have a moral or ethical code, so it has no qualms about falsifying data and then convincingly analysing it. This is a huge concern when it comes to the next generation of “paper mills” – groups who produce academic-looking papers purely for profit. In their hands, AI could be incredibly damaging to research, as AI-generated content is not always easy to spot.

For more from COPE, see their recent discussion on this topic.

 

So, what’s the future?

With AI becoming more and more part of our lives, it is quite plausible that academia will embrace a little electronic help when it comes to writing papers – academics and researchers are busy people so if AI helps to reduce their workload, then why would they not take advantage of that?

The question is really where we draw the line and, as this technology is so new to most of us, that is a very difficult question to answer. Certainly it needs to be clearly stated when AI has been used to generate some of the text, and how involved AI has been in generating the data on which the manuscript is based.

This is something that the publishing world is monitoring closely, with many discussions being held about the implications for research integrity. The industry is working to produce tools to help us detect when a seemingly regular paper produced by human hands may actually be nothing of the sort…

Peer Review as We Know It

The peer-review process is a funny old beast. It’s an imperfect system that varies from journal to journal and everyone has an opinion on the best way to manage it: the authors should/shouldn’t be anonymous, the reviewers should/shouldn’t be rewarded, there should be a maximum of two reviewers, there should be a minimum of three… the list goes on.

 

But where does the concept of peer review come from – and just how long have we been deciding whether or not to publish new research in this way?

 

Just how old is it?

According to some sources, the concept can be traced back to ancient Greece; however, it is more popularly attributed to Henry Oldenburg, the first editor of Philosophical Transactions of the Royal Society of London, which launched in 1665 (fun fact – it’s still in print!). It would be roughly three centuries before external peer review really took off, however; in the meantime, academic editors made the judgement call themselves on whether or not to publish a paper. There is a famous story of Einstein being mortally offended when, in 1936, an academic editor had the audacity to consult with external reviewers on a paper he’d submitted without first obtaining his permission to share it prior to publication.

 

Why should deciding internally have been the norm for so long, however? Surely getting an independent set of eyes or two on new research makes sense – especially since the concept had been around for so long? Well, it may have made sense, but the problem wasn’t just cultural; it was practical.

 

It wasn’t so long ago that papers would have to be written on a typewriter, or even by hand. In order to be distributed, they would need to be copied out by hand. The reviewers would then need to be contacted by post, and there was always the danger of manuscripts or reviews being lost, meaning the whole copying-and-sending process had to start all over again. In the majority of cases, it simply wasn’t feasible.

 

So what changed?

Distribution of papers amongst experts became a somewhat easier task (albeit still dependent on snail mail) with the invention of the Xerox machine. Which was just as well: the expansion of scientific endeavour during the 20th century, with new fields developing at an alarming rate, meant that it became increasingly difficult for academic editors to have enough of an overview of their fields to continue making judgement calls without seeking second opinions.

 

By the 1970s, external review was becoming the standard procedure, and the phrase “peer review” seems to have been coined at around this time. With the arrival of the internet – and, more importantly, email – the whole process became a far more streamlined proposition as we were now able to quickly and easily send files out to experts anywhere in the world without being at the mercy of the postage system.

 

More recently this has been taken one step further with most journals now running their peer review via an online submission system such as ScholarOne Manuscripts or Editorial Manager, much to the relief of those of us who remember running journals from an Excel spreadsheet. Although naturally a vast improvement on snail mail and filing cabinets, the spreadsheets/email system was not without its problems (but more on that here).

 

What’s next for peer review?

The interesting thing about the review process – be it external or internal – is that it’s always evolving to meet the needs of the scientific community, with new ideas being incorporated and new technologies being employed as and when they become available. So it’s hard to predict where it will go next – but we’re excited to find out!

 

Further reading

https://blogs.scientificamerican.com/information-culture/the-birth-of-modern-peer-review/

Peer Review – A Historical Perspective

A brief history of peer review

The Rise of Peer Review: Melinda Baldwin on the History of Refereeing at Scientific Journals and Funding Bodies

Features of Four Submission Systems

There are lots of submission systems available, all of which look wildly different but essentially do the same job – keeping all the information pertaining to your submission in one place where the editorial team can access it regardless of where they are in the world. For more information on why we use online systems to handle peer review, see our earlier post here.

At The Editorial Hub, our team predominantly works with online peer-review systems so they’re something we’re very familiar with. Here’s a quick introduction to some of our favourites!

 

ScholarOne Manuscripts (Clarivate)

ScholarOne (formerly Manuscript Central) is currently used by over 7,000 journals worldwide. If you’re involved in scholarly publishing in any way – be it as an author, reviewer, or editor – chances are you’ll have used ScholarOne at some point.

You know where you are with a ScholarOne system. The interface for authors and reviewers is fairly user-friendly with customisable instructions, and all the information that the editorial office needs is easily accessible. Generally speaking, ScholarOne is solid, dependable, and predictable – all good traits in a tool designed specifically to make life easier!

 

Editorial Manager (Aries Systems)

Also used by thousands of journals across the globe, Editorial Manager is a highly configurable system “optimized to streamline editorial processes and communication”.

Editorial Manager has a lot of functionality and is very customisable. It also has some great menus that give you overviews of the manuscripts in progress grouped in various ways – e.g., by editor or by status – at the click of a button.

 

EJPress (eJournalPress)

As with the previous two, EJPress also has a lot of functionality and is “fully configurable”.

All manuscripts in progress are sorted into folders, which are preceded by a big red arrow when they contain papers awaiting action. There’s also a folder containing all chasers – reminder emails for authors, reviewers, and editors – which makes it easy for the administrator to keep an eye on all papers with overdue tasks, regardless of what stage of peer review the paper has reached.

 

ReView (River Valley Technologies)

ReView is a relatively new system designed to be as user-friendly as possible, with an intuitive interface that only shows users the information they need to carry out the task at hand.

One big plus for ReView is its native handling of LaTeX files, something which other systems can struggle with. It’s extremely customisable, so you can tailor it to your team and their preferred workflow, and the reporting function is simple to use and provides real-time data on anything you need to know.

Impact Factors

If you work in academia, you’re bound to be familiar with Impact Factors. You’ll probably know that “good journals” have an Impact Factor, and you may know that “really good journals” have a high Impact Factor. But do you know how Impact Factors are calculated? Or how journals are ranked? In short, do you know what an Impact Factor actually is?

 

If not, don’t worry. Consider this your Impact Factor 101.

An Impact Factor (IF) is, in essence, a fairly simple sum. A journal’s 2021 IF is calculated using citations received by that journal in 2021 for articles published in the previous two years (2019 + 2020), divided by the number of articles the journal published in those two years.

So, if in 2021 a journal received 13 citations to articles published in 2019 and 17 citations to articles published in 2020 (a total of 30), and published 35 articles over those two years, it would have a 2021 IF of 0.857.
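To make that arithmetic explicit, here it is as a tiny Python sketch (using the hypothetical numbers above, not any real journal’s figures):

def impact_factor(citations_to_previous_two_years, articles_published):
    # Citations received this year to content from the previous two years,
    # divided by the number of articles published in those two years.
    return citations_to_previous_two_years / articles_published

# 13 citations to 2019 articles + 17 to 2020 articles = 30,
# against 35 articles published across 2019 and 2020.
print(round(impact_factor(13 + 17, 35), 3))  # prints 0.857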

The reason that only the previous two years are taken into consideration is simply to level the playing field. If you took into account citations to articles from a journal’s full history, the figure would be heavily biased towards older journals, which naturally have a far larger back catalogue of articles available to be cited.

At the end of each year, the citations for each journal are counted and put into the Journal Citation Reports (JCR) which are published the following summer.

 

Which articles count towards an IF?

Say you have written an article that’s published this year in a journal included in the JCR. Any citations your article receives next year or the year after will contribute towards the journal’s IF for those years.

Not everything published by a journal counts as “Article Content”: only Research Articles, Review Articles, Short Reports, and the like do. Editorials, Book Reviews, Letters to the Editor, etc., are not classed as Article Content, so they aren’t counted in the number of articles published (the denominator). Any citations they receive in the two years following publication will, however, be included in the citation count (the numerator).

 

Alright, so what is a “good” or “high” Impact Factor?

The simple answer to that is the bigger the IF (i.e. the higher the number), the better it is. But, of course, it’s not quite as simple as that.

Different disciplines are likely to receive different levels of citations. For example, a scientific journal publishing up-to-the-minute research in a fast-moving field is likely to receive more citations within the “IF window” (the two years after an article is published) than a humanities journal in a field which moves somewhat more slowly. An IF of 1.000 might be brilliant in one discipline, but pretty poor in another.

To allow for this, journals are split into categories within the JCR and ranked within those. Therefore, rather than looking at their IF alone, to find the best journals within your field you should find the most relevant categories and see which journals rank highest within those.

For more information on the Journal Citation Reports (and Impact Factors, naturally), check out Clarivate’s website here.

Publishing Roles

When following your manuscript through from submission to acceptance, there are many different people and several different teams with whom you will come into contact. This can be confusing, to say the least!

So just who does what at each stage and, more importantly, who on earth are you supposed to go to if you have a question?!

 

The Managing Editor

That’s us – hello!

Sometimes referred to as an “Editorial Assistant” or “Journal Administrator”, the Managing Editor oversees the smooth running of the peer-review process. Our expertise is in the peer-review process itself, rather than the subject matter of the journal; we are the submission system’s “super users”, if you like. We keep an eye on everything to make sure that peer review runs smoothly and chase up anybody who needs it – authors, reviewers, even the editors sometimes! – allowing the academic editors to focus on the research.

You will hear from us every time you need to do something, e.g., make some corrections, submit a form, or remember that you’ve got a revision deadline coming up…

The Managing Editor is your main point of contact for the journal during peer review, so anytime you have a question, it’s us you should email. Even if we’re not able to help you personally, we will know who to direct your query to.

 

The Editor-in-Chief

The Editor-in-Chief (EiC) is, as you would expect, the person in charge of the journal. He or she will be an expert with a broad overview of the journal’s field and will decide what content goes into the journal, how the peer-review process is run, and, to an extent, how the published content appears. How hands-on the EiC is differs from journal to journal and from EiC to EiC, but generally they will be the person making the final decision based on the recommendations of the reviewers and Associate Editors.

For most journals, it is best to get in touch with the Managing Editor and ask them to pass your comment or query on to the Editor-in-Chief, rather than contacting the EiC directly.

 

The Associate Editors

Mid-to-large journals tend to have a team of editors, rather than just one who deals with every submission personally.

There are many, many names for Associate Editors (on some journals they are even known as “Managing Editors”, just to confuse everybody) but they are the academic experts who aid the EiC by giving him or her their expert opinion and selecting reviewers for articles within their specialism.

A good editorial team will have all of the niche subjects within the journal’s scope covered between them, so that every manuscript submitted will have an expert eye cast over it, even if it’s slightly outside the EiC’s personal specialism.

How much the Associate Editors are able to assist with enquiries again varies, so the Managing Editor should still be your first port of call.