"Frankfurt Book Fair 2024"
Blog
Blogues

Scorecards as a Method to Tackle Submission Overload


Information is easy to think of all at once, as though it were a single fluid somewhere on the internet. But when we start thinking about its materiality, we are forced to consider how it is processed in discrete quantities through multiple nodes. For publishing specifically, a feature that is simultaneously obvious and somehow under-appreciated is that the massive amount of academic output we make use of depends on the labour of actual editors. They have to sift through submissions and make calls: whether to reject a paper outright, whom to request reviews from, how to respond to the reviews received, and whether the final judgement should be rejection, acceptance, or a recommendation to resubmit.

This dependence on human editors with limited time means they act as gatekeepers, deciding which manuscripts get the green light and which remain locked away in private drawers. One academic philosopher calculates that even on the conservative assumption of a steady 10,000 papers submitted every year, this dwarfs the roughly 2,000 publication slots available. That leaves 8,000 papers unaccepted in the first year, which scholars try to publish the next year too, so 18,000 submissions compete for 2,000 slots. Then 26,000, then 34,000. A staggering number of submissions will have to be dealt with.
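To make the arithmetic concrete, here is a minimal sketch in Python of how the pool compounds, taking the 10,000-and-2,000 figures above as given:

```python
# A hypothetical field with 10,000 new submissions and roughly
# 2,000 publication slots per year.
NEW_SUBMISSIONS = 10_000
SLOTS = 2_000

backlog = 0
for year in range(1, 5):
    pool = backlog + NEW_SUBMISSIONS   # everything competing for slots this year
    backlog = pool - SLOTS             # whatever is not placed rolls over
    print(f"Year {year}: {pool:,} submissions competing for {SLOTS:,} slots")

# Year 1: 10,000 -> Year 2: 18,000 -> Year 3: 26,000 -> Year 4: 34,000
```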

What’s worse, the calculation above assumed a fixed number of new submissions every year, and we know this isn’t true. As we’ve written before, an estimate from Lutz Bornmann and Ruediger Mutz in their 2014 paper “Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references” puts the growth of overall scientific output at 8–9 per cent every year.
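Folding that growth into the same toy model makes the compounding even starker. This is purely illustrative, and assumes the 8–9 per cent growth applies to submissions rather than to published output, taking the midpoint of 8.5 per cent:

```python
# Same toy model, but with new submissions growing 8.5% a year (an assumption).
new_submissions = 10_000.0
SLOTS = 2_000
GROWTH = 0.085

backlog = 0.0
for year in range(1, 5):
    pool = backlog + new_submissions
    backlog = pool - SLOTS
    new_submissions *= 1 + GROWTH
    print(f"Year {year}: {round(pool):,} competing submissions")
```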

Editors cannot look at more than one submission at a time, no matter how much they wish they could. Delays are to be expected, but if more submissions arrive during the delay itself, the backlog hardly goes away. I’m sure editors use a number of strategies to deal with this problem, but I suspect that a fairly common outcome (intentional or otherwise) is differential attention paid to articles based on whether the editor knows the author or topic, whether the writing style is sophisticated, and so on. In other words, there are already bound to be heuristics and rules of thumb for sifting through the submissions received. This isn’t meant as criticism of editors, but as an acknowledgement that our inability to process large amounts of information simultaneously means we need methods to order information in processable ways. This is a perfect place for introducing AI.

Acknowledging that editors already have a variety of preferences means seeing that these preferences are quite likely to differ systematically across disciplines and idiosyncratically with personal taste. The system offered to score submissions would therefore not simply score every paper against a single pre-set metric; it can combine multiple customizable factors, such as the number of the author’s previous submissions, the number of times their previous work has been cited, the relevance of the title and key terms to the discipline, and the similarity of the topics discussed to articles previously published in that particular journal. The specific weight each of these factors gets in the score can also be set, as sketched below.
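Here is a minimal sketch of what such a scorecard could look like. The factor names, normalizations, and weights are illustrative assumptions, not a description of any particular system; the point is only that each journal could define its own factors and their relative importance.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    title: str
    prior_submissions: int   # author's previous submissions to this journal
    prior_citations: int     # citations to the author's previous work
    keyword_overlap: float   # 0-1: match between key terms and the journal's scope
    topic_similarity: float  # 0-1: similarity to articles the journal has published

# Each journal chooses its own weights; these values are placeholders.
WEIGHTS = {
    "prior_submissions": 0.1,
    "prior_citations": 0.2,
    "keyword_overlap": 0.3,
    "topic_similarity": 0.4,
}

def score(submission: Submission) -> float:
    """Combine the factors, each capped to a 0-1 range, into one sorting score."""
    return (
        WEIGHTS["prior_submissions"] * min(submission.prior_submissions / 5, 1.0)
        + WEIGHTS["prior_citations"] * min(submission.prior_citations / 100, 1.0)
        + WEIGHTS["keyword_overlap"] * submission.keyword_overlap
        + WEIGHTS["topic_similarity"] * submission.topic_similarity
    )

print(score(Submission("Manuscript A", 2, 40, 0.7, 0.6)))  # 0.57
```

Capping each factor to a 0–1 range before weighting keeps the weights readable as statements of relative importance, and changing the editorial policy is just a matter of editing the weight table.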

At first glance, this might seem like too coarse-grained a tool, because we can think of all kinds of papers we might like that would be ranked low by some of these metrics. For example, new academics will be at a disadvantage if previous citations are taken into account, work that breaks new ground will be set back because its topics might not match existing trends, and playful titles may lose out to titles that are more to-the-point (consider how the historian Simon Schaffer, for example, has a paper on ship design hilariously titled “Fish and Ships”). These are real and serious concerns.

But there are three reasons I still think scorecards should be adopted anyway. First, as I’ve tried to emphasize, many of these tests are already being applied by editors now. For example, submissions by celebrated academics are treated vastly differently from those by unknown grad students. The system just makes this explicit, so holding it to a higher standard than human editors seems unfair. Second, making the standards explicit can force academics to coordinate publicly on what exactly they will look for in submissions, possibly making the entire process more transparent instead of the black box it so often is. Third, as submissions increase, editors are already going to have to choose where to focus their attention. The question is only whether they look at submissions in order of arrival, at random, or according to some specifiable metric, as the sketch below illustrates.
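Purely as an illustration of that last point, here is a self-contained sketch of the three orderings; the titles, dates, and scores are all made up:

```python
import random

# Three ways an editor's queue could be ordered.
# Each entry is (title, date_received, scorecard_score).
queue = [
    ("Manuscript A", "2024-01-05", 0.42),
    ("Manuscript B", "2024-02-11", 0.87),
    ("Manuscript C", "2024-03-02", 0.65),
]

by_arrival = sorted(queue, key=lambda s: s[1])              # first come, first read
by_chance = random.sample(queue, k=len(queue))              # no ordering at all
by_score = sorted(queue, key=lambda s: s[2], reverse=True)  # scorecard order

for title, _, s in by_score:
    print(f"{s:.2f}  {title}")
```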

It has to be remembered that this is only a sorting mechanism to decide the order in which articles are read, not a judgement on the quality of the articles themselves. There are still many questions and issues to address, but understood in this way, it looks like a potentially vital tool for dealing with rising submissions and regaining some control.

Gain better visibility and control over all your processes.
Retain control over your content; archive and retrieve at will.
Achieve a 20% cost-saving with our AI-based publishing solution.