Radical transparency can fix bad behaviour by academic journals

Revealing desk rejection rates, peer review processing times and other useful operational data would do more to correct slipshod journal practices than an ‘author’s bill of rights’, says Jerry Jacobs

November 6, 2022

Terse rejections from journal editors are seldom appreciated, but it is often the vague or wrong-headed reasons provided that most infuriate academics.

The recent opinion piece by Harvey Graff about his frustrating dealings with peer-reviewed periodicals may therefore strike a chord with some researchers. Unresponsive editors, unprofessional reviews and high-handed refusals to offer any reasons for rejection do not reflect well on journals, and his case for an “author’s bill of rights” to combat such problems has its merits.

But his complaints ignore the central role of journal publishers, whose data systems could offer better remedies than a voluntary and unenforceable pledge. Having researched this topic, I know there are cases of journal editors who fail to maintain high professional and ethical standards and are largely unaccountable to authors or anyone else. But it is journal publishers who are the key players in the publication system.

In my own field of sociology, academic publishing has been largely consolidated into four companies – SAGE, Springer, Taylor & Francis and Wiley – which publish two-thirds of the 148 journals listed in Web of Science Journal Citation Reports. But while their manuscript submission software is increasingly sophisticated, the information it produces is not shared as widely as it should be.

Some of this information is currently accessible. Authors who submit their papers for peer review are able to check on the status of the review process, while readers can peruse statistics on views and downloads of published papers. But much more can be done with these data to make the system transparent for prospective authors. 

Most importantly, journal websites should post real-time statistics on decision times and acceptance rates. This type of information is sought out by most authors, and is often of particular interest to graduate students and assistant professors, whose very careers depend on successfully navigating the journal review process in a timely way. Potential delays matter particularly for those who are already marginalised in a variety of ways, perhaps by race, ethnicity, gender, departmental status or location outside the country where the journal is edited.

In other words, authors who are based at major research institutions are likely to have “insider information” on editors’ preferences, review times and other useful information, while those who lack access to information channels are disadvantaged by the current opaque arrangements. Transparency can thus contribute to equity as well as efficiency in the review system. 

Some publishers – such as Taylor & Francis – now regularly post decision time and acceptance rate data prominently on their journals’ webpages, while Springer posts time to first decision. SAGE provides a wide range of submission data for the journals it publishes on behalf of the American Sociological Association as well as for the journal Gender & Society, published by SAGE for Sociologists for Women in Society. At present, approximately one-third of sociology journals have taken at least some initial steps toward making their journal websites more transparent to authors. In other words, we are well past the “proof of concept” stage. There are indications that the vast majority of sociology journals are on their way to providing prospective authors with these key indicators.

Unfortunately, many metrics are subject to some degree of manipulation. For example, a focus on time to first decision might lead some editors to limit the number of papers that are sent out for a full review. But the solution is more data, not less; publishers should provide data on the number of papers that are rejected without review, and time-to-decision data on papers that are sent to reviewers. Other indicators, including time from initial submission to final publication for accepted papers, will help authors see clearly how long the publication cycle is likely to take.

Graff’s concern is editorial malfeasance. The journal managers I have interviewed maintain that there are grievance procedures in place that serve as checks on editors’ actions. However, these grievance mechanisms are not clearly posted on most journal websites. Why not make it easy for authors and reviewers to flag problematic editorial experiences? Doing so would follow the lead of many commercial organisations that routinely ask customers and clients to review the quality of their service. This type of arrangement would give a clearer indication of how common experiences such as Graff’s are and, more importantly, would signal to specific journals and publishers that their practices need to improve.

As I see it, we should move toward a “stakeholder” model of journal websites. The journal submission platforms should not give publishers exclusive ownership of data on the review process.

Rather, input from a variety of stakeholders could be obtained in various ways – conferences, surveys, website-based feedback mechanisms and so on. An author’s “bill of rights” could become part of such a stakeholder system. Any set of principles designed to protect authors will depend on the transparency of the database systems that the publishers currently control.

Will journal transparency improve review times? I am not sure. Peer review varies by discipline, and even by specialty within fields. The supply of reviewers is always limited. My expectation is that the ready availability of comparative data will put pressure on journals that are outliers to move closer to the norm in their field. The case for sharing information with authors, however, is a compelling one that does not depend upon speculation about the long-term evolution of journal practices.

It is difficult to separate the review process from the system of research grants, careers and career recognition. While transparent websites will not resolve all the challenges faced by academic researchers, publishers can nevertheless help enhance the transparency of their websites in ways that will improve the journal review process for authors, editors and reviewers.

Jerry Jacobs is professor of sociology at the University of Pennsylvania.

Reader's comments

Blind review needs abolishing so that reviewers are forced to be professional and know that vile and unfounded comments will require them to put their names to them. The BMJ has non-blind review, which has resulted in authors knowing the name and qualifications of those blocking their papers and has led to reviewers being professional and sticking to real substantive points.