On Twitter I floated a partial solution to the problem of predatory journals, which I'll expand on here.
If you’ve been in a field for a couple of years, you’re familiar with the journals that most of its research is published in. If you came across a journal that was new to you, you’d probably scrutinise its content and its editorial board before publishing in it, and you’d probably notice if something were a bit dodgy about it.
But many users of scientific research do not have much familiarity with the journals of particular specialties and their mores. Sadly, this includes some of the administrators who make decisions about the careers and grants of scholars. It also includes many in countries without a long tradition of being fully integrated with international scholarship, who are now trying to rapidly join the community of scholars publishing in English. Journalists, medical professionals, Wikipedia authors, and policymakers may not have the experience to distinguish good journals from illegitimate ones.
Unfortunately, there is no one-stop shop that scholars, administrators, journalists, or policymakers can consult for an indication of how legitimate a journal is. Predatory journals are common, charging researchers hundreds of dollars to publish an article with little to no vetting by reviewers and shoddy publishing services. Their victims may predominantly come from countries trying to jump into international publishing in English for the first time, some of whom receive monetary rewards from their universities for doing so. There are proprietary journal databases, such as Thomson Reuters’ Journal Citation Reports, but they cost money and can take years to index new journals. Jeffrey Beall used to maintain an (arguably biased) free list of predatory journals, but for various reasons including legal ones (see 1, 2) blacklists are probably a bad idea.
What follows is an automated way to create a list of legitimate journals – in other words, a whitelist for people to consult. It can’t be fully automated until the scholarly community institutes a few changes, but these are changes that arguably are needed for other reasons anyway.
Non-predatory, respected journals nearly universally have an editorial board of scholars who have published a significant amount of research in other respected journals. The whitelist would need to establish whether those scholars exist and have published in (other) reputable journals.
Journals have rapidly taken up the ORCID system of unique researcher identifiers, asking authors who submit papers to enter their ORCID iD. They should also do this for their editorial board members – journals should add ORCID iDs to their editorial board lists.
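To make this machine-readable, a journal could publish its board as structured data alongside the human-readable webpage. The format below is purely illustrative (no such standard exists yet); the iD shown is ORCID's own example identifier:

```json
{
  "journal": "Journal of Vision",
  "editorial_board": [
    {
      "name": "A. Editor",
      "role": "Editor-in-Chief",
      "orcid": "0000-0002-1825-0097"
    }
  ]
}
```

A structured listing like this is what would let a whitelist-building organisation harvest boards automatically rather than scraping free-form webpages.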
An organization (such as SHERPA, which maintains the SHERPA/RoMEO list of journals and their open access policies) could then pull the editors’ publication lists from ORCID and compute a score, with a threshold indicating that a substantial proportion of the editors have published in other reputable journals. To bootstrap this, existing whitelists of legitimate journals would be used to check that the journals the editorial board members published in are themselves legitimate.
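A minimal sketch of such a score, under assumed data shapes: each editor is represented by the list of journal names in their publication record (which could in practice be pulled from the public ORCID API, e.g. `GET https://pub.orcid.org/v3.0/{orcid-id}/works`). The threshold of three papers and the fraction-of-board aggregate are arbitrary illustrative choices, not a proposal detail from the post:

```python
def editor_is_established(journals_published_in, whitelist, min_papers=3):
    """An editor counts as established if they have at least
    `min_papers` papers in already-whitelisted journals."""
    hits = sum(1 for j in journals_published_in if j in whitelist)
    return hits >= min_papers

def board_score(board, whitelist, min_papers=3):
    """Fraction of the editorial board that is established (0.0 to 1.0).
    `board` is a list of per-editor publication-journal lists."""
    if not board:
        return 0.0
    established = sum(
        editor_is_established(pubs, whitelist, min_papers) for pubs in board
    )
    return established / len(board)

# Example with made-up data: one established editor, one not.
whitelist = {"Journal of Vision", "Psychological Science"}
board = [
    ["Journal of Vision"] * 5,               # 5 whitelisted papers
    ["Predatory Review of Everything"] * 8,  # 0 whitelisted papers
]
print(board_score(board, whitelist))  # 0.5
```

A whitelist organisation would then admit journals whose score clears a chosen threshold, and newly admitted journals would in turn expand the whitelist used to evaluate others.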
A new index of journal quality
The score could also be used as a new indicator of the esteem of journals – if a journal has only highly cited researchers on its editorial board, it is probably a prestigious journal (science badly needs new indicators of quality, however flawed, to reduce reliance on citation metrics like impact factor). Journals could thus be ranked by the scholarly impact of their editorial board members. This would allow new journals to have prestige immediately, without having to wait the years necessary to establish a strong citation record.
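Such a ranking could be sketched as follows. The use of board members' h-indices, and the median as the aggregate, are illustrative assumptions (any citation-based impact measure derivable from ORCID records would do):

```python
from statistics import median

def board_impact(h_indices):
    """Aggregate impact of a board, here the median h-index of its
    members (a deliberately simple, outlier-robust choice)."""
    return median(h_indices) if h_indices else 0

def rank_journals(journal_boards):
    """`journal_boards` maps journal name -> list of board members'
    h-indices. Returns journal names from highest to lowest impact."""
    return sorted(
        journal_boards,
        key=lambda j: board_impact(journal_boards[j]),
        reverse=True,
    )

# Made-up data: a brand-new journal with a strong board outranks an
# established-but-weak one despite having no citation record of its own.
journals = {
    "New Open Access Journal": [40, 35, 50],
    "Obscure Subscription Journal": [2, 3, 1],
}
print(rank_journals(journals))
# ['New Open Access Journal', 'Obscure Subscription Journal']
```

The point of the example is that the ranking is available on day one of a journal's existence, since it depends only on who the editors are, not on citations to the journal itself.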
Currently, the reliance on impact factor and its long time lag imposes a high barrier to entry, preventing innovative publishers and journals from competing with older journals. This is also a key obstacle to getting editorial boards to decamp from publisher-owned subscription titles and create new open access journals, because a new journal has no citation metrics.
A remaining difficulty is that some predatory journal webpages list researchers on their editorial board who never agreed to be listed. If ORCID added a field for a researcher’s digital-signature public key, and researchers started using digital signatures, then journals could include on their webpage (and even in their article PDFs) a signed message from each editor certifying that they agreed to serve on the editorial board.
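The mechanics of that scheme can be sketched with an off-the-shelf signature algorithm. This uses Ed25519 from the third-party `cryptography` package (`pip install cryptography`); the choice of algorithm, the wording of the statement, and the ORCID profile field for the public key are all assumptions of the sketch (the iD shown is ORCID's example identifier):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The editor generates a key pair once; the public key would be
# published in their (hypothetical) ORCID profile field.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The editor signs a statement of consent, which the journal posts on
# its editorial-board page and could embed in article PDFs.
statement = (
    b"I, 0000-0002-1825-0097, agree to serve on the "
    b"editorial board of Journal X."
)
signature = private_key.sign(statement)

# Anyone can verify the statement against the public key from ORCID;
# verify() raises InvalidSignature if the message or signature was
# tampered with, so a fabricated board listing would fail this check.
try:
    public_key.verify(signature, statement)
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```

A predatory journal could copy a name, but without the editor's private key it could not produce a signature that verifies against the public key in that editor's ORCID record.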