You may be familiar with the terms ‘predatory journals’ or ‘predatory publishing’, or may have received exuberant emails soliciting articles with a sense of keen urgency. In this section, we address this phenomenon, explain why it is a problem, and explain why you should avoid submitting to inappropriate journals.
An increasing body of research and grey literature aims to provide reliable methods for identifying problematic journals, most famously the controversial, and now closed, Beall’s List. More recent advances include a scoping review published in 2017 by Kelly Cobey with members of the Centre for Journalology, and in early 2019, a thorough discussion of the phenomenon on The Open Scholarship Initiative website by Rick Anderson. The Think.Check.Submit initiative has also been very successful in raising awareness and providing an introductory guide to identifying problematic journals.
But as tools to spot predatory journals become more advanced, so too do the presentation and behaviour of these journals become more sophisticated.
Some common features of deceptive journals may not be enough on their own to identify a journal as problematic. For example, a poorly designed website with low-quality images does not look good, but it is not a problem by itself. Most importantly, a journal charging a fee to authors is not, in itself, an indication of predatory behaviour. Therefore, we must look at as much information about the journal as possible to decide whether it is legitimate.
We aim to help you understand this phenomenon and ensure you submit to the most suitable journals.
There are several different terms used to describe these journals and publishing operations, and different ways in which the key problems with them can manifest:
The defining ‘predatory’ features of these journals are that they market themselves with false accolades and credentials, attempting to present themselves as prestigious journals. These journals are designed to take advantage of the pressure on researchers, typically young doctoral students, to publish articles towards PhD and postgraduate assessments.
The benefit to these journals is, of course, financial. They abuse the Article Processing Charge (APC)-based open access model, publishing high volumes of papers without editorial scrutiny at low cost to authors, and meet the demand by providing an easily accessible venue for publication that guarantees acceptance.
The very real problem with this behaviour is that these operations undermine the integrity of open access publishing as a whole, leaving many researchers, new and experienced, under the impression that all open access journals publish low-quality, unreliable, or unethical research with a lack of editorial and infrastructural rigour. This research is then available in the academic ecosystem, where it can be cited and work its way through subsequent articles to inform further research. As the phenomenon of such deceptive journals has persisted for some years now, the effects of this situation are being felt, and it is currently the topic of much debate among academic and publishing communities.
This situation also gives rise to an important consideration we must extend to new journals founded on genuine scientific goals, particularly those from financially and technically developing regions and organisations. The publishing industry, and authors, must be careful not to assume a journal is maliciously deceptive when it may simply be inexperienced or lacking resources and infrastructure.
Below are some of the most common themes of deception and misinformation that characterise these journals. In isolation, any of these individual features may not be sufficient cause for concern, but several together should raise warning flags and prompt you to investigate more closely.
Claims of a thorough peer review process are made, but there is no evidence that any peer review is carried out, nor any evidence of selectivity or screening based on editorial or quality checks: all submissions appear to be accepted.
As well as advertising peer review, these journals often offer very rapid peer review times of under a week, or a ‘rapid peer review’ service where an additional payment secures a faster decision.
Similar to the co-opting of individuals, predatory journals may also state or imply they are affiliated with prestigious institutions and organisations, without their knowledge or consent.
Predatory journals often attempt to give the impression they are indexed in the key journal indexes Web of Science and Scopus, through outright false claims or by deceptive means.
In addition to predatory journals, there is a concurrent industry of predatory metrics, indexing databases, and conferences designed to capitalise on the same opportunities as predatory journals.
Predatory databases often adopt names very similar to those of official mainstream services. For example, ‘Index Scientific Journals’ can use the acronym ISI to mimic the real ISI Web of Science. A journal can then claim to be listed in ‘ISI’, referring to the pay-per-listing site rather than the real one (which is free to be indexed in but has strict inclusion criteria).
These databases likely draw their citation metrics from Google Scholar – if they base them on anything at all. Their sources and methods are rarely explained or transparent. Inclusion in these databases is granted in exchange for a fee, with an ‘Impact Factor’ provided for an additional fee, so there is no barrier to inclusion other than payment.
Some journals use these databases to present themselves as being indexed in prestigious, valuable scientific databases.
Other forms of deception and misdirection, or naivety, that these journals exhibit involve using databases inappropriately. Examples include presenting sites such as ResearchGate and Mendeley as indexing databases, or claiming Thomson Reuters Researcher IDs, Scopus Author IDs, and ORCiD accounts in the name of a journal.
The deceptive practices around metrics follow a similar theme to indexing databases. By obtaining metric awards from predatory databases, journals can claim to have high ‘Impact Factors’.
It is important to note that there is only one legitimate ‘Impact Factor’ - the Journal Impact Factor (JIF), awarded to journals indexed in the Web of Science, owned by Clarivate Analytics. Find out more about legitimate journal metrics and databases.
Deceptive journals may also have very high self-citation rates to inflate their Google Scholar citation metrics. There are also journals that succeed in being indexed in Scopus and Web of Science with relatively low citation counts, then display large increases in citations, all driven by self-citation. These patterns are visible in the Scopus SJR database.
The locations or offices of deceptive journals or publishers are quite often not mentioned anywhere on the website. Where an address is given, it is usually in the United States or UK, yet the locations of the Editorial Board members and authors listed on the journal site indicate otherwise. In some cases, where postcodes/zipcodes are provided, searching them in Google Maps can yield surprising or amusing results.
At IFIS, we have developed detailed and strict criteria for assessing the journals we add to our FSTA database, to ensure that you can find the most reputable research suitable for your needs.
Drawing from all the points detailed in this guide, our screening uses a 60-point checklist covering 17 key areas. Investigating a journal can be a significant undertaking, and we spend time carefully investigating and cross-checking journal details so you can trust every title in our database and use them as a benchmark for your own assessments of new journals you may encounter.
Our main checklist of predatory characteristics comprises a list of criteria against each of which we rate a journal as severe, moderate, or acceptable. We decide whether to include each journal based on its individual performance across all of these criteria, taking into account the nature and severity of any criteria on which it performs poorly.
For any journals found to be of borderline status, we also have a list of more finely detailed criteria which we can check to help us decide whether to cover the journal. We use this list only for journals we are unsure about; for most journals, we expect to reach a decision using the criteria on the main checklist alone.