vol. 22 no. 4, December, 2017


Proceedings of RAILS - Research Applications, Information and Library Studies, 2016: School of Information Management, Victoria University of Wellington, New Zealand, 6-8 December, 2016.


After Beall’s ‘List of predatory publishers’: problems with the list and paths forward


Stuart Yeates


Introduction. Jeffrey Beall’s now-defunct List of Predatory Publishers and the American Library Association’s Freedom to Read Statement are introduced. Uses of Beall’s List are examined in the context of the ALA Statement and academic libraries. Other issues with the List are examined and alternatives are reviewed.
Method. Tensions between the List and the Statement were identified primarily by close reading and through relating these to the role of academics, as readers and writers of academic journals and as faculty decision makers.
Analysis. Beall’s descriptions of journals are analysed in the light of his claimed goals and criteria.
Results. ‘Predatory’ is found to be a ‘subversive or dangerous’ label under Proposition 5 of the ALA Statement. Tension is found to exist between Beall’s List and the statement when used anywhere in faculty-run academic institutions. A number of approaches to avoid these tensions are discussed, including other approaches to determine the quality of an open access journal.
Conclusion. Malicious academic publishers undoubtedly exist, but Beall’s List is not an ethical solution to the problem of their existence. A replacement should be built from the ground up, rather than starting from the List. A possible checklist and associated metric are provided.

Introduction

The transition in scholarly dissemination from a traditional print-based peer-review context to an all-digital open access peer-review context has been disruptive on a number of levels. New publishers have emerged with stables of journals of largely unknown quality, some of whom appear to have dubious business practices. Existing publishers have hugely expanded their offerings and output or completely changed business models. This has led to increasing interest in measures of the quality of peer review, such as citation counts, journal lists and so forth. Using the lens of the ALA’s Freedom to Read Statement, I examine the ethics of a particular measure of the quality of peer review, known as Beall’s List. (Note: this is a revised version of a paper presented at RAILS 2016 (6-8 December 2016), which has been updated to reflect recent events.)

Publishers with questionable business practices

One result of the pressure to publish in higher education and of the large number of institutions attempting to raise their research profiles is a large pool of academics (or would-be academics) searching for journals to publish in. A wide variety of new or altered journals service this demand, ranging from well-established academic journals splitting into multiple series or expanding their throughput of articles, to entirely new journals and journal publishers. A common trend is pure-digital journals using low-cost cloud-based Internet hosting and running free-to-use software such as Open Journal Systems or WordPress.

Many of these new journals undoubtedly do (or make best-effort attempts to do) all the things that librarians and academics expect academic journals to do, including connecting with a community of peers, conducting peer review within that community, copy editing, publishing of accepted papers, promoting papers to indexing and search services, and dealing in a transparent manner with criticism (of articles or the journal as a whole) as it arises.

However, not all journals are what they appear to be, raising concerns about the quality of some, including whether they are scholarly and ethical, or merely commercial. At the extreme end of potentially unethical behaviour are journals which are little more than a new breed of vanity press: they take the author’s article processing charges (APCs) and publish the paper without any pretence of reading it or evaluating its quality through peer review, and offer no meaningful response to criticism (Berger and Cirasella, 2015). In between lie journals with a range of issues, including plagiarism, limited technical competence with publishing, no apparent peer group from which to draw reviewers, publishing in English as a second language, weak peer reviewing, and financial problems.

Beall’s list

Jeffrey Beall, a Denver-based librarian, has extensively studied open access journals that claim to do peer review but on closer examination do not. The term ‘predatory publishers’ was coined by Beall (Bloudoff-Indelicato, 2015), and it and related terms have been widely used in the academic world (Cartlidge, 2017; University of Tennessee Office of Research & Engagement, 2017), the popular press (Carey, 2016; Mascarenhas, 2017) and by government sources (Federal Trade Commission, 2016). Until early 2017, Beall published regular reviews of journals on his blog and compiled a comprehensive list, known widely as Beall’s (2016h) List of Predatory Publishers and in full as Potential, Possible, or Probable Predatory Scholarly Open-Access Publishers. He also listed journals: Potential, Possible, or Probable Predatory Scholarly Open-Access Journals, Hijacked Journals, and Misleading Metrics. Through the comments section of his blog Beall accepted feedback and nominations of further journals to review. Beall has also published extensively in traditional journals on the topic, and in so doing almost single-handedly brought this issue to the forefront of thinking in academic publishing and academic librarianship. The List and his reviews are very widely referred to, with Google giving many thousands of incoming links.

Beall (2015a) published a list of criteria for assessing open access journals, based on two documents by the Committee on Publication Ethics (COPE; n.d.a, n.d.b): the Code of Conduct for Journal Publishers and the Principles of Transparency and Best Practice in Scholarly Publishing. No direct mapping of individual criteria to specific parts of the code or principles was given by Beall.

Beall’s List and his work in general have generated significant negative feedback from the publishers of some of the journals he has reviewed; there is at least one Website whose only purpose appears to be to attack him (Anonymous, 2017) and vitriol has been posted to the comments section of his Website. A number of people have criticised Beall for racism (e.g., Houghton, 2017; Segev, 2016; Velterop, 2015), an issue that has been (Peterson, 1996) and continues to be a problem in librarianship more generally (Hall, 2012). First-hand reports suggest that Beall was open to arguments for removing journals from the list (several academics have mentioned their experiences of this to me since I presented this paper in December 2016) as well as adding new titles.

Beall’s List has been used to quality-check other lists of journals; for example, the ABDC Journal Quality List, used for national research quality evaluations in Australia, removed a number of journals in 2016 based solely on their presence on Beall’s List (Australian Business Deans Council, 2016).

Is there a tension?

Proposition 5 of the ALA Statement is: ‘It is not in the public interest to force a reader to accept the prejudgment of a label characterizing any expression or its author as subversive or dangerous’ [emphasis added]. This proposition lies at the heart of the current analysis of Beall’s List.

Beall’s List is only in tension with the Statement if three things hold true: (a) some of those who use, or are exposed to, the List are readers; (b) the List labels expressions or authors; and (c) ‘predatory’ is a label that characterises something as subversive or dangerous. I examine each of these separately.

  1. Straightforward uses for the List include collection development purposes (Grabowshy, 2015) or informing academics’ choice of publication venue, both common activities in academic librarianship in which there is no obvious reader. But these activities are commonly performed by or under the scrutiny of academics, who are by definition both readers and writers of the academic output. There are indeed no functions of an academic library which are likely to be safe from the scrutiny of academics and thus no space where scrutinising academics (i.e., readers) might not see the label applied.
  2. The primary list is a list of publishers and reviews of publishers; however, closer examination reveals that publishers are listed as a convenient shorthand for listing all of the journals published by the publisher. For example:

    Its 23 journals all boast fake impact factors, a tactic designed to attract article submissions from researchers needing to publish in impact factor journals. … Some of the journals appear to also be published by the predatory publisher Associated Asia Research Foundation, also on my list. So the owners are recycling some journals from another predatory publisher they own. (Beall, 2016d)

    Further, sometimes the actions of individual authors are examined when deciding whether to list publishers or journals:

    The scandal-plagued, Switzerland-based publisher Frontiers has just published a chemtrails conspiracy theory paper by the same author whose earlier article was published and then retracted in an MDPI journal. ... In August, 2015, I reported that J. Marvin Herndon had published a conspiracy theory paper in the MDPI journal International Journal of Environmental Research and Public Health. After my blog post was published, MDPI quickly retracted the article. (Beall, 2016c)

    Thus, it seems clear that the List de facto lists both expressions and authors, directly or indirectly.

  3. Predatory does not necessarily connote something ‘subversive or dangerous’. The Oxford English Dictionary gives four meanings: ‘involving plunder, pillage, or ruthless exploitation’; ‘harmful to health’; ‘relating to predatory animals’; and ‘of business or financial practices: unfairly competitive or exploitative’. The Merriam-Webster Dictionary covers very similar ground, giving ‘inclined or intended to injure or exploit others for personal gain or profit’. Since there is very little on Beall’s Website about predatory animals, and the other definitions all describe something subversive or dangerous, Beall’s use of the predatory label does characterise something as subversive or dangerous.

Combined, these three things mean there is indeed likely to be tension between the List and the ALA statement when used in institutions where actively-publishing academic authors wield significant managerial influence.

Other issues with the List

There are a number of other issues with the List, some of which have been touched on elsewhere, but are summarised here for the sake of completeness:

(1) False positives

Of the criteria Beall uses, one eclipses all others: ‘Evidence exists showing that the publisher does not really conduct a bona fide peer review’ (Beall, 2015a). The other criteria seem largely irrelevant for potential users of the List, except as indirect indicators of a lack of peer review. The other criteria do, however, produce a very large number of false positives, incorrectly suggesting, for example, that all manner of legitimate journals are questionable. These include all student-run journals (e.g., Harvard Educational Review, Cornell International Affairs Review), all in-house journals (e.g., IBM Journal of Research and Development), all journals whose editors use gmail.com addresses, and so forth. Despite not meeting some ideal of what an academic journal might be, these classes of journals play significant and valuable roles in education, in research and in the dissemination of research. While none of these example journals are on the List, they certainly meet the criteria. About the only possible virtue of discounting such journals is to eliminate discipline-specific publishing forms in pursuit of academic uniformity and excellence, which has been shown elsewhere to be ultimately a fruitless exercise (Moore, Cameron, Eve, O’Donnell and Pattinson, 2017).

(2) Presumption of innocence

The List condemns all journals that have flaws, and makes no attempt to separate problems resulting from deliberate deception from those resulting from incompetence (in the language of communication, in peer review, in the business aspects of publishing, in the technical aspects of online publishing, or some combination of these). While in some cases the evidence of deception is strong, the presumption of innocence is a very important legal and ethical principle, too important to be set aside in such a high-handed fashion.

(3) Expedited review

Beall uses expedited review (where journals offer fast peer review turnaround) as an indicator that a journal doesn’t actually conduct peer review as it should (Beall, 2013a, 2013b). The Committee on Publication Ethics documents don’t mention this, but do encourage publishers to ‘Publish content on a timely basis’, which would seem to encourage fast turnaround rather than the reverse. On this point Beall appears to be in direct contradiction with the documents on which his stated criteria are based.

(4) International dialects of English

The Websites of a few of the journals on the List, and many of their papers, appear to be written in international dialects of English. This presumably reflects a population of new academics in (or from) non-Western countries. Beall has used poor or broken English as an indicator of low quality several times in his reviews (Beall, 2012a, 2012b) with no apparent consideration of whether the usage is acceptable in the national dialect in use. The only relevant passage in the two Committee on Publication Ethics documents that Beall uses as authority refers to all text on a journal's Website being written with due care to meet professional standards; there is no requirement, explicit or implicit, that Websites be in any particular international dialect of English.

(5) Languages other than English

Items primarily in languages other than English such as Arab Impact Factor (wholly in Arabic) and Index Copernicus (primarily in Polish, with some English and Russian) feature on the Lists. Beall shows no indication of speaking the primary languages of these items and no sign of consulting a suitably-qualified speaker of the relevant language, raising serious questions about his ability to evaluate them.

(6) Locality reputation

The reputation of the countries and cities in which journals are based is not among the criteria, but Beall not infrequently makes observations such as:

It’s Thanksgiving Day in the United States, a fitting time to examine a selection of predatory publishers from Turkey. Higher education in Turkey is currently in crisis, with numerous academics fired from their posts and institutions ordered closed. Academic integrity has been declining in Turkey for years, with plagiarism (including many plagiarized dissertations) and the widespread use of predatory journals for academic advancement and promotions common. (Beall, 2016g)
Hyderabad, India is one of the most corrupt cities on earth, I think. It is home to countless predatory open-access publishers and conference organizers, and new, open-access publishing companies and brands are being created there every day. All institutions of higher education, all funders, governments, and researchers should be especially wary of any business based in Hyderabad. The tacit rule of thumb of Hyderabad-based businesses is: Use the Internet to generate revenue any way you can. (Beall, 2016f)
When it launched, MedCrave reported its headquarters location as Bartlesville, Oklahoma. Now it claims Edmond, Oklahoma is its home. This is a lie, as the publisher is really run out of Hyderabad, India, the home of many corrupt online businesses, including predatory publishers. (Beall, 2016e)

This is not the language of an impartial evaluation and is completely unrelated to any of the given criteria. Predominantly white (or Caucasian) regions with well-established academic publishing issues appear to be spared this broad-brush characterisation when their journals are evaluated. For example, there are no such observations in any of the coverage of journals from Russia or the former Soviet Union (Beall, 2015b, 2016a, 2016b), despite serious issues in higher education publishing there with plagiarism (documented in some depth by the Russian-language Dissernet) and copyright infringement (the region being home to the flagship copyright infringer Sci-Hub (Kemsley, 2017; Scheman, 2017)). (As of November 2017, Sci-Hub was not active following an American Chemical Society lawsuit (Chawla, 2017).)

Taken together, the last three issues would seem to be easy to characterise as racism, xenophobia or colonialism, depending upon the lens one uses.

Beall’s response

Beall (2017) has acknowledged criticism of his work and summarises it like this:

Over the five years I tracked and listed predatory publishers and journals, those who attacked me the most were other academic librarians. The attacks were often personal and unrelated to the ideas I was sharing or to the discoveries I was making about predatory publishers.

I have attempted to avoid personal attacks, I have acknowledged Beall’s very large role in bringing the unethical behaviour of some publishers to a wide audience, and I have tried to keep my analysis on the text, if not the ideas, that Beall has published.

The way forward

In order to measure something reliably, we need to know what it is, exactly, that we’re trying to measure. Now that Beall has done an excellent job of bringing questionable academic publishing practices to the fore, it makes sense to attempt to define how we might measure them.

The Committee on Publication Ethics documents that Beall relied on for his criteria come from a publishing background and are effectively a checklist of policies and statements that a publisher can have on its Website; for example:

15. Archiving: A journal’s plan for electronic backup and preservation of access to the journal content (for example, access to main articles via CLOCKSS or PubMedCentral) in the event a journal is no longer published shall be clearly indicated. (Committee on Publication Ethics, n.d.b)

A better approach would use a checklist that required verification, perhaps in the fashion of the Think. Check. Submit. initiative. For example, a better wording might read:

Archiving: A journal’s plan for electronic backup and preservation of access to the journal content in the event a journal is no longer published shall be indicated, and easily verifiable by third parties (for example the publisher or journal’s presence on the CLOCKSS participating publishers list, PubMedCentral’s journal list or the Internet Archive’s live repository).

While it would require no more work for publishers, this ‘trust, but verify’ approach would allow a third-party metric to be built with confidence on the checklist and independently validated. The Committee on Publication Ethics (n.d.a, n.d.b) documents also contain an unfortunate number of technology-specific references which are likely to cause problems for technically innovative journals (‘...both HTML and PDFs’; the implicit assumption that journals have a single Website, etc.), but these are relatively straightforward to fix.
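
As a rough illustration of how such third-party verification might be automated, the following Python sketch checks whether a journal's Website has been captured by the Internet Archive, using the public Wayback Machine availability API. The example journal URL is hypothetical, and a fuller checker would also consult the CLOCKSS participating publishers list and PubMedCentral's journal list; this is a minimal sketch, not a proposed implementation.

    # Minimal sketch: third-party verification of a journal's archiving claim.
    # Only the Internet Archive's public Wayback Machine availability API is
    # queried; the journal URL below is a placeholder used for illustration.
    import json
    import urllib.parse
    import urllib.request

    WAYBACK_API = 'https://archive.org/wayback/available?url='

    def archiving_verifiable(journal_url: str) -> bool:
        """Return True if the Wayback Machine reports at least one snapshot."""
        request_url = WAYBACK_API + urllib.parse.quote(journal_url, safe='')
        with urllib.request.urlopen(request_url) as response:
            data = json.load(response)
        snapshot = data.get('archived_snapshots', {}).get('closest')
        return bool(snapshot and snapshot.get('available'))

    if __name__ == '__main__':
        print(archiving_verifiable('https://example-journal.org/'))  # hypothetical URL

Presence in the Wayback Machine is only one of several acceptable forms of evidence; the point is that each checklist item can be phrased so that a script, rather than a human judgement, settles it.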

Considering other interests

A more serious issue with the Committee on Publication Ethics approach is that it is grounded in the publisher’s point of view and ignores other groups with interests (or potential interests) in peer review and publication quality. These include authors, reviewers, funding agencies, university chancellors, librarians, and readers, who do not appear to have any input into the current documents. Issues these groups might raise include: inclusion of COUNTER or Standardized Usage Statistics Harvesting Initiative (SUSHI) statistics reporting; standardised reporting of a range of journal metrics; open access mandates; ethics approval statements; a stronger archiving position; standardised article withdrawal notification; improved metadata (at the journal or article level); authority control (maybe Open Researcher and Contributor ID (ORCID) or the Virtual International Authority File (VIAF)); improved transparency; author, reviewer, editor and reader demographic reporting; and affirmative action reporting.

Building a metric

Once a suitable checklist has been compiled, the next step is building a metric on the results. There are many ways to build a metric, depending on the checklist, but a list of priorities might look like this (a minimal illustrative sketch follows the list):

  1. Avoiding pejorative naming.
  2. Explicit lists of things to be encouraged or discouraged and clear mapping from these to the metric.
  3. A comprehensive authority scheme to identify parties (publishers, authors, reviewers, editors, etc.) such as The Virtual International Authority File (VIAF) or Open Researcher and Contributor ID (ORCID).
  4. A scalable, transparent, mechanistic process rather than an opaque, human-based, interpretive process.
  5. An open world assumption rather than a closed world assumption to better handle missing and ambiguous data.
  6. A ranking or scale (maybe deciles or ventiles) to avoid bright-line good/bad distinctions, both because they encourage arguments of where the line should be and because they discourage further improvement once crossed.
  7. Design input from game-theory experts to reduce the chances of parties exploiting the metric for unforeseen commercial or personal gain.
  8. Design input from a broad range of stakeholders in the academic publishing industry.
  9. Versioning to enable updates should flaws be found and rectified.
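
As a minimal sketch of what such a metric could look like, the following Python fragment scores a journal against a small set of invented criteria. The criteria names, weights and decile banding are illustrative assumptions only, not a proposed standard; unknown results are treated as unknown rather than as failures, one simple way of honouring an open-world assumption.

    # Illustrative sketch of a transparent, versioned journal-quality metric.
    # Criteria names, weights and the decile banding are assumptions made for
    # this example.
    from typing import Mapping, Optional

    METRIC_VERSION = '0.1'  # priority 9: versioning

    # Priority 2: an explicit list of encouraged properties, mapped to weights.
    CRITERIA_WEIGHTS = {
        'peer_review_verifiable': 3.0,        # e.g., via ORCID-linked review records
        'archiving_verifiable': 1.0,          # e.g., via the Wayback Machine check above
        'fees_stated_publicly': 1.0,
        'editor_identities_resolvable': 2.0,  # priority 3: authority control (ORCID/VIAF)
    }

    def score(checks: Mapping[str, Optional[bool]]) -> int:
        """Map per-criterion results to a decile (1-10).

        `checks` maps a criterion name to True (verified present), False
        (verified absent) or None (unknown). Unknowns are excluded from the
        denominator (priority 5), and the result is banded into deciles
        rather than split by a good/bad line (priority 6).
        """
        known = {name: result for name, result in checks.items()
                 if result is not None and name in CRITERIA_WEIGHTS}
        if not known:
            return 1  # nothing verifiable either way: lowest band, not zero
        earned = sum(CRITERIA_WEIGHTS[name] for name, result in known.items() if result)
        possible = sum(CRITERIA_WEIGHTS[name] for name in known)
        return max(1, min(10, round(10 * earned / possible)))

    if __name__ == '__main__':
        example = {
            'peer_review_verifiable': True,
            'archiving_verifiable': None,       # unknown: neither credited nor penalised
            'fees_stated_publicly': True,
            'editor_identities_resolvable': False,
        }
        print(METRIC_VERSION, score(example))   # prints: 0.1 7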

An interesting possibility is raised by Publons, which is a partially-de-anonymised database of peer reviewers built on the ORCID authority control scheme. Marketed as a technique for reviewers to get credit, such a system could also be used to verify claims made by journals that peer review is actually happening.

A second potential information source is natural language processing. Automatic detection of national dialects of English (or, for that matter, other languages) is relatively straightforward (Teahan and Harper, 2003), meaning that journals which claim to be publishing using (for example) the Publication Manual of the American Psychological Association (APA), and thus American English, but actually publish mainly in some other English dialect could be detected. Automatically checking the veracity of as many claims from as many parties as possible would seem to reduce the ability to mislead.
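
As a very rough sketch of how such a check might work, the following Python fragment classifies a text as closer to one dialect sample or another using character trigram frequencies. It is a crude approximation loosely inspired by, but much simpler than, the compression-based models Teahan and Harper describe, and the tiny training snippets are placeholders standing in for real corpora.

    # Crude character-trigram sketch of dialect identification. The model under
    # which the text has the higher log-probability 'fits' it better, loosely
    # approximating a compression-based comparison. The training snippets are
    # placeholders; a real system would use large labelled corpora.
    import math
    from collections import Counter

    def trigram_model(text: str) -> Counter:
        text = text.lower()
        return Counter(text[i:i + 3] for i in range(len(text) - 2))

    def log_prob(text: str, model: Counter) -> float:
        """Sum of smoothed log-probabilities of the text's trigrams under `model`."""
        total = sum(model.values())
        vocab = len(model) + 1
        text = text.lower()
        return sum(math.log((model[text[i:i + 3]] + 1) / (total + vocab))
                   for i in range(len(text) - 2))

    def closer_dialect(text: str, samples: dict) -> str:
        """Return the label of the sample whose trigram model best fits `text`."""
        models = {label: trigram_model(sample) for label, sample in samples.items()}
        return max(models, key=lambda label: log_prob(text, models[label]))

    if __name__ == '__main__':
        samples = {  # placeholder snippets, not real training data
            'American English': "color of the center of the organization's program",
            'British English': "colour of the centre of the organisation's programme",
        }
        print(closer_dialect('the colour and the behaviour of the centre', samples))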

Conclusion

While there are undoubtedly publishers (and journals and conferences) that have dubious business and peer review practices, there is clearly a tension between the ALA’s Freedom to Read Statement and Beall’s List of Predatory Publishers. This is particularly obvious in academic contexts with users who are simultaneously readers and policy-setters or influencers, as is the case in most academic institutions. There are additional problems with the List which probably mean that any new metric for evaluating transparency and peer review should be built from the ground up, rather than being a new version of Beall’s List.

Note

This paper is a version of a presentation given at RAILS: Research Applications, Information and Library Studies, in December 2016. In January 2017, Beall’s List and associated pages were removed from the Internet (Chawla, 2017; Straumsheim, 2017), with Beall (2017) saying ‘facing intense pressure from my employer, the University of Colorado Denver, and fearing for my job, I shut down the blog and removed all its content from the blog platform.’ This paper therefore blends the version presented in December 2016 with subsequent changes. References are to the Internet Archive versions of the List’s pages.

About the author

Stuart Yeates is a Library Technology Specialist at Victoria University of Wellington. He maintains the library's institutional repository and peer-reviewed journal platforms. He has a PhD in computer science from the digital library research group at the University of Waikato and has made more than fifty thousand edits on Wikipedia. He can be contacted at syeates@gmail.com (ORCID: orcid.org/0000-0003-1809-1062).

Disclaimer

As part of his job, the author does technical maintenance for several journals running on the OJS open access journal hosting platform. To the best of his knowledge these have never come to the attention of Beall or featured on his List.

References


How to cite this paper

Yeates, S. (2017). After Beall’s ‘List of predatory publishers’: problems with the list and paths forward. In Proceedings of RAILS - Research Applications, Information and Library Studies, 2016, School of Information Management, Victoria University of Wellington, New Zealand, 6-8 December, 2016. Information Research, 22(4), paper rails1611. Retrieved from http://InformationR.net/ir/22-4/rails/rails1611.html (Archived by WebCite® at http://www.webcitation.org/6vOFCewk4)
