The Australian Historical Association has recently opened a consultation on producing a ranked list of history journals. The ranking would assign each journal a ‘quality’ rating to enable easy quantification. The idea will not be unfamiliar to many academics who remember similar lists designed to aid the Research Assessment Exercises in the UK and Excellence in Research for Australia. It is driven by demands from management in a number of Australian institutions to quantify and measure the research of their staff, with minimal cost in time and effort.
This is a conservative and retrograde step that professional historians have fought hard to resist. It is inevitable that rankings will force scholars to publish in particular outlets; as a result, ranking journals impacts on our academic freedom, our ability to innovate, and the long-term development of our fields of study. The decision of the committee to engage in this process could do irreparable damage to the historical profession.
Damage to the field
Journal rankings lists are innately conservative. Studies have demonstrated that the higher the journal’s ranking, the narrower its disciplinary focus. Not only are inter- and multi-disciplinary outlets generally poorly ranked, but so are journals focused on sub-disciplines that are not central to the disciplinary focus of the ranking committee or which contribute to local, rather than international, scholarship. Given that history is a particularly broad church, the sidelining of sub-disciplines is almost inevitable. Fields that engage in systems of ranking therefore have areas of research with no “high-quality” journals at all. As journal rankings lists are relatively static, they also fail to take into account new fields and emerging areas of research, whose journals have not had time to develop or be ranked.
This is a process that punishes small fields, interdisciplinary work and new research areas. It may cause the field of history to stagnate, as new perspectives from the fringes will not exist to challenge and disrupt core narratives in the field. Voices from the margin have been vital – perhaps particularly in history – to disrupting academic hegemonies and rejuvenating scholarship. Women’s history provides a key example of a once marginal field that transformed the study of history, not only through including the voices of women, but by challenging the boundaries of what mattered, what should be counted, and what shaped our social, economic and political worlds.
Damage to intellectual freedom
Journal quality rankings are never created in a neutral or apolitical environment. They reflect current disciplinary power hierarchies as much as they may reflect quality. Journals are also run by human beings who often bring their own biases about research quality to the table, placing some high-ranking journals off limits for particular types of scholarship.
Given the messy, political and human processes involved in determining journal quality, ranking journals can become an act of intellectual hegemony (as defined by UNESCO) asserting that particular types of knowledge and ways of practising scholarship are more valuable than others. This challenges basic principles of pluralism and tolerance, and it undermines the operation of academic solidarity by divisively fracturing knowledge communities into ‘quality’ and ‘not quality’, restricting the possibilities of new forms of engagement and practice. It acts to exclude scholars working to rethink the nature of knowledge and the bases of academic practices, and in this we can point in particular to the work of indigenous scholars who seek to challenge the cultural hegemony of Western knowledge systems. The academy provides a key protection and support for such scholarship; yet journal ranking systems destroy such exciting and democratically vital possibilities for knowledge practices.
Damage to new ways of publishing
The focus of ranking lists on well-established journals also undermines scholars working to rethink the nature of academic publishing, particularly open access. There is increasing evidence that open access publications are more likely to be read, downloaded and cited. Yet most humanities and many social science journals are not open access by default, and they require significant payments from authors to make research openly available. There is also increasing concern from many scholars about the high cost of academic publishing, particularly of books. As a result, a number of academics are exploring alternative – but nonetheless rigorous and peer-reviewed – approaches to mainstream academic publishing. These decisions are sometimes underpinned by important commitments to open access research, to spreading knowledge beyond the academy, and to the democratisation of knowledge.
Publication ranking lists make this sort of social and political engagement impossible. This is anti-innovation, because it prevents scholars from engaging in this important academic movement with its potential to revolutionise how we make and enable access to knowledge.
Damage to those of us resisting the quantification of knowledge
I understand that there is a demand from institutions to quantify and evaluate the quality of our research. However, it is not clear that a journal ranking system is an effective or impartial way to achieve that goal. Indeed, I would argue that when ranking lists are used within universities they restrict our academic freedom. The principle of academic freedom is widely recognised as vital to the functioning of the modern university, and necessary to the production and democratisation of knowledge. Ranking lists that are tied to promotion, funding or other rewards undermine the basic principles outlined by UNESCO that academics “should be free to publish the results of research and scholarship in books, journals and databases of their own choice” (see especially paragraphs 12, 20 and 29 of the UNESCO resolution).
Rather than supporting the creation of journal ranking lists and contributing to the reduction of our academic freedoms, the AHA and similar historical bodies should make a clear statement explaining how ranking systems damage the field. Nothing less than the vitality, innovation and future of historical research is at stake.
Really excellent article. I think it’s interesting how contradictory research priorities are in the modern university. On the one hand, we spend a lot of time talking about impact and public engagement; on the other, we’re told that we’re worthless if we’re not publishing in often inaccessible journals.
Agree that it is a great article, and needs to be widely disseminated. Applies equally well to sociology and anthropology.
So strange that fifteen years ago in Latin American studies everyone was talking about the importance of ‘decentering knowledge’ away from the US and the Global North. What has happened? The exact opposite.
It is important to ask: Why is that?
A very helpful overview of an increasingly difficult situation, Katie. The likely outcome is censorship of academic ideas based on managerial priorities.
The advent of the metric-driven world, derived from our neoliberal bent toward measuring everything through KPIs, has been seen as the answer to driving quality for some four decades now – welcome to the club, history. However, with the positive intent comes a restrictive limiting of innovation. For how does an upstart publisher that may well represent advancement and opportunity establish a foothold with a journal that focuses solely on an unrepresented sub-field – say, ‘Historical Leadership’? Such a journal would never stand to gain acceptance, for the incentive is lost in quality-driven metrics predetermined by some hierarchical entity or individual who arbitrarily assigns a value that excludes such periodicals as not being of scholarly standard.
This holds true for book publishers too. The movement toward this discriminatory practice – bolstering status without regard for representation – will only serve to stagnate the field and limit choices, as unranked journals and book publishers fall by the wayside for the simple fact that they will not be targeted. If there is a need to advocate assigning a value to journals, then let it be such that all journals are ranked and can freely move up and down, akin to the English Premier League: upstarts could be assigned a ranking of 1, with the potential to reach an elite 5 representing the select few. The only way forward is to advocate for equality by affording everyone a rank, rather than excluding sub-fields and limiting future innovation for new periodicals. Otherwise the resulting consequence will be fewer journals, as those that are not ranked will suffer a demise, for few will hold them in regard given our predilection for only those that are ‘fit-for-purpose’.