Published quarterly by the University of Borås, Sweden

Vol. 22 No. 1, March 2017



Proceedings of the Ninth International Conference on Conceptions of Library and Information Science, Uppsala, Sweden, June 27-29, 2016

Scientific publications as boundary objects: theorising the intersection of classification and research evaluation

Fredrik Åström, Björn Hammarfelt and Joacim Hansson


Introduction. When using bibliometrics for research evaluation, the classification of research fields is an issue of great importance. The purpose of this paper is to outline a brief theoretical framework for analysing the role of classification in research evaluation practices.
Theory. Taking our point of departure in the concept of ‘boundary objects’, we develop a theoretical framework for analysing how scientific publications negotiate between different social worlds. Moreover, by adding the perspective of large evaluative infrastructures, our study seeks to highlight tensions between local practices and global standards.
Empirical example. One scientific article was analysed in terms of the different ways it can be classified at the author and affiliation level, at the document level, and at the bureaucratic level.
Discussion. Publications are boundary objects residing between social worlds: the context of communication and the context of evaluation. Tensions between social worlds become apparent in infrastructures, which aim to serve the demands both of communication and of evaluation.

Introduction

Research fields in bibliometrics have traditionally been categorised using Web of Science (WoS) subject categories; but over the last decade, work has increasingly been done on developing more organic categorisations (e.g. Colliander, 2015; Klavans and Boyack, 2006; Ruiz-Castillo and Waltman, 2015). There is also a plethora of other classification systems in use, such as the OECD fields of science and technology classification (OECD, 2007), and in a wide variety of contexts: from local publication databases using the department affiliation of the author, to subject panels reviewing research grant proposals. We find variations in the purpose of categorisation as well as the purpose of evaluation, in the levels at which the distinctions are made, and in the principles by which the categories are defined. Are we defining the field affiliation of an individual document? Are we categorising an article by the subject category of the journal it was published in? Are we defining the research field of an author through institutional affiliation?

The classification of publications becomes an increasingly pertinent question when bibliometric measures are used for assessing research output. Research evaluation permeates scholarly and scientific activities today, and research funds are often allocated based on bibliometric indicators of scholarly productivity and impact at all levels: from individual scholars to national research systems (de Rijcke et al., 2016). The main purpose of these evaluative activities is to assign specific values to research outputs, and then take action based on the outcome. However, before value can be assigned, the property being valued has to be identified and defined; this process of classification is at the heart of any evaluative process. Classification is a necessary precondition for performing bibliometric analysis, and ideally the chosen categories should correspond with the research activities being evaluated (Haddow, 2015). In bibliometric research and practice, the need to take differences between research fields into account is increasingly emphasised, to avoid unfair comparisons between areas with wildly differing publication and citation practices. In practice, this is done in activities ranging from the selection of publications from a certain field for publication analyses, to the construction of reference sets for field-normalised citation analyses. In short, we want to be able to retrieve similar documents within the same research field. But to be able to identify documents from the same research field, one question needs to be addressed: how do we define a research field and delineate it from other fields? When financial resources are distributed based on bibliometrics that take field definitions into account, how we determine whether an article presents research belonging to one field or another becomes an important question.
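The arithmetic behind a field-normalised citation score makes this dependence on classification concrete: an article's citations are divided by the mean citations of its reference set, so the choice of field delineation directly determines the score. A minimal sketch in Python, with invented citation counts (actual normalisation typically also conditions on publication year and document type):

```python
# Field-normalised citation score: an article's citation count divided
# by the mean citation count of its reference set, i.e. publications
# assigned to the same field. All figures are invented for illustration.

def normalised_citation_score(citations: int, reference_set: list[int]) -> float:
    """Citations relative to the field average (1.0 = exactly average)."""
    field_mean = sum(reference_set) / len(reference_set)
    return citations / field_mean

article_citations = 40

# The same article evaluated against two different field delineations:
field_a = [2, 5, 8, 1, 4]        # a citation-sparse field (mean 4.0)
field_b = [30, 55, 80, 10, 25]   # a citation-dense field (mean 40.0)

print(normalised_citation_score(article_citations, field_a))  # 10.0
print(normalised_citation_score(article_citations, field_b))  # 1.0
```

The same article scores ten times the field average under one delineation and exactly average under another; which reference set applies is entirely a matter of classification.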

The problem of classification is a key topic in the bibliometric literature, and problems of field delineation, both automated and non-automated, are the subject of ongoing discussion (e.g. Rafols and Leydesdorff, 2009; van Leeuwen and Medina, 2009). A central question has been how to define particular areas of research in bibliometric terms, and different theoretical and conceptual approaches have been proposed (Sugimoto and Weingart, 2015). Terms such as discipline, field, domain and specialty are commonly used. Each of these terms has different connotations: discipline points to institutional characteristics such as departments and conferences, while field is a more loosely defined term denoting an area of common research interests.

This paper is part of an ongoing project addressing the problematic relation between research evaluation practices, bibliometrics and classification. It outlines a brief theoretical framework for analysing the role of classification in research evaluation practices based on bibliometrics. Moreover, the paper concerns itself with classificatory practices actually in use at academic institutions, rather than ideal constructs designed for implementation in a specific research setting. Taking the perspective of the individual researcher, rather than a top-down view of a discipline or field, we choose to zoom in on one particular publication to illustrate the numerous ways in which this object can be classified. The suggested analytical approach is to conceptualise scholarly publications as boundary objects (Star and Griesemer, 1989), with distinct differences in the definition and function of the publication depending on whether it is situated in a context of scholarly communication or a context of research evaluation. The varying functions of the boundary object - in this case the scientific article - in different social worlds become clear when contextualised within the concept of infrastructures (Star and Ruhleder, 1996): in this case publication databases, citation indices and evaluation systems, and not least classification systems.

Research publications as boundary objects

In defining her concept of the boundary object, Susan Leigh Star emphasises combinations of materialities and processes of objects placed between different social worlds (Star and Griesemer, 1989; Bowker and Star, 1999; Star, 2010). The point of departure is the investigation of “the nature of cooperative work in the absence of consensus” (Star, 2010, p. 604) in complex endeavours such as scientific practice. A boundary object has a kind of flexibility which allows it to simultaneously meet the requirements of different social worlds (or actors), while maintaining a kind of integrity in its own right. In the original article on boundary objects, Star and Griesemer (1989) note the necessity of creating boundary objects in scientific work in order to achieve a common representation and informational consistency. They state that “scientists and other actors contributing to science translate, negotiate, debate, triangulate and simplify in order to work together” (Star and Griesemer, 1989, pp. 388-389). They point at two separate, but related, processes needed to achieve informational consensus: methodological standardisation and the creation of boundary objects as such. In research, the scientific article is a central boundary object. The scientific article is a fundamental prerequisite, not only for the negotiation of methodological standardisation (i.e. what is a scientific article?), but also for the definition of the research fields of relevance for those contributing to scientific production and evaluation (Francke, 2008; Frohmann, 2004; Haider and Åström, in press). In short, in this paper we conceptualise scientific articles as boundary objects based on the duality of the articles being contextualised both in systems of scholarly communication and in systems of research evaluation.

We propose that the negotiating character of the scientific article can be defined and analysed on several levels:

- the individual level: the research fields of the authors, expressed through how they identify themselves and through their institutional affiliations;
- the document level: the subject classification of the journal and the indexing of the article itself;
- the bureaucratic level: the categorisation of the article in publication archives and research evaluation systems.

To understand the relation between these levels of classification on the one hand, and research evaluation and classification practices on the other, scientific publications need to be considered as boundary objects within research evaluation infrastructures.

Evaluative infrastructures

Through local use and processes of standardisation, boundary objects become part of infrastructures (Star, 2010). In our particular case the boundary objects (publications) become parts of both communicative and evaluative infrastructures. The role of infrastructures in accounting for research impact, and their central role in bibliometric evaluation, have recently been highlighted by Wouters (2014) and by Power (2015). We build on these early considerations of evaluative infrastructures by highlighting the central role of boundary objects in their formation, and we contrast the concept of infrastructures with Dahler-Larsen’s (2012a; 2012b) description of evaluation systems.

Our understanding of infrastructures is mainly derived from the work of Star and Ruhleder. They view infrastructures as the result of “tension between local, customized, intimate and flexible use on the one hand, and the need for standards and continuity on the other” (Star and Ruhleder, 1996, p. 112). Infrastructures become apparent in practice and are linked to activities. In this sense infrastructure is a relational concept: infrastructures emerge in connection to practices. Moreover, Star and Ruhleder find that infrastructures are embedded in other technologies and social arrangements, having a reach beyond a single event or a specific context. Typically, infrastructures are also transparent and do not have to be reinvented each time they are used. An infrastructure is often taken for granted within a particular group or community, and newcomers or outsiders have to learn about it. Consequently, infrastructures both form and are formed by conventions within a community. An installed base (e.g. a digital database building on older library catalogues) is the basis of an infrastructure, allowing for backward compatibility. Finally, infrastructures embody standards, and the overall design (including standards) of infrastructures becomes visible only when they fail to work.

An important observation made by Star and Ruhleder (1996) is that infrastructures emerge to resolve friction between the local (contextualised) and the global (standardised). This tension, between local, specific classifications and the standardised, one-size-fits-all design of large-scale evaluation systems, is at the core of the problem that this study engages with. The contradictory demands from formal systems and local informal practices may result in a double bind, which ultimately has to be resolved by the user (in our case the researcher/author). Overall, the success of an infrastructure can be judged by the creation of objects and procedures: the greater the agreement on the definition and stabilisation of objects, the better an infrastructure works.

A type of infrastructure of particular interest for this study is the research evaluation system. These systems are routinised, permanent and extended over time and space, and in comparison with regular (one-off) evaluations they are less reliant on the views and approaches of individual evaluators (Dahler-Larsen, 2012b). An important function of research evaluation systems is captured by Star and Ruhleder’s characterisation of infrastructures as intermediaries between the local (contextualised) and the global (standardised). A recent study of bibliometric evaluation systems at Swedish universities found that local evaluation infrastructures were often initiated as a means of negotiating evaluation systems at the national level. Although all of these systems were found to be unique in one way or another, they tend to be derived from the same ‘installed base’ of available publication databases and citation indices (Hammarfelt et al., in press). Moreover, they tend to build on the same principal unit of evaluation: the publication.

Empirical example

To exemplify, we use one article to illustrate the problems we are outlining in this paper, and how these problems can be understood from a theoretical perspective. The article was not chosen at random; we deliberately selected one in our immediate vicinity that we know is well suited as an example: an article spanning several different research fields, which can be classified in different ways depending on which method of classification is chosen (Figure 1).

Figure 1: Example article: Landström, H. et al. (2012). Entrepreneurship: Exploring the knowledge base. Research Policy, 41(7), 1154-1181.

Individual level

We begin with the individual and social aspects related to the authors of the article. The authors cover three different fields of research, based on how they identify themselves in relation to research fields: entrepreneurship research, innovation studies and bibliometrics/information studies. In addition, one author is also professor of both business management and entrepreneurship research. The institutional affiliations of the authors match the profile we find when combining how they identify themselves as scholars with the degrees they have been awarded, although one author is affiliated not with an information studies department but with a university library. Thus, at the author level we find at least four fields of research represented:

- entrepreneurship research
- innovation studies
- bibliometrics/information studies
- business management

Document level

The second dimension of classification relates to the publication itself, where we can look at, on the one hand, the journal the article is published in and, on the other, the article as such. The article is published in Research Policy, a journal describing itself as a “multi-disciplinary journal devoted to analysing, understanding and effectively responding to the [...] challenges posed by innovation, technology, R&D and science” (http://www.journals.elsevier.com/research-policy/). In the Web of Science (WoS)/Journal Citation Reports subject classification, the journal is categorised as both “Management” and “Planning & Development”. In the Spanish SCImago Journal Rank (http://www.scimagojr.com/), the journal is classified within three subject areas - “Business, Management and Accounting”, “Decision Sciences” and “Engineering”, which are also the subject areas used for describing the journal in Scopus - and four subject categories: “Engineering (miscellaneous)”, “Management of Technology and Innovation”, “Management Science and Operations Research” and “Strategy and Management”.

Categorising the article itself becomes more complex, and depends on where the information is gathered, on whether the descriptors used emanate from a controlled vocabulary, and on whether they are automatically generated. The keywords used by the authors to describe the article are “Entrepreneurship”, “Research field”, “Handbooks” and “Bibliometric analysis”, of which the first and the last are immediately relatable to fields of research. In WoS we also find “KeyWords Plus” descriptors such as “Science Policy”, “Innovation”, “Citations” and “Economics”. In the database Business Source Complete, we find subject terms describing the article such as “Entrepreneurship”, “Management research”, “Economics”, “Technological Innovations” and “Bibliographical Citations”; and the pattern in, for instance, Scopus is the same as in the other databases.
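Compiled into one structure, the categories quoted above make the limited overlap between sources concrete. A sketch in Python (the lists follow the examples given here and are not exhaustive):

```python
# Descriptors attached to the same article (and its journal) in
# different systems, compiled from the categories quoted above.
# The lists are illustrative, not exhaustive.
classifications = {
    "journal: WoS/JCR": ["Management", "Planning & Development"],
    "journal: SCImago/Scopus areas": ["Business, Management and Accounting",
                                      "Decision Sciences", "Engineering"],
    "article: author keywords": ["Entrepreneurship", "Research field",
                                 "Handbooks", "Bibliometric analysis"],
    "article: WoS KeyWords Plus": ["Science Policy", "Innovation",
                                   "Citations", "Economics"],
    "article: Business Source Complete": ["Entrepreneurship", "Management research",
                                          "Economics", "Technological Innovations",
                                          "Bibliographical Citations"],
}

# No single descriptor is shared by all five sources:
shared = set.intersection(*(set(terms) for terms in classifications.values()))
print(shared)  # set() - the intersection is empty
```

Even in this small compilation, no descriptor is common to all sources.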

Bureaucratic level

One source of classification for academic publications is the indexing in publication archives, which in this case can be seen as bridging the document and bureaucratic dimensions of classification. This is partly because the classification of research publications in the Swedish publication archive SwePub (http://swepub.kb.se/) is based on the Swedish adaptation of the OECD fields of science classification, the SCB/UKÄ classification; and, perhaps more importantly, because SwePub is currently being remodelled with the pronounced purpose of becoming a reliable data source for bibliometric analyses for research evaluation at both local and national levels. In SwePub - as well as in the local repository of the university from which the article emanates, from which SwePub harvests the bibliographical data - this particular article is classified as “Information Studies” and “Social Sciences Interdisciplinary”. Most likely, rather than relating to the topic of the article per se, these classifications emanate from a combination of the institutional affiliations of the authors (the university library, as well as connections to an information studies research group and an innovation studies research centre) and the journal in which the article is published: a multi-disciplinary journal with a focus on issues related to research policy and innovation studies. The economics and business perspective reflected in the article analysing entrepreneurship research is not visible in SwePub. As a point of comparison, for another article by the same three authors - published in the International Entrepreneurship and Management Journal, using bibliometrics to compare entrepreneurship research and innovation studies, and with the first author using an affiliation at an entrepreneurship research centre - the SwePub classification is “Social Sciences Interdisciplinary”, but also “Economics and Business” and “Business Administration”, while the information studies aspect is missing.

Aside from the connections between the SwePub classification and the Swedish adaptation of the OECD fields of science, we can point to the current Swedish system for allocating resources between universities. The bibliometric component in this system builds on the number of publications produced (normalised against the average production within the field) and on normalised citation scores (Sandström and Sandström, 2009). Additional weighting is then given depending on domain: natural sciences, humanities, medicine and social sciences. The effect of this weighting is that a highly cited article categorised as social science might yield a substantial return for a university. In fact, it was recently discovered that one particular highly cited social sciences article at Stockholm University contributed over 5% of the university’s total allocation (Nelhans, 2015). How this particular article is classified is fundamental to how it is counted in the system, and a reclassification to a more citation-dense category might influence the results considerably. Hence, the bureaucratic and economic dimensions come directly into play in these processes. It needs to be acknowledged that the bureaucratic dimension is not necessarily exclusive in relation to the individual and document levels. Different research evaluation systems use different categorisations, from counting various kinds of document types to field-normalised citation counts based on journal subject categories. An obvious example in this case is the indexing of the article in SwePub, which is a case of indexing an article in a bibliographic database, yet with a categorisation based on a classification system originating in a bureaucratic context.
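Why reclassification matters economically can be illustrated with a small sketch of the kind of computation involved. The field citation averages below are invented, and the actual Swedish model (Sandström and Sandström, 2009) is considerably more elaborate; the point is only the sensitivity of a normalised score to the category an article is placed in:

```python
# Hypothetical illustration of why reclassification matters in a
# field-normalised allocation model. The field citation averages are
# invented; the actual Swedish model is more elaborate.

FIELD_MEAN_CITATIONS = {
    "social sciences": 4.0,   # citation-sparse domain
    "medicine": 40.0,         # citation-dense domain
}

def normalised_score(citations: float, field: str) -> float:
    """Citations relative to the average of the field the article is placed in."""
    return citations / FIELD_MEAN_CITATIONS[field]

highly_cited_article = 200  # citations

print(normalised_score(highly_cited_article, "social sciences"))  # 50.0
print(normalised_score(highly_cited_article, "medicine"))         # 5.0
```

Under these invented numbers, the same article counts ten times more towards an allocation if it remains in the citation-sparse category; this is precisely the sensitivity to classification noted by Nelhans (2015).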

Discussion and outlook

The example article can potentially enter research evaluation systems on a number of different levels. Initially, we can make a distinction between, on the one hand, the evaluation of research project and grant proposals and the peer review of the article itself; and, on the other hand, the post hoc evaluation of authors and institutional settings, both locally and at the national level, to assess performance. Not all of these evaluations are performed using bibliometrics, but they all have the potential to tie in to the different levels of evaluation. At the same time, depending on the level of evaluation - as well as, for instance, which evaluation systems and which bibliometric methods are being used - the subject classification of the field of research varies greatly, between evaluation levels but also within them. In the different classifications of the article used as an example here, at the different evaluation levels, at least six different fields of research - seven if we include “Social Sciences Interdisciplinary” - are identifiable. Taking into account the different levels of evaluation processes, the different levels of evaluation, and the different research fields the article can be related to, we are operating classification and evaluation processes in three different dimensions. The question is, however: what are the possibilities of translating and homogenising these processes? And what happens when evaluation processes at different levels have conflicting demands?

Scientific publications are boundary objects in the sense that they “reside between social worlds” (Star, 2010, p. 604). In our case, two of these social worlds are the context of communication and the context of evaluation. Tensions between social worlds become apparent in infrastructures, which aim to serve the demands both of communication and of evaluation. This conflict is by no means a new phenomenon. For example, the Science Citation Index (1964) was established as a system for promoting communication, yet shortly after its inception it also became a tool for evaluation. Nor do we claim to be unique in pointing to the inherent tension between communication systems and reward systems. However, with the emergence of new, all-encompassing infrastructures, such as national Current Research Information Systems (CRIS) (cf. SwePub), we suggest that the inherent tensions between communication and evaluation need to be assessed further, in a systematic and theoretically informed way. Our conceptualisation of SwePub and similar systems as infrastructures that aim to resolve frictions between different levels - the local (contextualised) and the global (standardised) - and different purposes - communication and evaluation - might be a fruitful perspective when approaching this problem. The framing proposed in this paper, of publications as boundary objects and publication databases as infrastructures, should thus be seen as one small step in a much larger endeavour. The current focus of this paper has been on objects and infrastructures. Yet we acknowledge the importance of studying classification as a specific practice, and the crucial role of the classificatory workers running these infrastructures - for example, librarians (Åström and Hansson, 2013) - should not be underestimated.

Acknowledgements

This paper was in part funded by Riksbankens Jubileumsfond: The Swedish Foundation for the Social Sciences and Humanities (grant number SGO14-1153:1).

About the authors

Fredrik Åström (Ph.D.) is a reader in information studies and works as a specialist in bibliometrics and research evaluation systems at Lund University Library, P.O. Box 3, SE-221 00 Lund, Sweden. He specialises in research on bibliometrics and scholarly communication, recently with a focus on the effects of the use, and the policy aspects, of research evaluation systems based on bibliometrics. He can be contacted at: fredrik.astrom@ub.lu.se.
Björn Hammarfelt (Ph.D.) is a senior lecturer at the Swedish School of Library and Information Science (SSLIS), University of Borås, Allégatan 1, SE-501 90, Borås, Sweden and a visiting scholar at the Centre for Science and Technology Studies (CWTS), Leiden University. His research is situated at the intersection between information science and sociology of science, with a focus on the organisation, communication and evaluation of research. He can be contacted at: bjorn.hammarfelt@hb.se.
Joacim Hansson is Professor of Library and Information Science at the School of Cultural Sciences at Linnaeus University, P.O. Box 451, SE-351 06 Växjö, Sweden. His research covers several areas, such as classification theory and history, library studies, document practice theory, and scholarly communication, with a special interest in the role of academic libraries. He can be contacted at: joacim.hansson@lnu.se.


How to cite this paper

Åström, F., Hammarfelt, B. & Hansson, J. (2017). Scientific publications as boundary objects: theorising the intersection of classification and research evaluation. Information Research, 22(1), CoLIS paper 1623. Retrieved from http://InformationR.net/ir/22-1/colis/colis1623.html (Archived by WebCite® at http://www.webcitation.org/...)
