vol. 20 no. 4, December 2015


The focus factor: a dynamic measure of journal specialisation


Jeppe Nicolaisen
University of Copenhagen, Birketinget 6, DK-2300 Copenhagen S., Denmark
Tove Faber Frandsen
Odense University Hospital, Søndre Boulevard 29, DK-5000 Odense, Denmark


Abstract
Introduction. We present a new bibliometric indicator to measure journal specialisation over time, named the focus factor. This new indicator is based on bibliographic coupling and counts the percentage of re-citations given in subsequent years.
Method. The applicability of the new indicator is demonstrated on a selection of general science journals and on a selection of medical journals. The reference lists of each journal are compared year by year, and the percentage of re-citations is calculated by dividing the number of re-citations by the total number of citations each year.
Analysis. To validate that re-citations are driven by specialisation, other possible causes (obsolescence, journal self-citations and number of references) were measured and correlated with the share of re-citations.
Results. The results indicate that the focus factor is capable of distinguishing between general and specialised journals and thus effectively measures the intended phenomenon (i.e., journal specialisation). Only weak correlations were found between journal re-citations and obsolescence, journal self-citations, and number of references.
Conclusions. The focus factor successfully measures journal specialisation over time. Measures based on either simple citation analysis or bibliographic coupling are found to be close relatives. Measures based on journal self-citation are found to be only weakly correlated with the focus factor. Measures based on co-citation analysis remain to be studied and compared.


Introduction

The Philosophical Transactions of the Royal Society of London is commonly referred to as the first scientific journal. It was published for the first time in 1665, three months later than the French Journal des Scavans. Both journals published scientific information, but Journal des Scavans focused more on reviewing books than on communicating new research in article form (Meadows, 1974). These two journals, and the journals that subsequently followed, originated as the public record of the activities and interests of learned societies. Journals unrelated to scientific societies began to emerge during the eighteenth and nineteenth centuries (Meadows, 1974). As a response to the rapid increase in new research results during the nineteenth century, scientists began to specialise in sub-disciplines. Scientists were no longer able to follow the development in their own discipline as a whole, but focused instead on smaller disciplinary fractions. This development also led to the increasing specialisation of many scientific journals (Meadows, 1974).

Thus, for a long time the scientific journal has been the most important medium for communicating new ideas and knowledge in science, in most of the social sciences, and to a lesser, but growing, extent in the arts and humanities. Perhaps because of this status, the scientific journal as a research communication medium has attracted interest from a number of fields (e.g., history of science, sociology of science, linguistics, library and information science, and others). Yet, the field that has studied the scientific journal the most is undoubtedly bibliometrics. One study immediately springs to mind: Derek J. de Solla Price’s seminal discovery of the exponential growth of science, which was based on studies of the oldest scientific journal (Philosophical Transactions) and other journals (Price, 1963). Countless other bibliometric studies focusing on the scientific journal have followed (too many to mention here). Some of these studies have sought to establish adequate citation-based measures for various aspects of the scientific journal. Among the most prominent of these are Burton and Kebler’s (1960) study, in which they developed a measure of the obsolescence of scientific literature (the half-life), later used for measuring the obsolescence of scientific journals (e.g., in The Journal Citation Reports); and Garfield and Sher’s (1963) study, in which they developed the journal impact factor. Many alternative measures of journal obsolescence and journal impact have been developed since the 1960s, but a vital aspect of the scientific journal has thus far been largely overlooked or ignored by this line of bibliometric indicator research: journal specialisation. This is curious, since bibliometricians have long known that specialisation strongly affects the outcome of bibliometric studies. Almost forty years ago, Henry Small pointed to the principal finding of experiments conducted one to two years earlier by leading bibliometricians including, amongst others, Eugene Garfield, Belver C. Griffith and himself, and concluded that the primary structural unit in science is the scientific specialty (Small, 1976, p. 67). A common critique of the journal impact factor is that the impact factor of a journal is partly determined by its level of specialisation (e.g., Seglen, 1997). Thus, to improve our interpretation of bibliometric journal indicators (e.g., the journal impact factor), we need a simple yet effective measure of journal specialisation, one that could be readily incorporated into products such as The Journal Citation Reports.

Nicolaisen and Frandsen (2013) developed a simple yet effective measure of the specialisation of scientific journals. They presented the idea for this new citation-based measure at the CoLIS8 conference in Copenhagen in 2013 (Nicolaisen and Frandsen, 2013) and as a brief communication published in the Journal of the Association for Information Science and Technology (Nicolaisen and Frandsen, 2015). Having since tested the measure on a larger sample, we present here a more detailed investigation of it. Specialisation equals narrowing one’s focus, and we have therefore chosen to name the new citation-based journal measure the focus factor.

The next section provides a detailed description of the focus factor including the basic theoretical assumptions it rests upon, related measures and how it is calculated. In subsequent sections we then demonstrate the application of the new indicator on a selection of scientific journals and test its validity as a measure of journal specialisation.

Related literature and measures

The new citation-based measure that we are about to present is a measure of journal specialisation. It is based on the common definition of scientific specialities and specialisation that may be found in many texts on the sociology of science and science studies (including bibliometrics). Before we present the focus factor in more detail, we will briefly outline this common understanding and definition, and briefly touch upon related bibliometric measures.

Specialities and specialisation

In his book Communicating research, Meadows (1998) discusses, among other things, the rapid growth of scientific research and how the research community has developed a mechanism for coping with the excessive information output. This mechanism is, according to Meadows (1998, p. 20), specialisation. To understand exactly what he means by specialisation, one has to examine his argument somewhat further. Meadows (1998, p. 19) asks the reader to listen to Faraday’s complaint from 1826:

It is certainly impossible for any person who wishes to devote a portion of his time to chemical experiment, to read all the books and papers that are published in connection with his pursuit; their number is immense, and the labour of winnowing out the few experimental and theoretical truths which in many of them are embarrassed by a very large proportion of uninteresting matter, of imagination, and error, is such, that most persons who try the experiments are quickly induced to make a selection in their reading, and thus inadvertently, at times, pass by what is really good.

Today there is much more information to cope with than in the days of Faraday. One could consequently be led to believe that the problem Faraday described is much worse in our time. However, according to Meadows (1998), it is not. The reason is that modern chemists no longer try to command what Meadows (1998, p. 20) terms the same broad sweep of their subject as chemists did in Faraday’s time. Modern chemists concentrate instead on much more restricted topics (Meadows, 1998, p. 20). Researchers have become much more specialised (Meadows, 1998, p. 20). As research has expanded, researchers have confined their attention to selected parts of it (Meadows, 1998, p. 20). Members of a discipline are therefore typically interested in only part of the field (Meadows, 1998, p. 21).

This definition of specialities resembles the idea of a social division of labour in society. In all known societies the production of goods and services is divided into different work tasks, in such a way that no member of a society conducts all tasks. On the contrary, the types of work tasks which an individual may conduct are often regulated by rules, and individuals are often obliged to conduct certain tasks. Adam Smith (1723-1790) was the first to formulate how the social division of labour leads to increased productivity. In his book The wealth of nations, published in 1776, he even maintains that division of labour is the most important cause of economic growth. A famous example from the book illustrates his point. The example concerns a pin factory. According to Smith, a pin factory that adopts a division of labour may produce tens of thousands of pins a day, whereas a pin factory in which each worker attempts to produce pins from start to finish, performing all the tasks associated with pin production, will produce very few pins. What Meadows (1998) seems to have in mind when describing the strategy adopted by modern chemists is thus the strategy of a successful pin factory. Like the workers of a successful pin factory, modern chemists have divided their work tasks between them and are consequently working on different, but related, tasks. Today, there are several specialities in chemistry, including organic chemistry, inorganic chemistry, chemical engineering and many more. The same holds true for all other scientific fields. Sociologists, for instance, usually work within one of the specialities described in Smelser’s (1988) Handbook of sociology. These include, among others, the sociology of education, the sociology of religion, the sociology of science, medical sociology, mass media sociology, the sociology of age and the sociology of gender and sex.

Meadows (1998, p. 44) mentions that disciplines and specialities can also be produced by fusion. The combination of part of biology with part of chemistry to produce biochemistry is just one example.

Consequently, what characterises a speciality is the phenomenon or phenomena which members of the speciality study. Organic and inorganic chemistry, for instance, are different specialities because the researchers in these specialities study different phenomena. Organic chemists study materials that are carbon based, such as oil or coal, while inorganic chemists work with materials that contain no carbon or carbon-based synthetics. Sociologists of science study scientific societies while sociologists of religion study religious societies. Though most of the members of these two groups have been trained in the discipline of sociology, they belong to different sociological specialities because they study different sociological phenomena.

As noted above, Meadows’ definition of specialities corresponds to the definition usually employed in science studies. Crane and Small (1992, p. 198), for instance, explain the concept of specialities by arguing that:

clusters of related research areas constitute specialties whose members are linked by a common interest in a particular type of phenomenon or method (such as crime, the family, population, etc.). Disciplines, in turn, are composed of clusters of specialties

Small and Griffith (1974, p. 17) maintain that 'science is a mosaic of specialties, and not a unified whole', and note that specialities are the building blocks of science. Gieryn (1978), Whitley (1974) and Zuckerman (1978) claim that a problem area is made up of a number of related though discrete problems, that a cluster of related problem areas comprise a speciality, and that a scientific discipline covers a set of related specialities.

Measuring specialisation

Hagstrom (1970, p. 91-92) argues that 'it is reasonable to believe that scientists will communicate most often and intensively with others in their specialties, exchanging preprints with them, citing their work, and exchanging reprints'. Ziman (2000, p. 190) notes that 'scientific specialties often seem to be shut off from one another by walls of mutual ignorance'. These assumptions have been explored and confirmed empirically by bibliometricians.

Using simple citation analysis, Earle and Vickery (1969) investigated to what extent a variety of subject areas drew on the literature of their own area (subject self-citation) and other subject areas. They found considerable variations among the subject areas under study, which seem to fit with the assumptions regarding specialisation. Among their findings was that considerable dependence on other subject areas was found in the general science and general technology areas, whereas mathematics was found to depend little on literature from other subject areas.

Author self-citations have also been found to reflect specialisation tendencies. Often, author self-citations are frowned upon by critics of citation analysis (e.g., Seglen, 1992; MacRoberts and MacRoberts, 1989; 1996). The critics speculate or even claim (see Seglen, 1992, p. 636) that author self-citations amount to self-advertising and, thus, that author self-citations should be eliminated from evaluative bibliometrics. Yet a study of fifty-one self-citing authors conducted by Bonzi and Snyder (1991) revealed essentially no differences between the reasons that authors cite their own work and the reasons they cite the work of others. The self-citations predominantly identified related work or earlier work that later works were built upon. Thus, author self-citations seem to indicate an author’s specialised focus on a narrow scientific problem. Early studies by Parker, Paisley and Garrett (1967) and Meadows and O’Connor (1971) also documented similar relations between specialisation and author self-citations.

Marshakova’s (1973) and Small’s (1973) co-citation technique provides a quantitative method for grouping or clustering cited documents or cited authors. By measuring the strength of co-citation in a large enough sample of units (e.g., documents or authors), it is possible to detect clusters of units which are highly co-cited. Information scientists, who took an interest in this technique from the 1970s onward, have repeatedly found that such clusters adequately represent scientific specialities (e.g., Small and Griffith, 1974; White and Griffith, 1981; White and McCain, 1998; Zhao and Strotmann, 2014).

Bibliographic coupling (Kessler, 1963) is a related method for clustering related entities. Documents (or other units of analysis) are said to be bibliographically coupled if they share bibliographic references. Bibliometricians began to take an interest in this technique during the 1990s, using it to identify and map clusters of subject-related documents (e.g., Glänzel and Czerwon, 1996; Jarneving, 2007; Ahlgren and Jarneving, 2008). As shown by Nicolaisen and Frandsen (2012), bibliographic coupling also shows promise as a measure of the level of consensus and specialisation in science. Using a modified form of bibliographic coupling (aggregated bibliographic coupling), they were able to measure the level of consensus in two different disciplines at a given time. A minimal sketch of the underlying set operation is given below.
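
At its core, bibliographic coupling is a simple set operation: the coupling strength of two documents is the number of distinct references they share. The following Python sketch is our own illustration, with invented document and reference names, not code from any of the cited studies:

    def coupling_strength(refs_a, refs_b):
        """Bibliographic coupling strength (Kessler, 1963): the number
        of distinct bibliographic references two documents share."""
        return len(set(refs_a) & set(refs_b))

    # Two hypothetical papers, each citing three sources:
    paper_a = ['garfield_1963', 'small_1973', 'kessler_1963']
    paper_b = ['small_1973', 'kessler_1963', 'price_1963']
    print(coupling_strength(paper_a, paper_b))  # prints 2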

The focus factor

Specialisation is a process. The level of specialisation within a discipline probably increases or decreases over time. To measure this by bibliometric methods such as self-citations, co-citation analysis or bibliographic coupling, a time dimension needs to be included. The focus factor is created with this purpose in mind. Using the scientific journal as sample unit, it measures the level of specialisation by calculating overlaps in bibliographic references year by year. For example: a journal produces 1,536 references in year zero and 1,622 references in year one, 219 of which are found in the reference lists of the journal in both years. Thus, 219 out of 1,622 references in year one are similar to references found in the same journal the preceding year. This equals 13.5%, and is taken as an indicator of the level of specialisation in that particular journal in year one. The level of specialisation in year two is calculated by comparing the overlap in bibliographic references used by the same journal in year one and year two, and so on.
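
In code, a single year-to-year comparison might look like the following minimal Python sketch. This is a hypothetical implementation for illustration, not the software used in the study; the rule that repeated references count per citation instance follows the description given in the Method section below.

    def share_of_recitations(prev_refs, curr_refs):
        """Share of re-citations: the fraction of this year's citation
        instances whose cited reference also appears in the same
        journal's reference lists from the preceding year.

        prev_refs, curr_refs: lists of cited-reference strings, one
        entry per citation instance (so duplicates count).
        """
        cited_last_year = set(prev_refs)
        re_citations = sum(1 for ref in curr_refs if ref in cited_last_year)
        return re_citations / len(curr_refs)

    # Toy data mirroring the worked example above: 219 of 1,622
    # references in year one also occurred in year zero.
    year0 = ['smith_2001'] * 100 + [f'old_{i}' for i in range(1436)]
    year1 = ['smith_2001'] * 219 + [f'new_{i}' for i in range(1403)]
    print(round(share_of_recitations(year0, year1), 3))  # prints 0.135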

The method was tested by Nicolaisen and Frandsen (2013; 2015) on a selection of core journals in library and information science. The results showed that the focus factor distinguishes satisfactorily between general journals and speciality journals, and, moreover, effectively measures the level of specialisation among the selected journals.

Method

To examine the applicability of the focus factor to a wider variety of subjects and journals, we have tested the measure on a selection of general science journals, general medical journals and specialised medical journals (see table 1). The general science journals include journals such as Science, Nature and PNAS (see, e.g., Fanelli, 2010), the first two of which are selected for the present study. The general medical journals selected for this study are among the most prestigious medical journals (see, e.g., Choi, Nakatomi and Wu, 2014), also known as the big five (e.g., Wager, 2005). The specialised medical journals are selected as examples from the wide range of specialist journals available, on the basis of advice from two medical information specialists (an MD and an MSc).

Table 1. List of included journals
General science journals
Science
Nature
General medical journals
British Medical Journal
The Journal of the American Medical Association
Annals of Internal Medicine
Lancet
New England Journal of Medicine
Specialised medical journals
Ophthalmology
Archives of Ophthalmology
American Journal of Ophthalmology
British Journal of Ophthalmology
Experimental Eye Research
Investigative Ophthalmology
Journal of Clinical Oncology
JNCI: Journal of the National Cancer Institute

In order to determine the share of re-citations, the references in a specific year of each of the included journals were compared to the references in the same journal in the previous year. A re-citation is defined as a 100% match between a cited reference in one year and a cited reference in the previous year. This means that spelling errors, typing errors, variant spellings and similar irregularities are potential sources of bias, but as they are expected to be evenly distributed across the data set, systematic bias is unlikely. The data registered are the name of the journal, the publication year, the cited references in the journal and the number of instances of every reference. Some references appear more than once, and consequently the number of re-citations depends on the total number of instances, not just the number of unique references. Information on journal, publication year and cited references was collected using Web of Science. Information on the number of instances of every reference was gathered using software developed for this specific purpose. The share of re-citations in journal j in year y is calculated as follows:

Share of re-citations = number of re-citations (j,y) / total number of references (j,y)

The following is an example of how share of re-citations is calculated: in 2011 Nature contained 32,069 references of which 4,971 were re-citations, resulting in a share of re-citations of 4,971 / 32,069 = 0.155.

In total this study analysed 4,788,579 references in 15 journals from 1991 to 2012 and calculated the re-citation share. Only articles, notes, reviews and letters were included. Letters are included as recommended by Christensen et al. (1997).
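
Extending the sketch from the previous section, the year-by-year series for a whole journal run can be computed as follows. This is a minimal, hypothetical pipeline that assumes the cited references have been exported as (publication year, cited-reference string) pairs; the actual study used exact (100%) string matching and purpose-built counting software.

    from collections import defaultdict

    def recitation_shares(records):
        """Year-by-year share of re-citations for one journal.

        records: iterable of (publication_year, cited_reference) pairs,
        one per citation instance. This input format is an assumption
        made for illustration, not the authors' actual data layout.
        """
        refs_by_year = defaultdict(list)
        for year, ref in records:
            refs_by_year[year].append(ref)

        shares = {}
        for year in sorted(refs_by_year):
            if year - 1 not in refs_by_year:
                continue  # no preceding year to compare against
            prev = set(refs_by_year[year - 1])
            curr = refs_by_year[year]
            shares[year] = sum(r in prev for r in curr) / len(curr)
        return shares

    demo = [(2010, 'a'), (2010, 'b'), (2011, 'a'), (2011, 'a'), (2011, 'c')]
    print(recitation_shares(demo))  # {2011: 0.666...}; 2 of 3 instances re-cited

With the study's data, the same computation would reproduce, for example, the appendix figure for Nature in 2011: 4,971 / 32,069 = 0.155.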

Results

The journals included in the analyses are specialised to varying degrees. The share of re-citations varies from 0.05 to more than 0.2; at the high end, about 20% of the references in a given year appeared in that specific journal the previous year. Figures 1 and 2 illustrate the development in levels of specialisation from 1991 to 2012. For specific counts, see the appendix.

Figure 1 presents the results of the analyses of the general science journals and the general medical journals.

Figure 1: Level of specialisation (general science journals and general medical journals)

One journal stands out in this figure, as it is characterised by a greater extent of specialisation, particularly in the last five to six years: Nature appears to be more highly specialised than the other journals depicted in figure 1. This tendency seems to decrease during the first decade of the analysis, but increases during the last. The other general science journal, Science, also starts out relatively specialised but moves towards less specialisation throughout the period.

Figure 2 provides an overview of the results of the analyses of the specialised medical journals. To make the results comparable, the axes of figures 1 and 2 use the same units.

Figure 2: Level of specialisation (specialised medical journals)

The specialised medical journals show great variation with a share of re-citations ranging from 0.10 to 0.37. Some are specialised at a level resembling more general journals, whereas for other journals 30% of the references in some years appeared in that specific journal the previous year.

Nicolaisen and Frandsen (2013) analysed whether the levels of re-citations can be explained by obsolescence. They tested the hypothesis by examining the age distribution of the references in the journal, measured by the half-life or median citation age. A discrete analysis method was applied, treating publication years as discrete units rather than as a continuum of dates in intervals. The correlation was positive: journals with a relatively large share of older references are characterised by a greater level of specialisation, all other things being equal. Journals with relatively recent references have fewer re-citations simply because more of their references could not have been cited the year before. The hypothesis was also tested on the present data, yielding a similar result. Figure 3 illustrates the correlation and, parallel to the previous analysis, the r-squared value indicates that median citation age alone does not explain the different levels of re-citation. For specific counts, see the appendix.

Figure 3: The median citation age and share of re-citations

Turning to another commonly used measure of specialisation that could potentially explain the differences in levels of re-citation, we now analyse the correlation between share of re-citations and journal self-citations. The share of self-citations is measured for each journal over the entire time period and correlated with the share of re-citations. Figure 4 depicts some correlation, although not a strong one. For specific counts, see the appendix.

Figure 4: Journal self-citations and share of re-citations

The r-squared value of 0.19 confirms that journal self-citations and share of re-citations are not to be considered similar measures.

Finally, we examine whether the differences in levels of re-citation are caused by differences in the number of references. Some might argue that larger journals have more references that may be re-cited. However, the measure is relative, not absolute, and consequently larger journals should not exhibit higher levels of re-citation. The share of re-citations is measured for each journal and correlated with the total number of references in the same year.

Figure 5: Number of references and share of re-citations

Figure 5 depicts a very weak correlation between the number of references and the share of re-citations. For specific counts, see the appendix.
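
Each of the validation analyses above (figures 3 to 5) reduces to computing a squared Pearson correlation (r-squared) between the yearly shares of re-citations and a candidate explanatory variable. A minimal sketch using numpy, with invented illustrative numbers rather than the study's data:

    import numpy as np

    def r_squared(x, y):
        """Squared Pearson correlation between two observation vectors."""
        return float(np.corrcoef(x, y)[0, 1] ** 2)

    # Hypothetical journal-year observations: median citation age (years),
    # share of journal self-citations, total references, and the
    # corresponding share of re-citations.
    median_age = [3, 4, 5, 6, 7, 7]
    self_citation = [0.08, 0.10, 0.09, 0.13, 0.12, 0.17]
    total_refs = [22000, 15000, 9500, 7000, 30000, 9000]
    recitation = [0.08, 0.11, 0.09, 0.15, 0.29, 0.23]

    for label, x in [('median citation age', median_age),
                     ('self-citations', self_citation),
                     ('number of references', total_refs)]:
        print(label, round(r_squared(x, recitation), 2))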

Discussion and conclusion

Scientific journals serve several purposes. Among the most important of these are credit, dissemination and archiving of research results. Although scientific journals may be said to share vital characteristics, the way they serve their purposes, the means by which they seek to serve them, and their success in serving them, are at best diverse. Thus, seeking to develop a single and unique measure of scientific journals is impossible. As noted by Rousseau (2002), the quality of a journal is a multifaceted notion necessitating a whole battery of indicators. The focus factor is a new contribution to this battery of indicators.

Measuring the level of specialisation is a novelty in bibliometric indicator research. We believe that the level of specialisation is an important aspect of scientific journals. Yet, like other indicators, the focus factor measures only one aspect of scientific journals and only becomes interesting when other aspects are taken into account as well. Moreover, as with other indicators, a single meter reading cannot be taken as definite proof. The measure should be applied over time, resulting in several meter readings that should be compared with readings from other journal indicators. Only by this approach may we get an adequate picture of a scientific journal.

When measuring the overall level of specialisation in the two groups of journals, the focus factor is quite capable of distinguishing between general and specialised groups of journals. The general journals presented in figure 1 show figures in the range of 0.05 to around 0.2. The specialised journals presented in figure 2 show figures in the range of 0.1 to 0.35. Yet we find overlapping meter readings in the two groups of journals (between 0.1 and 0.2). Focusing specifically on Nature, we find what appears to be a somewhat more specialised journal. Yet, Nature is normally said to be a general science journal. Logically, either the focus factor is failing or the assessment of Nature as being a general science journal per se is wrong. We believe the latter is the case. When applying the focus factor to a selection of library and information science journals, Nicolaisen and Frandsen (2013) found a similar deviant: Journal of the Association for Information Science and Technology (JASIST). When measuring the level of specialisation using the focus factor, they found that JASIST generally had higher scores than most of the specialised journals in the field. JASIST is a journal that seeks to cover the field at large and would therefore normally be said to be a general journal. Looking a bit deeper into this apparent anomaly, Nicolaisen and Frandsen (2015) found that the high scores of JASIST were mainly caused by a large corpus of bibliometric papers published in JASIST. Thus, they found that it was not a failure of the focus factor, but instead that JASIST over time has shifted its focus more toward bibliometrics, thus becoming gradually more and more specialised. The same is probably the case with Nature. A deeper study will probably reveal a couple of favourite topics of the journal (e.g., cell biology, nuclear physics, astrophysics or even anthropology) leading to a corpus of specialised papers with higher degrees of re-citations. An important consequence of these findings is that the binary notion of general and specialised journals is probably too limiting. In reality, a much richer scale exists.

Previously, specialisation has been measured using other measures:

  1. Simple citation analysis (Earle and Vickery, 1969)
  2. Author self-citations (Parker, Paisley and Garrett, 1967; Meadows and O’Connor, 1971)
  3. Co-citation (e.g., Small and Griffith, 1974; White and Griffith, 1981; White and McCain, 1998; Zhao and Strotmann, 2014)
  4. Bibliographic coupling (e.g., Glänzel and Czerwon, 1996; Jarneving, 2007; Ahlgren and Jarneving, 2008; Nicolaisen and Frandsen, 2012)

To some extent, the focus factor may be seen as a further development of 1 and 4. Instead of focusing on subject areas and the extent to which they rely on their own literature (as Earle and Vickery (1969) did), the focus factor considers scientific journals and the extent to which they rely on their own literature (defined as literature used and cited the year before). Such a relation is, in effect, a bibliographic coupling. Thus, the finding that the focus factor is an adequate measure of scientific specialisation was expected. Likewise, measuring the level of specialisation by journal self-citation was also expected to perform well. However, although we are able to document some correlation between journal re-citations and the share of journal self-citations, these measures should not be considered the same.

Finally, it is worth noting that the scientific communication system is continually changing. New media of communication are constantly surfacing (e.g., mega-journals like PLOS ONE) and readers are provided with new tools for finding and keeping up to date with the developments in their fields of interest (e.g., RSS feeds, searching Google Scholar, etc.), making the context of the journal less visible (e.g., Lozano, Larivière and Gingras, 2012). Journal indicators like the focus factor are, of course, only relevant as long as the scientific journal remains the preferred medium for scientific communication.

Acknowledgements

The authors would like to thank David Hammer and Anne Poulsen for their competent assistance with data collection, and the two anonymous referees for their valuable suggestions for improvements.

About the authors

Jeppe Nicolaisen is associate professor at University of Copenhagen. He received his PhD in library and information science from the Royal School of Library and Information Science, Copenhagen, Denmark. He can be contacted at: Jep.nic@hum.ku.dk
Tove Faber Frandsen is head of Videncentret at Odense University Hospital, Denmark. She received her PhD in library and information science from the Royal School of Library and Information Science, Copenhagen, Denmark. She can be contacted at: t.faber@videncentret.sdu.dk

References
  • Ahlgren, P. & Jarneving, B. (2008). Bibliographic coupling, common abstract stems and clustering: a comparison of two document-document similarity approaches in the context of science mapping. Scientometrics, 76(2), 273-290.
  • Bonzi, S. & Snyder, H.W. (1991). Motivations for citation: a comparison of self citation and citations to others. Scientometrics, 21, 245-254.
  • Burton, R.E. & Kebler, R.W. (1960). The half-life of some scientific and technical literatures. American Documentation, 11(1), 18-22.
  • Choi, Y.M., Nakatomi, D. & Wu, J.J. (2014). Citation classics and top-cited authors of psoriasis in five high-impact general medical journals, 1970-2012. Dermatology Online Journal, 20(5). Retrieved from http://escholarship.org/uc/item/69n5m3v6#page-1 (Archived by WebCite® at http://www.webcitation.org/6bBqgz8lr)
  • Christensen, F.H., Ingwersen, P. & Wormell, I. (1997). Online determination of the journal impact factor and its international properties. Scientometrics, 40(3), 529-540.
  • Crane, D. & Small, H. (1992). American sociology since the seventies: the emerging identity crisis in the discipline. In T.C. Halliday & M. Janowitz (Eds.). Sociology and its publics: the forms and fates of disciplinary organization (pp. 197-234). Chicago, IL: University of Chicago Press.
  • de Solla Price, D.J. (1963). Little science, big science. New York, NY: Columbia University Press.
  • Earle, P. & Vickery, B. (1969). Subject relations in science/technology literature. Aslib Proceedings, 21(6), 237-243.
  • Fanelli, D. (2010). "Positive" results increase down the hierarchy of the sciences. PLOS ONE, 5(4). Retrieved from http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0010068 (Archived by WebCite® at http://www.webcitation.org/6bBqIKtHl)
  • Garfield, E. & Sher, I.H. (1963). New factors in the evaluation of scientific literature through citation indexing. American Documentation, 14(3), 195-201.
  • Gieryn, T.F. (1978). Problem retention and problem change in science. Sociological Inquiry, 48(3-4), 96-115.
  • Glänzel, W. & Czerwon, H.J. (1996). A new methodological approach to bibliographic coupling and its application to the national, regional and institutional level. Scientometrics, 37(2), 195-221.
  • Hagstrom, W.O. (1970). Factors related to the use of different modes of publishing research in four scientific fields. In C.E. Nelson & D.K. Pollock (Eds.), Communication among scientists and engineers (pp. 85-124). Lexington, MA: Lexington Books.
  • Jarneving, B. (2007). Bibliographic coupling and its application to research-front and other core documents. Journal of Informetrics, 1(4), 287-307.
  • Kessler, M.M. (1963). Bibliographic coupling between scientific papers. American Documentation, 14(1), 10-25.
  • Lozano, G.A., Larivière, V. & Gingras, Y. (2012). The weakening relationship between the impact factor and papers’ citations in the digital age. Journal of the American Society for Information Science and Technology, 63(11), 2140-2145.
  • MacRoberts, M.H. & MacRoberts, B.R. (1989). Problems of citation analysis: a critical review. Journal of the American Society for Information Science, 40(5), 342-349.
  • MacRoberts, M.H. & MacRoberts, B.R. (1996). Problems of citation analysis. Scientometrics, 36(3), 435-444.
  • Marshakova, I.V. (1973). A system of document connection based on references. Scientific and Technical Information Serial of VINITI, 6(2), 3-8.
  • Meadows, A.J. (1998). Communicating research. San Diego, CA: Academic Press.
  • Meadows, A.J. (1974). Communication in science. London: Butterworth.
  • Meadows, A.J. & O’Connor, J.G. (1971). Bibliographical statistics as a guide to growth points in science. Science Studies, 1(1), 95-99.
  • Nicolaisen, J. & Frandsen, T.F. (2013). Core journals in library and information science: Measuring the level of specialisation over time. Information Research, 18(3), paper S05. Retrieved from http://InformationR.net/ir/18-3/colis/paperS05.html (Archived by WebCite® at http://www.webcitation.org/6ahaxXJzO)
  • Nicolaisen, J. & Frandsen, T.F. (2015). Bibliometric evolution: is the Journal of the Association for Information Science and Technology transforming into a specialty journal? Journal of the Association for Information Science and Technology, 66(5), 1082-1085.
  • Nicolaisen, J. & Frandsen, T.F. (2012). Consensus formation in science modeled by aggregated bibliographic coupling. Journal of Informetrics, 6(2), 276-284.
  • Parker, E.B., Paisley, W.J. & Garrett, R. (1967). Bibliographic citations as unobtrusive measures of scientific communication. Stanford, CA: Stanford University.
  • Rousseau, R. (2002). Journal evaluation: technical and practical issues. Library Trends, 50(3), 418-439.
  • Seglen, P.O. (1992). The skewness of science. Journal of the American Society for Information Science, 43(9), 628-638.
  • Seglen, P.O. (1997). Why the impact factor of journals should not be used for evaluating research. British Medical Journal, 314(7079), 498-502.
  • Small, H. (1973). Co-citation in the scientific literature: a new measurement of the relationship between two documents. Journal of the American Society for Information Science, 24(4), 265-269.
  • Small, H. & Griffith, B.C. (1974). The structure of scientific literatures 1: identifying and graphing specialties. Science Studies, 4(1), 17-40.
  • Small, H.G. (1976). Structural dynamics of scientific literature. International Classification, 3(2), 67-74.
  • Smelser, N.J. (Ed.). (1988). Handbook of sociology. Newbury Park, CA: Sage Publications.
  • Wager, E. (2005). Getting research published: an A-Z of publication strategy. Oxford: Radcliffe.
  • White, H.D. & Griffith, B.C. (1981). Author cocitation: a literature measure of intellectual structure. Journal of the American Society for Information Science, 32(3), 163-171.
  • White, H.D. & McCain, K.W. (1998). Visualizing a discipline: an author co-citation analysis of information science, 1972-1995. Journal of the American Society for Information Science, 49(4), 327-355.
  • Whitley, R. (1974). Cognitive and social institutionalisation of scientific specialties and research areas. In R. Whitley (Ed.), Social processes of scientific development (pp. 69-95). London: Routledge and Kegan Paul.
  • Zhao, D.Z. & Strotmann, A. (2014). The knowledge base and research front of information science 2006-2010: an author cocitation and bibliographic coupling analysis. Journal of the Association for Information Science and Technology, 65(5), 995-1006.
  • Ziman, J.M. (2000). Real science: what it is, and what it means. Cambridge: Cambridge University Press.
  • Zuckerman, H. (1978). Theory choice and problem choice in science. Sociological Inquiry, 48(1), 65-95.
How to cite this paper

Nicolaisen, J. & Frandsen, T.F. (2015). The focus factor: a dynamic measure of journal specialisation. Information Research, 20(4), paper 693. Retrieved from http://InformationR.net/ir/20-4/paper693.html (Archived by WebCite® at http://www.webcitation.org/6cduJHSF8)



Appendix

American Journal of Ophthalmology

Year | References | Self-citations | Re-citations | TFF | Median cited half-life
1991 | 4361 | 667 | 443 | 0.102 | 7
1992 | 4122 | 622 | 626 | 0.152 | 6
1993 | 4511 | 669 | 582 | 0.129 | 6
1994 | 4482 | 619 | 572 | 0.128 | 7
1995 | 4386 | 545 | 621 | 0.142 | 6
1996 | 4186 | 558 | 653 | 0.156 | 6
1997 | 4574 | 562 | 581 | 0.127 | 7
1998 | 4480 | 613 | 700 | 0.156 | 7
1999 | 4801 | 575 | 610 | 0.127 | 7
2000 | 5268 | 717 | 849 | 0.161 | 6
2001 | 5380 | 744 | 957 | 0.178 | 7
2002 | 5592 | 761 | 929 | 0.166 | 7
2003 | 6748 | 924 | 1244 | 0.184 | 6
2004 | 7921 | 1019 | 1518 | 0.192 | 6
2005 | 7062 | 973 | 1621 | 0.230 | 7
2006 | 7593 | 980 | 1503 | 0.198 | 7
2007 | 7041 | 883 | 1427 | 0.203 | 6
2008 | 7551 | 925 | 1525 | 0.202 | 6
2009 | 7276 | 963 | 1660 | 0.228 | 6
2010 | 6697 | 852 | 1412 | 0.211 | 7
2011 | 8070 | 1028 | 1478 | 0.183 | 7
2012 | 8477 | 1069 | 1822 | 0.215 | 6

Annals of Internal Medicine

Year | References | Self-citations | Re-citations | TFF | Median cited half-life
1991 | 9382 | 768 | 991 | 0.106 | 4
1992 | 10945 | 839 | 1031 | 0.094 | 5
1993 | 11877 | 833 | 960 | 0.081 | 5
1994 | 11735 | 830 | 964 | 0.082 | 5
1995 | 10396 | 733 | 1062 | 0.102 | 5
1996 | 9907 | 725 | 861 | 0.087 | 5
1997 | 10055 | 745 | 902 | 0.090 | 5
1998 | 9518 | 707 | 961 | 0.101 | 5
1999 | 9189 | 644 | 838 | 0.091 | 5
2000 | 8310 | 575 | 755 | 0.091 | 5
2001 | 10333 | 612 | 914 | 0.088 | 5
2002 | 9533 | 553 | 807 | 0.085 | 5
2003 | 10742 | 584 | 958 | 0.089 | 5
2004 | 9036 | 601 | 1142 | 0.126 | 5
2005 | 9998 | 586 | 918 | 0.092 | 5
2006 | 7984 | 426 | 737 | 0.092 | 5
2007 | 8310 | 478 | 612 | 0.074 | 5
2008 | 8952 | 516 | 845 | 0.094 | 5
2009 | 8625 | 464 | 794 | 0.092 | 4
2010 | 8634 | 480 | 628 | 0.073 | 5
2011 | 7892 | 444 | 641 | 0.081 | 5
2012 | 9579 | 502 | 842 | 0.088 | 5

Archives of Ophthalmology

Year | References | Self-citations | Re-citations | TFF | Median cited half-life
1992 | 5304 | 836 | 766 | 0.144 | 6
1993 | 4911 | 799 | 743 | 0.151 | 6
1994 | 4581 | 728 | 743 | 0.162 | 6
1995 | 4814 | 802 | 765 | 0.159 | 7
1996 | 5281 | 730 | 810 | 0.153 | 7
1997 | 5265 | 803 | 743 | 0.141 | 7
1998 | 5226 | 719 | 708 | 0.135 | 7
1999 | 5025 | 663 | 654 | 0.130 | 7
2000 | 5074 | 702 | 748 | 0.147 | 8
2001 | 5688 | 798 | 837 | 0.147 | 8
2002 | 5561 | 718 | 832 | 0.150 | 7
2003 | 6218 | 813 | 944 | 0.152 | 7
2004 | 6498 | 885 | 1046 | 0.161 | 7
2005 | 5716 | 794 | 983 | 0.172 | 8
2006 | 6024 | 721 | 895 | 0.149 | 8
2007 | 6154 | 662 | 1023 | 0.166 | 7
2008 | 6806 | 715 | 1015 | 0.149 | 7
2009 | 6736 | 759 | 1111 | 0.165 | 7
2010 | 6467 | 733 | 950 | 0.147 | 8
2011 | 5914 | 709 | 844 | 0.143 | 7
2012 | 5662 | 585 | 755 | 0.133 | 7

British Journal of Ophthalmology

Year | References | Self-citations | Re-citations | TFF | Median cited half-life
1992 | 4084 | 346 | 415 | 0.102 | 8
1993 | 4315 | 330 | 431 | 0.100 | 7
1994 | 4546 | 348 | 464 | 0.102 | 7
1995 | 5687 | 420 | 693 | 0.122 | 7
1996 | 5852 | 379 | 647 | 0.111 | 7
1997 | 6255 | 432 | 653 | 0.104 | 7
1998 | 7168 | 498 | 1020 | 0.142 | 7
1999 | 6613 | 432 | 1098 | 0.166 | 7
2000 | 7283 | 524 | 1007 | 0.138 | 7
2001 | 7861 | 540 | 1094 | 0.139 | 7
2002 | 7696 | 572 | 1208 | 0.157 | 8
2003 | 8968 | 680 | 1311 | 0.146 | 7
2004 | 8733 | 655 | 1419 | 0.162 | 7
2005 | 8760 | 636 | 1382 | 0.158 | 7
2006 | 8277 | 658 | 1407 | 0.170 | 6
2007 | 8244 | 675 | 1365 | 0.166 | 7
2008 | 8646 | 630 | 1485 | 0.172 | 6
2009 | 7873 | 583 | 1415 | 0.180 | 7
2010 | 8394 | 621 | 1244 | 0.148 | 7
2011 | 9594 | 729 | 1367 | 0.142 | 7
2012 | 7133 | 540 | 1015 | 0.142 | 7

British Medical Journal

Year | References | Self-citations | Re-citations | TFF | Median cited half-life
1992 | 14323 | 2900 | 1718 | 0.120 | 3
1993 | 11621 | 2473 | 1238 | 0.107 | 3
1994 | 15556 | 3501 | 1572 | 0.101 | 3
1995 | 13888 | 3094 | 1759 | 0.127 | 3
1996 | 14636 | 2972 | 1661 | 0.113 | 4
1997 | 15562 | 2942 | 1828 | 0.117 | 3
1998 | 15719 | 2801 | 1973 | 0.126 | 4
1999 | 14533 | 2465 | 1722 | 0.118 | 4
2000 | 13650 | 2331 | 1536 | 0.113 | 3
2001 | 12999 | 2344 | 1487 | 0.114 | 4
2002 | 13440 | 1985 | 1184 | 0.088 | 4
2003 | 11495 | 1729 | 1121 | 0.098 | 4
2004 | 12710 | 1624 | 1001 | 0.079 | 4
2005 | 11117 | 1511 | 922 | 0.083 | 4
2006 | 9424 | 1186 | 549 | 0.058 | 4
2007 | 7127 | 803 | 364 | 0.051 | 4
2008 | 9546 | 1026 | 450 | 0.047 | 5
2009 | 12912 | 1426 | 885 | 0.069 | 5
2010 | 13139 | 1460 | 1189 | 0.090 | 5
2011 | 11880 | 1339 | 1004 | 0.085 | 5
2012 | 14456 | 1600 | 1136 | 0.079 | 5

Experimental Eye Research

Year | References | Self-citations | Re-citations | TFF | Median cited half-life
1992 | 6773 | 854 | 1514 | 0.224 | 7
1993 | 5561 | 571 | 1083 | 0.195 | 6
1994 | 5741 | 605 | 972 | 0.169 | 6
1995 | 5721 | 548 | 969 | 0.169 | 7
1996 | 5650 | 542 | 873 | 0.155 | 7
1997 | 7565 | 658 | 1246 | 0.165 | 7
1998 | 5896 | 561 | 1048 | 0.178 | 7
1999 | 5844 | 478 | 805 | 0.138 | 7
2000 | 5569 | 424 | 833 | 0.150 | 7
2001 | 6392 | 457 | 790 | 0.124 | 7
2002 | 6188 | 417 | 880 | 0.142 | 7
2003 | 6660 | 512 | 956 | 0.144 | 7
2004 | 9746 | 751 | 1518 | 0.156 | 7
2005 | 8022 | 490 | 1328 | 0.166 | 7
2006 | 12497 | 669 | 1762 | 0.141 | 7
2007 | 9490 | 551 | 1818 | 0.192 | 7
2008 | 8245 | 523 | 1353 | 0.164 | 7
2009 | 13675 | 941 | 2319 | 0.170 | 8
2010 | 9344 | 555 | 1881 | 0.201 | 8
2011 | 8770 | 584 | 1712 | 0.195 | 8
2012 | 7858 | 509 | 998 | 0.127 | 8

Investigative Ophthalmology

Year | References | Self-citations | Re-citations | TFF | Median cited half-life
1992 | 10035 | 1397 | 2011 | 0.200 | 7
1993 | 10181 | 1353 | 2134 | 0.210 | 6
1994 | 12631 | 1674 | 2582 | 0.204 | 6
1995 | 10389 | 1460 | 2234 | 0.215 | 6
1996 | 10885 | 1591 | 2227 | 0.205 | 6
1997 | 11367 | 1426 | 2251 | 0.198 | 6
1998 | 12094 | 1605 | 2450 | 0.203 | 6
1999 | 15136 | 1959 | 3171 | 0.210 | 6
2000 | 20404 | 2647 | 4947 | 0.242 | 6
2001 | 18220 | 2526 | 4888 | 0.268 | 6
2002 | 20017 | 2705 | 5106 | 0.255 | 6
2003 | 28026 | 3693 | 6887 | 0.246 | 6
2004 | 25145 | 3359 | 7910 | 0.315 | 6
2005 | 26284 | 3276 | 7250 | 0.276 | 7
2006 | 29810 | 3914 | 8527 | 0.286 | 7
2007 | 30342 | 4062 | 9202 | 0.303 | 7
2008 | 29703 | 3958 | 8974 | 0.302 | 7
2009 | 32114 | 4383 | 9320 | 0.290 | 7
2010 | 38252 | 5091 | 10979 | 0.287 | 7
2011 | 52556 | 7582 | 16855 | 0.321 | 7
2012 | 42132 | 6379 | 15560 | 0.369 | 7

Journal of Clinical Oncology

Year | References | Self-citations | Re-citations | TFF | Median cited half-life
1992 | 8251 | 947 | 2079 | 0.252 | 5
1993 | 10615 | 1343 | 2606 | 0.246 | 5
1994 | 11970 | 1528 | 3080 | 0.257 | 5
1995 | 13082 | 1639 | 3479 | 0.266 | 5
1996 | 13508 | 1831 | 3653 | 0.270 | 5
1997 | 14480 | 1973 | 3868 | 0.267 | 5
1998 | 18144 | 2527 | 4549 | 0.251 | 5
1999 | 17824 | 2326 | 4698 | 0.264 | 5
2000 | 18828 | 2538 | 4914 | 0.261 | 5
2001 | 18888 | 2544 | 4685 | 0.248 | 5
2002 | 20896 | 2585 | 5149 | 0.246 | 5
2003 | 21816 | 2850 | 5509 | 0.253 | 5
2004 | 21055 | 2762 | 5344 | 0.254 | 6
2005 | 42562 | 5466 | 9488 | 0.223 | 5
2006 | 29376 | 4265 | 8870 | 0.302 | 5
2007 | 29974 | 4815 | 7957 | 0.265 | 5
2008 | 28331 | 4533 | 7554 | 0.267 | 5
2009 | 30885 | 5252 | 8119 | 0.263 | 5
2010 | 29775 | 5189 | 8590 | 0.288 | 5
2011 | 25727 | 4226 | 6870 | 0.267 | 5
2012 | 23360 | 3977 | 5617 | 0.240 | 5

Journal of the National Cancer Institute

Year | References | Self-citations | Re-citations | TFF | Median cited half-life
1992 | 5976 | 386 | 866 | 0.145 | 4
1993 | 7679 | 466 | 1161 | 0.151 | 4
1994 | 6945 | 541 | 1172 | 0.169 | 4
1995 | 6483 | 476 | 1007 | 0.155 | 4
1996 | 6986 | 438 | 1065 | 0.152 | 4
1997 | 7781 | 617 | 1026 | 0.132 | 4
1998 | 7451 | 436 | 1021 | 0.137 | 4
1999 | 9424 | 661 | 1313 | 0.139 | 4
2000 | 8730 | 578 | 1439 | 0.165 | 4
2001 | 7888 | 580 | 1020 | 0.129 | 5
2002 | 7558 | 502 | 842 | 0.111 | 5
2003 | 7678 | 529 | 877 | 0.114 | 5
2004 | 7459 | 555 | 1054 | 0.141 | 5
2005 | 7069 | 527 | 903 | 0.128 | 5
2006 | 7171 | 396 | 879 | 0.123 | 5
2007 | 6239 | 464 | 786 | 0.126 | 5
2008 | 6346 | 400 | 636 | 0.100 | 5
2009 | 5995 | 359 | 542 | 0.090 | 5
2010 | 6287 | 372 | 672 | 0.107 | 5
2011 | 6520 | 310 | 692 | 0.106 | 5
2012 | 4741 | 228 | 462 | 0.097 | 5

Lancet

Year | References | Self-citations | Re-citations | TFF | Median cited half-life
1992 | 17735 | 2614 | 1673 | 0.094 | 3
1993 | 16980 | 2437 | 1750 | 0.103 | 3
1994 | 16924 | 2392 | 1525 | 0.090 | 3
1995 | 15848 | 2110 | 1365 | 0.086 | 3
1996 | 16238 | 2558 | 1238 | 0.076 | 3
1997 | 19403 | 2549 | 1565 | 0.081 | 3
1998 | 19294 | 2593 | 1646 | 0.085 | 3
1999 | 20352 | 2639 | 1797 | 0.088 | 4
2000 | 17515 | 2210 | 1612 | 0.092 | 4
2001 | 17046 | 2212 | 1482 | 0.087 | 4
2002 | 19134 | 2149 | 1623 | 0.085 | 4
2003 | 22174 | 2111 | 1583 | 0.071 | 4
2004 | 21153 | 1785 | 1422 | 0.067 | 4
2005 | 20026 | 1651 | 1422 | 0.071 | 4
2006 | 20725 | 1730 | 1110 | 0.054 | 4
2007 | 23483 | 1852 | 1537 | 0.065 | 4
2008 | 19404 | 1719 | 1630 | 0.084 | 4
2009 | 19173 | 1497 | 1344 | 0.070 | 4
2010 | 18130 | 1689 | 1476 | 0.081 | 4
2011 | 18866 | 1783 | 1449 | 0.077 | 4
2012 | 22419 | 1793 | 1661 | 0.074 | 4

Nature

Year | References | Self-citations | Re-citations | TFF | Median cited half-life
1992 | 30289 | 4110 | 5783 | 0.191 | 4
1993 | 29756 | 3603 | 5510 | 0.185 | 4
1994 | 27149 | 3414 | 5046 | 0.186 | 4
1995 | 28094 | 3379 | 4429 | 0.158 | 3
1996 | 27319 | 3148 | 4198 | 0.154 | 4
1997 | 27920 | 3168 | 4408 | 0.158 | 4
1998 | 28768 | 3112 | 4382 | 0.152 | 4
1999 | 27159 | 2855 | 3857 | 0.142 | 4
2000 | 32966 | 3321 | 4328 | 0.131 | 4
2001 | 32088 | 3089 | 4473 | 0.139 | 4
2002 | 30279 | 3019 | 4160 | 0.137 | 4
2003 | 28049 | 2838 | 3686 | 0.131 | 4
2004 | 28292 | 2917 | 3652 | 0.129 | 4
2005 | 31893 | 3155 | 3916 | 0.123 | 4
2006 | 31105 | 3355 | 4321 | 0.139 | 4
2007 | 30179 | 2990 | 4047 | 0.134 | 5
2008 | 31998 | 3159 | 4391 | 0.137 | 5
2009 | 32416 | 3309 | 5001 | 0.154 | 5
2010 | 32133 | 3124 | 4940 | 0.154 | 5
2011 | 32069 | 3096 | 4971 | 0.155 | 5
2012 | 32787 | 3194 | 5156 | 0.157 | 5

New England Journal of Medicine

Year | References | Self-citations | Re-citations | TFF | Median cited half-life
1992 | 17138 | 1767 | 1881 | 0.110 | 4
1993 | 18572 | 1785 | 1977 | 0.106 | 4
1994 | 17487 | 1735 | 1694 | 0.097 | 4
1995 | 17118 | 1625 | 1801 | 0.105 | 4
1996 | 16245 | 1639 | 1699 | 0.105 | 4
1997 | 15770 | 1617 | 1654 | 0.105 | 4
1998 | 15719 | 1478 | 1336 | 0.085 | 5
1999 | 16487 | 1565 | 1417 | 0.086 | 5
2000 | 15488 | 1502 | 1476 | 0.095 | 4
2001 | 16184 | 1511 | 1467 | 0.091 | 5
2002 | 15026 | 1506 | 1437 | 0.096 | 5
2003 | 17181 | 1734 | 1481 | 0.086 | 4
2004 | 16318 | 1674 | 1444 | 0.088 | 4
2005 | 15508 | 1534 | 1435 | 0.093 | 4
2006 | 14332 | 1488 | 1257 | 0.088 | 4
2007 | 15873 | 1616 | 1434 | 0.090 | 4
2008 | 16550 | 1530 | 1493 | 0.090 | 4
2009 | 15823 | 1438 | 1359 | 0.086 | 4
2010 | 14992 | 1473 | 1227 | 0.082 | 4
2011 | 15602 | 1550 | 1307 | 0.084 | 5
2012 | 14801 | 1663 | 1258 | 0.085 | 5

Ophthalmology

Year | References | Self-citations | Re-citations | TFF | Median cited half-life
1992 | 5950 | 782 | 1015 | 0.171 | 7
1993 | 7178 | 994 | 1311 | 0.183 | 7
1994 | 6331 | 858 | 1297 | 0.205 | 7
1995 | 6683 | 1026 | 1467 | 0.220 | 7
1996 | 9179 | 1503 | 1733 | 0.189 | 9
1997 | 7861 | 1277 | 1852 | 0.236 | 7
1998 | 8863 | 1432 | 1914 | 0.216 | 7
1999 | 9234 | 1395 | 2267 | 0.246 | 7
2000 | 8719 | 1361 | 1883 | 0.216 | 7
2001 | 8851 | 1467 | 1966 | 0.222 | 7
2002 | 9210 | 1443 | 2052 | 0.223 | 8
2003 | 9534 | 1550 | 2241 | 0.235 | 7
2004 | 9312 | 1433 | 2131 | 0.229 | 7
2005 | 9367 | 1402 | 2197 | 0.235 | 7
2006 | 9890 | 1621 | 2481 | 0.251 | 7
2007 | 9829 | 1562 | 2592 | 0.264 | 7
2008 | 10416 | 1584 | 2594 | 0.249 | 7
2009 | 10382 | 1617 | 2543 | 0.245 | 6
2010 | 10545 | 1602 | 2602 | 0.247 | 7
2011 | 10970 | 1866 | 2852 | 0.260 | 7
2012 | 11186 | 1756 | 2551 | 0.228 | 7

Science

Year | References | Self-citations | Re-citations | TFF | Median cited half-life
1992 | 33424 | 2455 | 5135 | 0.154 | 3
1993 | 35525 | 2822 | 5709 | 0.161 | 4
1994 | 34692 | 2670 | 5343 | 0.154 | 4
1995 | 35433 | 2880 | 5439 | 0.154 | 4
1996 | 37300 | 2921 | 5587 | 0.150 | 4
1997 | 35692 | 2731 | 5171 | 0.145 | 4
1998 | 33961 | 2768 | 4709 | 0.139 | 4
1999 | 33434 | 2602 | 4546 | 0.136 | 4
2000 | 29436 | 2350 | 3842 | 0.131 | 4
2001 | 27901 | 2344 | 3435 | 0.123 | 4
2002 | 30331 | 2459 | 3464 | 0.114 | 4
2003 | 25921 | 2240 | 3024 | 0.117 | 4
2004 | 25827 | 2308 | 2492 | 0.096 | 4
2005 | 26377 | 2354 | 2553 | 0.097 | 4
2006 | 24885 | 2199 | 2502 | 0.101 | 5
2007 | 24439 | 2223 | 2339 | 0.096 | 4
2008 | 24003 | 2191 | 2461 | 0.103 | 5
2009 | 25245 | 2346 | 2249 | 0.089 | 5
2010 | 25401 | 2143 | 2234 | 0.088 | 5
2011 | 25966 | 2159 | 2108 | 0.081 | 5
2012 | 25793 | 2173 | 2226 | 0.086 | 5

The Journal of the American Medical Association

Year | References | Self-citations | Re-citations | TFF | Median cited half-life
1992 | 16383 | 1672 | 1889 | 0.115 | 4
1993 | 15229 | 1679 | 1785 | 0.117 | 4
1994 | 14988 | 1746 | 2011 | 0.134 | 4
1995 | 15464 | 1777 | 1943 | 0.126 | 4
1996 | 15290 | 1557 | 1766 | 0.116 | 4
1997 | 17411 | 1748 | 1772 | 0.102 | 4
1998 | 15720 | 1725 | 1863 | 0.119 | 4
1999 | 14951 | 1553 | 1817 | 0.122 | 4
2000 | 14770 | 1485 | 1855 | 0.126 | 4
2001 | 16079 | 1388 | 1691 | 0.105 | 4
2002 | 18069 | 1669 | 2108 | 0.117 | 5
2003 | 17338 | 1698 | 2314 | 0.133 | 4
2004 | 14948 | 1346 | 1871 | 0.125 | 5
2005 | 15857 | 1255 | 1660 | 0.105 | 5
2006 | 13982 | 1241 | 1648 | 0.118 | 4
2007 | 13190 | 1035 | 1440 | 0.109 | 5
2008 | 12960 | 908 | 1089 | 0.084 | 5
2009 | 12215 | 1049 | 1112 | 0.091 | 5
2010 | 10600 | 913 | 1097 | 0.103 | 5
2011 | 10321 | 791 | 992 | 0.096 | 5
2012 | 11298 | 845 | 991 | 0.088 | 5