Improving the Clarity of Journal Abstracts in Psychology: The Case for Structure

James Hartley
Keele University, UK


ABSTRACT
Background. Previous research on structured abstracts has taken place mainly in medical contexts. This research indicated that such abstracts are more informative, more readable, and more appreciated by readers than are traditional abstracts.
Aim. The aim of this study was to test the hypothesis that structured abstracts might also be appropriate for a particular psychology journal.
Method. 24 traditional abstracts from the Journal of Educational Psychology were re-written in a structured form. Measures of word length, information content and readability were made for both sets of abstracts, and 48 authors rated their clarity.
Results. The structured abstracts were significantly longer than the original ones, but they were also significantly more informative and readable, and judged significantly clearer by these academic authors.
Conclusions. These findings support the notion that structured abstracts could be profitably introduced into psychology journals.

Keywords: abstracts; structured writing; information clarity; readability

Readers of this article will have already noted that the abstract that precedes it is set in a different way from that normally used in Science Communication (and, indeed, in many other journals in the social sciences). The abstract for this article is written in what is called a structured format. Such structured abstracts typically contain sub-headings - such as background, aim(s), method(s), results and conclusions - and provide more detail than traditional ones. It is the contention of this paper that structured abstracts represent an improvement over traditional abstracts: not only do they present more information, but their format also requires authors to organise and present their information in a systematic way - one which aids rapid search and information retrieval when looking through abstract databases (Hartley, Sydes and Blurton, 1996).

The growth of structured abstracts in the medical sciences has been phenomenal (Harbourt, Knecht and Humphreys, 1995), and they are now commonplace in almost all medical research journals. Furthermore, their use is growing in other scientific areas and, indeed, in psychology itself. In January 1997, for instance, the British Psychological Society (BPS) introduced structured abstracts into four of their eight journals (the British Journal of Clinical Psychology, the British Journal of Educational Psychology, the British Journal of Health Psychology, and Legal and Criminological Psychology). In addition, since January 2000, the BPS has required authors to send conference submissions in this structured format, and it has dispensed with the need for the three- to four-page summaries previously required. These structured abstracts are published in the Conference Proceedings (e.g., see BPS 2001, 2002).

The case for using structured abstracts in scientific journals has been bolstered by research, most of which has taken place in a medical or a psychological context. The main findings suggest that, compared with traditional ones, structured abstracts:

- contain more information;
- are easier to read;
- are easier to search; and
- are generally welcomed by readers and authors alike.

However, there have been some qualifications. Structured abstracts:

- are typically longer than traditional ones, and so take up more space; and
- can strike some authors and editors as imposing too rigid a format.

Some authors - and editors too - complain that the formats for structured abstracts are too rigid, imposing a straitjacket that does not suit every journal article. Undoubtedly this may be true in some circumstances, but it is in fact remarkable how well the sub-headings used in the abstract for this article can cover a variety of research styles. Most articles - even theoretical and review ones - can be summarised under these five sub-headings. Furthermore, if readers care to examine current practice in the BPS journals, in their Conference Proceedings, and elsewhere, they will find that although the sub-headings used in this present paper are typical, they are not rigidly adhered to. Editors normally allow their authors some leeway in the headings that they use.

In this paper I report the results of a study designed to see whether or not it might be helpful to use structured abstracts in one particular social science journal, namely the Journal of Educational Psychology (JEP). Here the abstracts are typically longer and more informative than those presented in Science Communication, and the authors are told that the abstracts for empirical articles should describe: the problem under investigation; the participants or subjects, specifying pertinent characteristics such as number, type, and age; the experimental method, including the data-gathering procedures and test names; the findings, including statistical significance levels; and the conclusions and implications or applications (APA, 2001, p. 14). And all of this is to be done in 120 words!

Method

Choosing and creating the abstracts

24 traditional abstracts were chosen (with permission of the authors) from Volume 92 (2000) of the JEP by selecting every fourth one available. 22 of these abstracts reported the results from typical empirical studies, and two reported the findings from research reviews. Three of the empirical abstracts contained the results from two or more separate studies.

Structured versions of these 24 abstracts were then prepared by the present author. This entailed re-formatting the originals and adding, from the articles themselves, any additional information needed to complete the text under the five sub-headings (background, aim(s), method(s), results and conclusions). And, because structured abstracts are typically longer than traditional ones, a word limit of 200 words was imposed (as opposed to the 120 words specified by the APA's Publication Manual, 5th edition). Figure 1 provides an example of the effects of applying these procedures to the abstract of a review paper.


Figure 1. A traditional (top) and a structured abstract (bottom) for a review paper. (Traditional abstract reproduced with permission of the author and the American Psychological Association.)
Incidental and informal methods of learning to spell should replace more traditional and direct instructional procedures, according to advocates of the natural learning approach. This proposition is based on 2 assumptions: (a) Spelling competence can be acquired without instruction and (b) reading and writing are the primary vehicles for learning to spell. There is only partial support for these assumptions. First, very young children who receive little or no spelling instruction do as well as their counterparts in more traditional spelling programs; but the continued effects of no instruction beyond first grade are unknown. Second, reading and writing contribute to spelling development, but their overall impact is relatively modest. Consequently, there is little support for replacing traditional spelling instruction with the natural learning approach.


Background. Advocates of the 'natural learning' approach propose that incidental and informal methods of learning to spell should replace more traditional and direct instructional procedures.
Aim. The aim of this article is to review the evidence for and against this proposition, which is based on two assumptions: (a) spelling competence can be acquired without instruction, and (b) reading and writing are the primary vehicles for learning to spell.
Method. A narrative literature review was carried out of over 50 studies related to these topics with school students, students with special needs, and older students.
Results. The data suggest that there is only partial support for these assumptions. First, very young children who receive little or no spelling instruction do as well as their counterparts in more traditional spelling programs, but the continued effects of no instruction beyond the first grade are unknown. Second, reading and writing contribute to spelling development, but their overall impact is relatively modest.
Conclusions. There is little support for replacing traditional spelling instruction with the natural learning approach.

Measures

Two sets of objective computer-based measures, and two different subjective reader-based measures, were then made using these two sets of abstracts. The two sets of computer-based measures were derived from (i) Microsoft's Office 97 package, and (ii) Pennebaker's Linguistic Inquiry and Word Count (LIWC) (Pennebaker, Francis and Booth, 2001). Office 97 provides a number of statistics on various aspects of written text. LIWC counts the percentage of words in 71 different categories (e.g., cognitive, social, personal). (Note: when making these computer-based measures the sub-headings were removed from the structured versions of the abstracts.)
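To give a flavour of the kind of computer-based measures involved, the short Python sketch below computes a word count, an average sentence length, and the percentage of longer words for a piece of text. It is a minimal illustration only: the sentence-splitting rule and the seven-letter threshold for 'longer' words are assumptions of this sketch, not the actual algorithms used by Office 97 or LIWC.

    import re

    def text_stats(abstract: str) -> dict:
        # Crude approximations of the kinds of measures used in this study.
        # Assumptions (not Office 97's or LIWC's actual algorithms):
        # sentences end with '.', '!' or '?'; 'longer' words have seven
        # or more letters.
        sentences = [s for s in re.split(r"[.!?]+", abstract) if s.strip()]
        words = re.findall(r"[A-Za-z']+", abstract)
        longer = [w for w in words if len(w) >= 7]
        return {
            "words": len(words),
            "avg_sentence_length": len(words) / max(len(sentences), 1),
            "pct_longer_words": 100 * len(longer) / max(len(words), 1),
        }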

The two reader-based measures were (i) the average scores on ratings of the presence or absence of information in the abstracts; and (ii) the average scores on ratings of the clarity of the abstracts given by authors of other articles in the JEP. The items used for rating the information content are shown in Appendix 1. It can be seen that respondents have to record a 'Yes' response (or not) to each of 14 questions, and each abstract was awarded a total score based on the number of 'Yes' decisions recorded. In this study two raters independently made these ratings for the traditional abstracts and then met to agree on their scores. The ratings for the structured abstracts were then made by adding points for the extra information used in their creation.
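Scoring the checklist itself is then simply a matter of counting the 'Yes' decisions. As a hypothetical illustration (the True/False values below are invented, and correspond in order to the 14 questions of Appendix 1):

    # Hypothetical ratings for one abstract: one boolean per checklist
    # item, in the order of the 14 questions in Appendix 1 (True = 'Yes').
    # The values are invented for illustration only.
    ratings = [True, True, False, True, True, False, False,
               True, True, False, False, True, False, False]

    score = sum(ratings)  # one point per 'Yes' decision
    print(f"Information checklist score: {score} out of {len(ratings)}")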

The ratings of abstract clarity were made independently by 46 authors of articles in the JEP from the year 2000 (and by 2 more authors of articles in other educational journals). Each author was asked (by letter or e-mail) to rate one traditional and one structured abstract for clarity (on a scale of 0-10, where 10 was the highest score possible). To avoid bias, none of these authors were personally known to the investigator, and none were the authors of the abstracts used in this enquiry.

48 separate pairs of abstracts were created, each containing a traditional version of one abstract and a structured version of a different one. 24 of these pairs had the traditional abstract first, and 24 the structured one. The fact that the abstracts in each pair were on different topics was deliberate: it ensured that no order effects would arise from reading different versions of the same abstract (as has been reported in previous studies, e.g., Hartley and Ganier, 2000). The 48 pairs were created by pairing each abstract in turn with the next one in the list, with the exception of the abstracts for the two research reviews, which were paired together.
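A minimal Python sketch of this pairing scheme (following the rotation just described, but ignoring, for brevity, the special treatment of the two review abstracts) might look as follows:

    # Illustrative reconstruction of the counterbalanced pairing scheme:
    # each traditional abstract is paired with the structured version of
    # a *different* abstract (here, the next one in the list), and each
    # pairing is presented in both orders across the 48 pairs.
    n = 24
    traditional = [f"T{i + 1}" for i in range(n)]  # traditional versions
    structured = [f"S{i + 1}" for i in range(n)]   # structured versions

    pairs = []
    for i in range(n):
        j = (i + 1) % n  # the next abstract in the list
        pairs.append((traditional[i], structured[j]))  # traditional first
        pairs.append((structured[j], traditional[i]))  # structured first

    assert len(pairs) == 48  # one pair per judge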

Results

Table 1 shows the main results of this enquiry. It can be seen that, with the exception of the percentage of passives used, the structured abstracts differed significantly from the traditional ones on all of the measures reported here.


TABLE 1: The Average Scores (M) and Standard Deviations (SD) for the Traditional and the Structured Abstracts on the Main Measures Used in This Study

                                 Traditional format   Structured format   Paired    p value
                                 (N = 24)             (N = 24)            t         (two-tailed)
Measure                          M        SD          M        SD

Data from Microsoft's Office 97
Abstract length (in words)       133      22          186      15         17.10     <.001
Average sentence length          24.6     8.3         20.8     3.0        2.48      <.02
Percentage of passives           32.7     22.8        23.7     17.3       1.58      n.s.d.
Flesch Reading Ease score        21.1     13.7        31.1     12.1       5.23      <.001

Data from Pennebaker's Linguistic Inquiry and Word Count (LIWC)
Use of longer words (%)          40.0     5.3         35.8     4.6        4.69      <.001
Use of common words (%)          57.7     8.6         61.1     6.3        3.43      <.01
Use of present tense (%)         2.7      2.8         4.1      1.9        2.90      <.01

Reader-based measures
Information checklist score      5.5      1.0         9.7      1.4        13.72     <.001
Clarity ratings                  6.2      2.0         7.4      2.0        3.22      <.01

Note: n.s.d. = no significant difference.
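For readers who wish to run this kind of analysis themselves, the sketch below shows a paired (related-samples) t-test of the kind reported in Table 1, using SciPy. The scores are invented placeholders rather than the study's data; each position holds the two scores obtained for the same abstract.

    # Paired t-test of the kind reported in Table 1, using SciPy.
    # The scores below are invented placeholders, not the study's data:
    # position i holds the two scores for the same abstract.
    from scipy.stats import ttest_rel

    traditional_scores = [5, 6, 4, 7, 5, 6, 5, 4]
    structured_scores = [9, 10, 8, 11, 9, 10, 10, 8]

    t, p = ttest_rel(structured_scores, traditional_scores)
    print(f"paired t = {t:.2f}, p = {p:.3f} (two-tailed)")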

Discussion

To some extent these results speak for themselves and, in terms of this paper, provide strong support for structured abstracts. But there are some qualifications to consider.

Abstract length

The structured abstracts were, as expected, longer than the traditional ones: with means of 186 versus 133 words, they were approximately 40% longer, roughly double the average 20% increase in length reported by Hartley (2002) for nine studies. It is interesting to note, however, that the average length of the traditional abstracts was also longer than the 120 words specified by the APA. Eighteen (i.e., 75%) of the 24 authors of the traditional abstracts exceeded the stipulated length.

Hartley (2002) argued that the extra space required by introducing structured abstracts was a trivial amount for most journals, amounting at the most to three or four lines of text. In many journals new articles begin on right-hand pages, and few articles finish exactly at the bottom of the previous left-hand one. In other journals, such as Science Communication, new articles begin on the first left- or right-hand page available, but even here articles rarely finish at the bottom of the previous page. (Indeed, inspecting the pages in this issue of this journal will probably show that the few extra lines required by structured abstracts can be easily accommodated). Such concerns, of course, do not arise for electronic journals and databases.

More importantly, in this section, we need to consider cost-effectiveness rather than just cost. With the extra lines comes extra information, and more informative abstracts might encourage wider readership, greater citation rates, and higher journal impact factors - all of which authors and editors might think desirable. Interestingly enough, McIntosh et al. (1999) suggest that both the information content and the clarity of structured abstracts can still be higher than those obtained in traditional abstracts even if they are restricted to the length of traditional ones.

Abstract readability

Table 1 shows the Flesch Reading Ease scores for the traditional and the structured abstracts obtained in this enquiry. Readers unfamiliar with Flesch scores might like to note that they range from 0 to 100 and are sub-divided as follows: 0-29, college graduate level; 30-49, 13th-16th grade (i.e., 18 years +); 50-59, 10th-12th grade (i.e., 15-17 years); and so on. The scores are based on a formula that combines measures of sentence length and numbers of syllables per word with a constant (Flesch, 1948; Klare, 1963). Of course it is possible that the significant difference in favour of the Flesch scores for the structured abstracts in this study reflects the fact that the present author wrote all of the structured abstracts. However, since this finding has also occurred in other studies where the abstracts were written by different authors (e.g., see Hartley and Sydes, 1997; Hartley and Benjamin, 1998), it appears to be a relatively stable one.
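For reference, Flesch's (1948) formula is: Reading Ease = 206.835 - 1.015 x (average words per sentence) - 84.6 x (average syllables per word). The small Python illustration below applies it; the vowel-group heuristic used to count syllables is a rough assumption of this sketch, not part of the formula itself.

    import re

    def count_syllables(word: str) -> int:
        # Rough heuristic: count groups of consecutive vowels (minimum 1).
        return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

    def flesch_reading_ease(text: str) -> float:
        # Flesch (1948): 206.835 - 1.015*(words/sentences)
        #                        - 84.6*(syllables/words)
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (206.835
                - 1.015 * len(words) / max(len(sentences), 1)
                - 84.6 * syllables / max(len(words), 1))

Higher scores mean easier text, which is why the structured abstracts' mean of 31.1 in Table 1 represents a readability gain over the traditional abstracts' 21.1.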

The Flesch Reading Ease score is of course a crude - as well as dated - measure, and it ignores factors affecting readability such as type-size, type-face, line-length, and the effects of sub-headings and paragraphs, as well as readers' prior knowledge. Nonetheless, it is a useful measure for comparing different versions of the same texts, and Flesch scores have been quite widely used - along with other measures - for assessing the readability of journal abstracts (e.g., see Dronberger and Kowitz, 1975; Hartley, 1994; Hartley and Benjamin, 1998; Roberts, Fletcher and Fletcher, 1994; Tenopir and Jacso, 1993).

The gain in readability scores found for the structured abstracts in this study came, no doubt, from the fact that the abstracts had significantly shorter sentences and, as the LIWC data showed, made a greater use of shorter words. The LIWC data also showed that the structured abstracts contained significantly more common words and made a significantly greater use of the present tense. These findings seem to suggest that it is easier to provide information when writing under sub-headings than it is when writing in a continuous paragraph. Such gains in readability should not be dismissed lightly, for a number of studies have shown that traditional abstracts are difficult to read. Tenopir and Jacso (1993) for instance reported a mean Flesch score of 19 for over 300 abstracts published in APA journals. (The abstract to this article has a Flesch score of 26 when the sub-headings are excluded.)

Interestingly enough, there were no significant differences in the percentage of passives used in the two forms of abstracts studied in this paper. This finding is similar to one that we found when looking at the readability of well-known and less well-known articles in psychology (Hartley, Sotto and Pennebaker, 2002). The view that scientific writing involves a greater use of passives, the third person, and the past tense is perhaps more of a myth than many people suspect (see, e.g., Kirkman, 2001; Riggle, 1998; Swales and Feak, 1994). Indeed, the APA Publication Manual (2001) states: "Verbs are vigorous, direct communicators. Use the active rather than the passive voice, and select tense or mood carefully" (5th edition, p. 41).

Information content

The scores on the information checklist showed that the structured abstracts contained significantly more information than did the traditional ones. This is hardly surprising, given the nature of structured abstracts, but it is important. Analyses of the information gains showed that most of the increases occurred on questions 1 (50%), 3 (83%), 5 (63%) and 12 (63%). Thus it appears that in these abstracts more information was given on the reasons for making the study, where the participants came from, the sex distributions of these participants, and on the final conclusions drawn.

These findings reflect the fact that few authors in American journals seem to realise that not all of their readers will be American, and that all readers need to know the general context in which a study takes place in order to assess its relevance for their needs. Stating the actual age group of participants is also helpful because different countries use different conventions for describing people of different ages. The word 'student', for instance, usually refers to someone studying in tertiary education in the UK, whereas the same word is used for very young children in the USA. Although the checklist is a simple measure (it gives equal weight to each item and is inappropriate for review papers), it is nonetheless clear from the results that the structured abstracts contained significantly more information than the original ones, and that this can be regarded as an advantage for such abstracts. Advances in 'text mining', 'research profiling' and computer-based document retrieval will be assisted by the use of such more informative abstracts (Blair and Kimbrough, 2002; Pinto and Lancaster, 1999; Porter, Kongthon and Lu, 2002; Wilczynski, Walker, McKibbon and Haynes, 1995).

Abstract clarity

In previous studies of the clarity of abstracts (e.g., Hartley, 1999a; Hartley and Ganier, 2000) the word 'clarity' was not defined, and respondents were allowed to respond as they thought fit. In this present study the participants were asked to 'rate each of these abstracts out of 10 for clarity (with a higher score meaning greater clarity)'. This was followed by the explanation: 'If you have difficulty with what I mean by "clarity", the kinds of words I have in mind are: "readable", "well-organized", "clear", and "informative".' (This phraseology was based on wording used by a respondent in a previous study who had explained what she meant by 'clarity' in her ratings.) Also in this present study - as noted above - the participants were asked to rate different abstracts rather than the same abstract in the different formats. However, the mean ratings obtained here of 6.2 and 7.4 for the traditional and the structured abstracts respectively closely match the results of 6.0 and 8.0 obtained in the previous studies. Furthermore, because the current results are based on different abstracts rather than on different versions of the same abstract, these findings offer more convincing evidence for the superiority of structured abstracts in this respect.

Finally, in this section, we should note that several of the respondents took the opportunity to comment on the abstracts that they were asked to judge. Table 2 contains a selection from these remarks.


TABLE 2: Some Comments Made by Judges on the Clarity of the Pairs of Abstracts that They Were Asked to Judge
Preferences for the traditional abstracts

My ratings are 2 for the structured abstract and 1 for the traditional one. Very poor abstracts.

I have read the two abstracts that you sent for my judgement. I found the first one (traditional) clearer than the second (structured) one. I would give the first about 9 and the second about 8. Please note, however, that I believe that my response is affected more by the writing style and content of the abstracts than by their organization. I would have felt more comfortable comparing the two abstracts if they were on the same topic.

The first (structured) one was well organized, and the reader can go to the section of interest, but the meaning of the abstract is broken up (I give it 8). The second (traditional) abstract flowed more clearly and was more conceptual (I give it 10).

I rate the first (structured) abstract as a 7 and the second (traditional) one as an 8. I prefer the second as it flows better and entices the reader to read the article more than the first, although I understand the purpose of the first to 'mimic' the structure of an article, and hence this should add to clarity.

No clear preference for either format

Both abstracts were clear and well organized. The format was different but both told me the information I wanted to know. I gave them both 8.

I found each of the abstracts in this pair to be very clear and without ambiguity. The structured abstract gives the explicit purposes and conclusions, whereas the traditional one does not, but I believe that those are unrelated to 'clarity' as you are defining and intending it - for me they represent a different dimension. I would give both abstracts a rating of 9.

I did what you wanted me to do, and I did not come up with a clear preference. My rating for the structured abstract was 9 compared to a rating of 8 for the traditional one.

Preferences for the structured abstracts

Overall I thought that the structured abstract was more explicit and clearer than the traditional one. I would give 7 to the structured one and 5 to the traditional one.

I would rate the second (structured) abstract with a higher clarity (perhaps 9) and the first (traditional) one with a lower score (perhaps 4), but not necessarily due to the structured/unstructured nature of the two paragraphs. The structured abstract was longer, and more detailed (with information on sample size, etc.). If the unstructured abstract were of equal length and had sample information to the same degree as the structured abstract, they may have been equally clear.

My preference for the structured abstract (10) is strongly influenced by the fact that I could easily reproduce the content of the abstract with a high degree of accuracy, compared to the traditional abstract (which I give 6). I was actually quite impressed by the different 'feel' of the two formats.

I would give the traditional one 4 and the structured one 8. You inspired me to look up my own recent JEP article's abstract. I would give it 5 - of course an unbiased opinion!

I rated the traditional abstract 3 for clarity, and the structured abstract 7. In general the traditional abstract sacrificed clarity for brevity and the structured one was a touch verbose. Both abstracts were too general.

In general I prefer the structured layout. I have read many articles in health journals that use this type of format and I find the insertion of the organizer words a very simple, yet powerful way to organize the information.

The bold-faced headings for the structured abstract do serve an organizational function, and would probably be appreciated by students.

Overall I think that the structured format is good and I hope that the JEP will seriously consider adopting it.

Concluding remarks

Abstracts in journal articles are an intriguing genre. They encapsulate, in a brief text, the essence of the article that follows. And, according to the APA Publication Manual (2001), "A well-prepared abstract can be the most important paragraph in your article… The abstract needs to be dense with information but also readable, well organized, brief and self-contained" (p. 12).

In point of fact, the nature of abstracts in scientific journals has been changing over the years as more and more research articles compete for their readers' attention. Berkenkotter and Huckin (1995) have described how the physical format of journal papers has altered in order to facilitate searching and reading, and how abstracts in scientific journal articles have been getting both longer and more informative (pp. 34-35).

The current move towards adopting structured abstracts might thus be seen as part of a more general move towards the use of more clearly defined structures in academic writing. Indeed, whilst preparing this paper, I have come across references to structured content pages (as in Contemporary Psychology and the Journal of Social Psychology and Personality), structured literature reviews (Ottenbacher, 1983; Sugarman, McCrory, and Hubal, 1998), structured articles (Goldmann, 1997; Hartley, 1999b; Kircz, 1998) and even structured book reviews (in the Medical Education Review).

These wider issues, however, are beyond the scope of this particular paper. Here I have merely reported the findings from comparing traditional abstracts with their equivalent structured versions in one particular context. My aim, however, has been to illustrate in general how structured abstracts might make a positive contribution to scientific communication.

Notes

James Hartley is Research Professor in the Department of Psychology at the University of Keele in Staffordshire, England. His main interests lie in written communication and in teaching and learning in higher education. He is the author of Designing Instructional Text (3rd ed., 1994) and Learning and Studying: A Research Perspective (1998).

Originally published in Science Communication, 2003, Vol 24, 3, 366-379, copyright: Sage Publications.

I am grateful to Geoff Luck for scoring the abstract checklist, James Pennebaker for the LIWC data, and colleagues from the Journal of Educational Psychology who either gave permission for me to use their abstracts, or took part in this enquiry.

Professor James Hartley. Department of Psychology, Keele University, Staffordshire, ST5 5BG, UK; phone: 011 44 1782 583383; fax: 011 44 1782 583387; e-mail: j.hartley@psy.keele.ac.uk; Web site: http://www.keele.ac.uk/depts/ps/jhabiog.htm

Appendix 1

The abstract evaluation checklist used in the present study

Abstract No. ________

1. _____Is anything said about previous research or research findings on the topic?

2. _____Is there an indication of what the aims/purposes of this study were?

3. _____Is there information on where the participants came from?

4. _____Is there information on the numbers of participants?

5. _____Is there information on the sex distribution of the participants?

6. _____Is there information on the ages of the participants?

7. _____Is there information on how the participants were placed in different groups (if appropriate)?

8. _____Is there information on the measures used in the study?

9. _____Are the main results presented in prose in the abstract?

10. _____Are the results said to be (or not to be) statistically significant, or is a p value given?

11. _____Are actual numbers (e.g., means/correlation coefficients/t values) given in the abstract?

12. _____Are any conclusions/implications drawn?

13. _____Are any limitations of the study mentioned?

14. _____Are suggestions for further research mentioned?

Note: this checklist is not suitable for theoretical or review papers but can be adapted to make it so. It would also be interesting to ask for an overall evaluation score (say out of 10) which could be related to the individual items.