

Proceedings of ISIC: the information behaviour conference, Zadar, Croatia, 20-23 September, 2016: Part 1.

Writing and reading the results: the reporting of research rigour tactics in information behaviour research as evident in the published proceedings of the biennial ISIC conferences, 1996 – 2014


Lynne (E.F.) McKechnie, Roger Chabot, Nicole Dalmer, Heidi Julien and Cass Mabbott


Abstract
Introduction. This study examined if and how information behaviour researchers include research rigour tactics in reports of their research projects.
Method. A content analysis was conducted of the 193 research reports published in the 1996 – 2014 ISIC proceedings.
Analysis. Articles were coded for author affiliation, rigour tactics reported, and whether or not enough information was presented to allow readers to assess the quality of the research and replicate the study. Both quantitative (frequencies) and qualitative (excerpts from the articles) data are reported.
Results. In total 698 research rigour tactics were reported for an average of 3.6 per paper, a median of 3 per paper and a range of 0 – 20 tactics across all papers. Twenty-six papers (13.5%) included no rigour tactics at all while 8 (4.1%) included ten or more. Only 76 (39.4%) provided enough information for readers to assess the quality of the study, with fewer (n=44; 22.8%) providing enough information to allow for replication of the study.
Conclusions. Neither quantitative nor qualitative empirical work is being reported in ISIC papers in ways that clearly demonstrate research rigour or assure replicability.


Introduction

In the introduction to their paper about the information needs of physicians at a hospital in Valencia, Spain, given at one of the earliest ISIC conferences (Sheffield, 1998), Abad-Garcia, Gonzalez-Teruel and Sanjuan-Nebot (1999) point out that, in addition to sharing the results of studies in order to improve information sharing, research reporting can also lead to ‘the recognition of problems in methodology which are revealed when studies which have been reported are analysed’ (p. 209). Problems in methodology point to limitations and problems with the findings of a study. The authors are talking about the concept of research rigour and its importance in all stages of the research process, including communicating about research studies.

According to Merriam-Webster, the concept of rigour refers to ‘the quality or state of being very exact, careful or strict’ (Rigor, n.d.). Similarly, Pickard (2013), in her research methods text for information professionals and scholars, provides a definition of rigour in her glossary of research terms: ‘[the] degree to which research methods are scrupulously and meticulously carried out in order to recognize important influences occurring in an experiment’ (p. 326). In his editorial statement ‘Rigor – the essence of scientific work’, published in the Electronic Journal of Biotechnology, Allende (2004) explores this concept, writing:

Rigor is many things. It is dissatisfaction with uncertainty, with inaccurate answers, with unprecise (sic) measurements, with the spread between the plus and the minus. Rigor is also being methodical commitment (sic) to experimental procedure, to the need of controlling all parameters that can affect the results of our tests . . . Rigor is in the essence of scientific work, in each one of the stages of the research work. Rigor implies a structured and controlled way of planning, developing, analyzing and evaluating our research and a special care in adapting the presentation of the results to the demands of the audience we communicate the results of our investigations. (n.p.)

One could argue that the emergence of information behaviour as a distinct, cohesive and important area of study was marked by the first ISIC conference in Tampere, Finland in 1996. As such it is still a relatively young area of study. We agree with Abad-Garcia, Gonzalez-Teruel and Sanjuan-Nebot (1999) that rigour is central to the conduct of information behaviour research. We found ourselves wondering if this viewpoint was shared by others and, more importantly, if it was evident in the published work of other information behaviour scholars.

Literature review

In a very short recent article in Nature, Pagan and Torgler (2015) remind readers that ‘4Rs’ are required to assess papers for research rigour: reproduction, replication (provision of enough information to allow a study to be reproduced), robustness (strong design), and revelation (communicating enough information to allow for accountability and transparency) (p. 34). In their library and information science research methods text, Williamson and Johanson (2013) note

reviewers and editors pay particular attention to the elaborate detailing of research methods and techniques with a tacit assumption that the adoption of ‘rigorous’ and ‘reliable’ research methods is a guarantor of the validity of research outcomes. (p. 115)

Researchers, whether qualitative or quantitative, agree that rigour in research methods is intended to guarantee that study results are valid, reliable, and trustworthy, and that published work can be replicated and evaluated.

In our review of the literature we were initially surprised to find substantially more material about rigour in qualitative, as compared to quantitative, research. However, it is likely this trend reflects the expansion of qualitative methodologies from disciplines such as anthropology and sociology to others including health sciences, education, and library and information science, a transition marked by the publication of Lincoln and Guba’s often cited landmark work Naturalistic Inquiry in 1985.

General overviews of rigour in quantitative research are most commonly found in research methods texts. For example, Babbie (2010), in The Practice of Social Research, identifies appropriate and strong research design, appropriate statistical analysis, appropriate sampling techniques, the ability to replicate the work, and disclosure of the limitations of a study as criteria which contribute to research rigour. Connaway and Powell (2010) present a similar list, organizing it under the broader categories of universality, replication, control, and measurement. Journal articles and reports dealing with rigour in quantitative research often focus on good practices within particular disciplines, such as the Society for Neuroscience’s (n.d.) ‘Research practices for scientific rigor: a resource for discussion, training and practice’, Claydon’s (2014) ‘Rigour in quantitative research’, which focuses on the particular requirements of nursing, and the Federation of American Societies for Experimental Biology’s (2016) ‘Enhancing research reproducibility’.

For qualitative work, Lincoln and Guba (1985) provide the fundamental description of trustworthiness, based on earlier work by Guba (1981), who focuses on credibility, dependability, confirmability, transferability, and authenticity as markers of rigour in qualitative work. According to Krefting (1991), there are multiple tactics for achieving trustworthiness in qualitative work. To achieve credibility, a researcher should attend to prolonged field experience, time sampling, reflexivity, triangulation, member checking, peer examination, interview technique, researcher credibility, structural coherence, and referential adequacy. To achieve transferability, scholars may use nominated samples, compare their samples to demographic data, time sample, and use dense description. To achieve dependability, researchers can apply a dependability audit, dense description of research methods, stepwise replication, triangulation, peer examination, and code-recode procedures. Finally, to achieve confirmability, researchers can use a confirmability audit, triangulation, and reflexivity.

Houghton, Casey, Shaw and Murphy (2013) specifically point to techniques such as prolonged engagement, persistent observation, triangulation, peer debriefing, member checking, audit trails, reflexivity, and thick description as methods for achieving trustworthiness. Shenton (2004), writing in the library and information science context, provides a long and useful list of specific actions and tests for qualitative researchers to apply in a quest for trustworthiness. Many others have written about techniques to ensure trustworthiness, such as Cope (2014) and Long and Johnson (2000). Freese (2007), Kingsley and Chapman (2013), and Funder et al. (2014) emphasize transparency of data and data collection methods. Davies and Dodd (2002) point to ethical practices as being central to rigour: rigour in qualitative work, they note, demands consideration of 'attentiveness, empathy, carefulness, sensitivity, respect, honesty, reflection, conscientiousness, engagement, awareness, openness, and context.' Many of these ethical stances point to reflexivity, which is the focus of significant writing about qualitative work (Finlay, 2006). Hall and Callery (2001), for example, focus on reflexivity and relationality with interview participants when doing grounded theory work, and Jootun and McGhee (2009) offer very specific techniques for promoting reflexivity.

Porter (2007) suggests that qualitative researchers follow seven principles, each with an associated question: transparency (is the process of knowledge generation open to outside scrutiny?); accuracy (are the claims made based on relevant and appropriate information?); purposivity (are the methods used fit for purpose?); utility (are the knowledge claims appropriate to the needs of the practitioner?); propriety (has the research been conducted ethically and legally?); accessibility (is the research presented in a style that is accessible to the practitioner?); and specificity (does the knowledge generated reach source-specific standards?).

While there is a significant body of literature describing research rigour and how to achieve it, we were unable to identify any studies, either in general or specifically within information behaviour research, that systematically examined the reporting of research rigour tactics in published reports of empirical research.

Research questions

To address the gap identified in the literature we explored the following research questions:

  1. Do information behaviour researchers report methodological tactics used in their research projects so that other researchers have enough information to replicate the studies? What do they report? What do they not report? And if they report rigour tactics, where do they report them in their papers?
  2. Do information behaviour researchers share enough information so that readers can assess rigour and the validity/trustworthiness of findings? If so, how is this done?

Method

To address our research questions, we conducted a content analysis (Krippendorff, 2013) of the 193 full papers reporting empirical research studies found in the published proceedings of the biennial ISIC conferences from 1996 to 2014. While human information behaviour researchers communicate their work through a variety of publications and conferences, ISIC was chosen as one of the major international venues for the dissemination of human information behaviour research. While this may be regarded as a limitation of the study, it resulted in a sample which made data collection and analysis feasible and which, we argue, constitutes a reasonable representation of information behaviour research.

The ISIC proceedings include a variety of types of papers, such as theory papers, method papers and reports of empirical research studies. Some years of the proceedings include both full and short papers. We did not analyse the short papers as the length restriction is likely to have limited the discussion of research rigour. Our first step was to look carefully at the full papers to identify the reports of empirical research, the corpus examined in this study.

Our next step was to compile a list of the tactics used by researchers to ensure rigour in the design, implementation and reporting of research studies. These were derived from prominent research methods texts, chosen with an emphasis on works commonly used for both teaching and research within library and information science, including Babbie (2010), Connaway and Powell (2010), Leedy and Ormrod (2013), Lincoln and Guba (1985), Miles, Huberman and Saldana (2014), Pickard (2013), Wildemuth (2009), and Williamson and Johanson (2013).

To name individual research rigour tactics we adopted the language used by the authors of the texts consulted. Individual research rigour tactics were also identified as being primarily used in quantitative studies (designated by *), primarily used in qualitative studies (**) or used in mixed methods (both) approaches to research. During the coding process we did not make an assessment of whether or not a research rigour tactic was used appropriately but did look for evidence of implementation of the tactic before coding for the presence of that tactic. The following research rigour tactics, grouped by stages in the research process, were identified and provided a start list for our coding of the papers:

Design: triangulation; pilot tests

Sampling: theoretical sampling**; sampling to saturation**; sampling strategy*; sampling frame; sample size; response rate; characteristics of respondents

Data collection: prolonged engagement**; audit trail**; reflexive journal**; replication of the study

Data analysis: member checking**; peer debriefing**; inter-coder reliability; intra-coder reliability; negative case analysis; statistical analysis*

Research reporting: thick description**; limitations of the study; non/negative findings; data collection instruments

Other: additional tactics reported only occasionally, such as building rapport with study participants, practicing emergent design and analysis, thoroughly training project staff, and using standard tests and instruments which had worked well in other studies

Papers were also coded for the disciplinary affiliation and country of the first author, the types of research methods used in the study, the theories used, whether or not research rigour tactics were explicitly reported, whether or not enough information was provided to allow readers to assess the rigour of the study and/or replicate it, where research rigour tactics were reported in the article (title, abstract, introduction/literature review, method, results, discussion/conclusion), and the presence of citations to support the choice of research rigour tactics. During the coding process we kept reflective notes to document theoretical insights and kept track of particularly cogent examples related to authors’ use (and non-use) of research rigour tactics.
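As an illustration only, and not the authors' actual instrument, the variables described above could be captured in a per-paper coding record along the following lines (all field names here are our own invention):

```python
# Illustrative per-paper coding record; field names are our own and simply
# collect the variables described in the paragraph above.
from dataclasses import dataclass, field

@dataclass
class PaperCoding:
    first_author_affiliation: str                              # e.g., "library and information science"
    first_author_country: str                                  # e.g., "Finland"
    methods: list[str] = field(default_factory=list)           # e.g., ["interviews", "diaries"]
    theories: list[str] = field(default_factory=list)          # theoretical approaches named, if any
    rigour_tactics: list[str] = field(default_factory=list)    # start-list tactics evident in the paper
    tactic_locations: set[str] = field(default_factory=set)    # title, abstract, method, results, ...
    cites_sources_for_tactics: bool = False                    # citations supporting tactic choices?
    enough_to_assess_rigour: bool = False                      # can a reader judge the study's quality?
    enough_to_replicate: bool = False                          # could another researcher repeat the study?
```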

In order to provide a preliminary assessment of the validity of the coding instrument, all authors coded a small sample of three randomly chosen papers as a basic test of inter-coder reliability. The unit of analysis was the individual decision point; in other words, every decision made (e.g., were research rigour tactics explicitly reported or not?) constituted an instance of the unit of analysis. Our coding scheme included 54 decision points. Frequencies and percentages were calculated for the number of agreements. While percentage agreement has recognized limitations as a reliability measure, research indicates it is the most commonly used measure (69% of studies) (Lombard, Snyder-Duch, and Bracken, 2010), and for our purpose of getting a rough sense of the validity of our instrument it worked reasonably well. The rate of agreement was 90.0%, suggesting that the coding scheme was likely valid and reliable.
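To make the agreement calculation concrete, the following minimal sketch (our own illustration, not part of the study) computes percentage agreement over a set of shared decision points; the coder judgements shown are invented for the example:

```python
# Illustrative sketch: percentage agreement across coders over shared decision points.
# Each coder's judgements form one list of booleans, one entry per decision point
# (e.g., "were research rigour tactics explicitly reported?").

def percentage_agreement(codings: list[list[bool]]) -> float:
    """Proportion of decision points on which every coder made the same judgement."""
    n_points = len(codings[0])
    assert all(len(c) == n_points for c in codings), "all coders must judge the same points"
    unanimous = sum(1 for point in zip(*codings) if len(set(point)) == 1)
    return unanimous / n_points

# Invented example: five coders, ten decision points, one disagreement -> 0.9 (90%)
judgements = [
    [True, True, False, True, True, False, True, True, True, False],
    [True, True, False, True, True, False, True, True, True, False],
    [True, True, False, True, True, False, True, True, True, False],
    [True, True, False, True, True, False, True, True, True, False],
    [True, True, False, True, True, False, True, True, False, False],
]
print(percentage_agreement(judgements))  # 0.9
```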

The data were analysed quantitatively, with frequencies and percentages calculated for all categories. Although our sample size was relatively large (n=193), the data did not meet the requirement of expected frequencies of five or more in each cell, so chi-square tests could not be used to look for relationships between variables. We also looked for themes and examples apparent in the data as we read through the articles.
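As an aside, the expected-frequency criterion referred to above is easy to check directly. The sketch below (assuming SciPy is available, and using the audit trail counts from Table 1 purely as an illustration) shows the kind of check that rules out a chi-square test:

```python
# Illustrative check of the chi-square expected-frequency assumption: the test is
# generally considered unreliable when any expected cell count falls below five.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: quantitative, qualitative, mixed methods papers; columns: audit trail
# reported vs. not reported (counts taken from Table 1 for illustration).
observed = np.array([
    [0, 32],
    [7, 98],
    [1, 51],
])

chi2, p, dof, expected = chi2_contingency(observed)
if (expected < 5).any():
    print("Some expected frequencies are below 5; a chi-square test is not appropriate.")
else:
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```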

Results

Characteristics of the papers analysed

While the affiliations and geographic locations of authors were not reported consistently across all the ISIC proceedings (in particular this was done differently for the first two conferences in 1996 and 1998), we did code for these variables to give some sense of the background of the authors of the reports of empirical research analysed for this study. The majority of the 165 first authors with a reported affiliation to a particular discipline or organization were associated with library and information science programs (n=140; 84.8%), 21 (12.7%) with other academic programs, two (1.2%) with research centres, one (0.6%) with professional practice and one (0.6%) with another context. Of the 176 first authors for whom a geographic location was reported, 57 (32.4%) were from North America, 45 (25.6%) from Scandinavia, 36 (20.5%) from another European country, 15 (8.5%) from Australia and New Zealand, 13 (7.4%) from Asia, six (3.4%) from the Middle East and two (1.1%) from each of Africa and South America.

 Researchers used a variety of methods in their studies. Quantitative methods were used in 32 (16.6%) studies, qualitative methods in 105 (54.4%) studies and a combination of quantitative and qualitative methods in 52 (26.9%). Four (2.1%) of the 193 reports of empirical research did not provide enough information to classify them as quantitative, qualitative or both in research method approach. The most frequently used method was interviewing (n=118; 61.1%), followed by surveys (n=62; 32.1%), observation (n=50; 25.9%), diaries (n=15; 7.8%), think-aloud protocols (n=11; 5.7%), focus group interviews (n=9; 4.7%), transaction log analysis (n=9; 4.7%), social network analysis (n=7; 3.6%), experiments (n=6; 3.1%), application of standard tests (n=6; 3.1%), and bibliometric analysis (n=3; 1.6%). Other methods (including approaches such as the critical incident technique, case study, conversational analysis, eye tracking and secondary data analysis) were used a total of 32 (16.6%) times. Eighty-two (42.5%) of the papers listed one or more theoretical approaches which were used to design the studies and/or to interpret the results.

The reporting of research rigour tactics

Authors reported research rigour tactics in a variety of places in their papers. Not surprisingly, this occurred most frequently in the methods section (n=140 papers; 72.5%), but tactics also appeared in the results (n=17; 8.8%), the discussion/conclusion (n=39; 20.2%), the introduction and literature review (n=17; 8.8%) and even in the titles of two (1.0%) papers. A surprising 43 (22.3%) of the papers included one or more research rigour tactics in the very brief space allocated to the abstract; in almost all cases this was a statement of sample size or of methodological triangulation.

Across the 193 papers, research rigour tactics were reported 698 times, for an average of 3.6 and a median of 3 per paper. The number of tactics reported ranged from none (n=26 papers; 13.5%) to a high of 20 (n=1; 0.5%), with only 8 (4.1%) papers describing 10 or more. Seventy-nine papers (40.9%) included one to three rigour tactics, 58 (30.1%) four to six, and 22 (11.4%) seven to nine. Frequencies for individual tactics are summarized in Table 1, where * identifies primarily quantitative research rigour tactics, ** identifies primarily qualitative tactics, and unmarked tactics are used in both qualitative and quantitative approaches. The four papers which could not be classified as quantitative, qualitative or mixed methods (both) are not represented in a separate column but are included in the total number of papers and in calculations which show frequencies and percentages in relation to the total number of papers.

Table 1: Research rigour tactics reported, by research approach. Cells show the number of papers reporting a tactic (% of papers taking that approach); category totals and the final row show the number of tactic reports (% of the 698 tactics reported).

Research rigour tactic reported | Quantitative (n=32; 16.6%) | Qualitative (n=105; 54.4%) | Both (n=52; 26.9%) | Total (n=193; 100.0%)
Design tactics | | | |
Triangulation | 3 (9.4%) | 30 (28.6%) | 21 (41.2%) | 54 (28.0%)
Pilot tests | 7 (21.9%) | 11 (10.5%) | 9 (17.7%) | 27 (14.0%)
Total, design tactics | | | | 81 (11.6%)
Sampling tactics | | | |
Theoretical sampling ** | 4 (12.5%) | 30 (28.6%) | 13 (25.5%) | 47 (24.4%)
Sampling to saturation ** | 1 (3.1%) | 4 (3.8%) | 4 (7.8%) | 9 (4.7%)
Sampling strategy * | 14 (43.8%) | 16 (15.2%) | 9 (17.7%) | 39 (20.2%)
Sampling frame | 11 (34.4%) | 13 (12.4%) | 6 (11.8%) | 30 (15.5%)
Sample size | 20 (62.5%) | 56 (53.3%) | 24 (47.1%) | 100 (51.8%)
Response rate | 10 (31.3%) | 3 (2.9%) | 5 (9.8%) | 18 (9.3%)
Characteristics of respondents | 15 (46.9%) | 44 (41.9%) | 18 (35.3%) | 77 (39.9%)
Total, sampling tactics | | | | 320 (45.8%)
Data collection tactics | | | |
Prolonged engagement ** | 2 (6.3%) | 21 (20.0%) | 3 (5.9%) | 26 (13.5%)
Audit trail ** | 0 | 7 (6.7%) | 1 (2.0%) | 8 (4.1%)
Reflexive journal ** | 0 | 7 (6.7%) | 2 (3.9%) | 9 (4.7%)
Replication of the study | 1 (3.1%) | 0 | 0 | 1 (0.5%)
Total, data collection tactics | | | | 44 (6.3%)
Data analysis tactics | | | |
Member checking ** | 0 | 8 (7.6%) | 2 (3.9%) | 10 (5.2%)
Peer debriefing ** | 0 | 6 (5.7%) | 1 (2.0%) | 7 (3.6%)
Inter-coder reliability | 2 (6.3%) | 8 (7.6%) | 4 (7.8%) | 14 (7.3%)
Intra-coder reliability | 0 | 3 (2.9%) | 1 (2.0%) | 4 (2.1%)
Negative case analysis | 0 | 4 (3.8%) | 2 (3.9%) | 6 (3.1%)
Statistical analysis * | 20 (62.5%) | 5 (4.8%) | 17 (33.3%) | 42 (21.8%)
Total, data analysis tactics | | | | 83 (11.9%)
Research reporting tactics | | | |
Thick description ** | 4 (12.5%) | 35 (33.3%) | 14 (27.5%) | 53 (27.5%)
Limitations of the study | 14 (43.8%) | 33 (31.4%) | 18 (35.3%) | 65 (33.7%)
Non/negative findings | 1 (3.1%) | 2 (1.9%) | 3 (5.9%) | 6 (3.1%)
Data collection instruments | 9 (28.1%) | 17 (16.2%) | 8 (15.7%) | 34 (17.6%)
Total, research reporting tactics | | | | 158 (22.6%)
Other tactics | 4 (12.5%) | 4 (3.8%) | 4 (7.8%) | 12 (1.7%)
Total, all tactics reported | 142 (20.3%) | 367 (52.6%) | 189 (27.1%) | 698 (100.0%)

Sampling tactics (n=320; 45.8% of the total number of tactics) were the most frequently reported, with sample size included in 100 (51.8%) of the papers and the characteristics of respondents in 77 (39.9%). Research reporting tactics constituted the next largest group, with 158 incidents (22.6% of the total); noting the limitations of the study (n=65; 33.7% of papers) and using thick description (n=53; 27.5% of papers) were the two most prominent tactics in this group. Interestingly, many of these frequently reported tactics are foundational to quantitative research; their higher incidence may simply reflect authors' greater familiarity with the discourses of quantitative research writing. Data analysis tactics (n=83; 11.9%), design tactics (n=81; 11.6%) and data collection tactics (n=44; 6.3%) accounted for most of the remainder of research rigour tactics reported. The other tactics, while quite important for research rigour, were each, surprisingly, addressed in no more than one or two papers; they included building rapport with study participants, practicing emergent design and analysis, thoroughly training project staff, and using standard tests and instruments which had worked well in other studies.
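The category shares quoted in this paragraph follow directly from the Total column of Table 1; as a small sketch of the arithmetic (our own illustration):

```python
# Category totals taken from the Total column of Table 1; percentages are shares
# of the 698 rigour tactics reported across all 193 papers.
category_totals = {
    "Sampling": 320,
    "Research reporting": 158,
    "Data analysis": 83,
    "Design": 81,
    "Data collection": 44,
    "Other": 12,
}
total_tactics = sum(category_totals.values())   # 698
for category, count in category_totals.items():
    print(f"{category}: {count} ({count / total_tactics:.1%})")
# Sampling: 320 (45.8%), Research reporting: 158 (22.6%), Data analysis: 83 (11.9%), ...
```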

When considered individually, quantitative papers reported more research rigour tactics (an average of 4.4 per paper) than qualitative papers (3.5 per paper) and mixed methods papers (3.6 per paper). As noted in the literature review, this may be because quantitative research methods have a longer history of use within library and information science, with better defined practices which are more generally understood. These differences, however, are not large. Papers in each research approach reported an overall share of tactics close to their share of papers: quantitative papers comprised 16.6% of the sample and reported 20.3% of tactics; qualitative papers (54.4%), 52.6% of tactics; and papers employing both qualitative and quantitative approaches (26.9%), 27.1% of tactics.

Examples of research rigour tactics

We encountered many particularly fine examples of research rigour writing. The following, chosen from the eight studies reporting ten or more rigour tactics, are representative of what we found:

Credibility and truth value were enhanced by the extended period the researcher spent in the field. Persistent observation over an extended period avoids any tendency to come to conclusions prematurely. Over a year was spent in the field conducting participant observations and interviews, and analyzing the data took another year’s time. (Hersberger, 2001, p. 127)

[Prolonged engagement; thick description]

Analytical rigor was enhanced through robust sample size and triangulation of interview approaches (narrative and elicitation) and data sources (women and health professionals). The following measures were taken to strengthen credibility: identification and exploration of negative incidents; discussion and debriefing with peers; and careful documentation at all research stages. Rich description of findings and theoretical implications facilitates transferability. (Genuis, 2015)

[Sample size; triangulation of methods and participants; negative case analysis; peer debriefing; keeping an audit trail; thick description]

Data collection used in-depth, semi-structured interviews. An interview guide (Foster, 2005 and included in Appendix A), provided an agenda for open-ended questioning. . . . By far the most important method for credibility (Lincoln and Guba, 1985) was the use of member checking; that is, the process of using participants to review the researcher's recording and interpretation of their contribution. In the study reported here member checking was utilised in four ways: (1) at the pilot stage to develop initial questions, (2) as feedback and checking on examples and themes arising throughout interviews, (3) in post-interview discussions, (4) a sample of participants reviewed full transcripts of their own interview, reviewed the general findings, and were introduced to the model, and given opportunity to discuss its relevance as a reflection of their information behaviour. The study makes no claim for generalisability, as befits naturalistic inquiry, but ensures transferability and further development of the research themes by rich description and reporting of the research process (Lincoln and Guba, 1985; Sanjek, 1990). Dependability and confirmability were addressed through research notes, which recorded coding decisions, developing themes and interpretations, and emergent theory. (Foster, 2005)

[Including data collection instruments; member checking; noting limitations of the study; thick description; keeping a reflexive journal; citations are included to support and describe rigour tactics]

There were also many particularly poor examples, including the following. The author(s) has/have not been listed; our intent was to explore research rigour reporting practices across information behaviour researchers as a group, not to criticize the work of individuals.

This action research project occurred in two phases. The first phase involved an appreciative inquiry process resulting in an organizational realignment of personnel and the introduction of shared leadership. The second phase involved the co-design of organizational information and communication systems and subsequent implementation of initiatives.

The number of groups and participants in this study was never identified and data collection and data analysis were only described in very general terms. Other aspects related to study design, data collection and data analysis were also only presented very generally, providing no basis for assessing rigour.

Even when a paper seemed to do a relatively good job, one or more key pieces of information related to rigour were often missing. For example, Genuis's (2015) study does a good job of describing the method used ("semi-structured interviews"), the sample size ("n=28") and the characteristics of the participants ("women who had been or were engaged in information gathering and/or decision-making related to menopause management"), and uses thick description, quoting from a methods text to support this choice: "The goal of the sampling was to gather rich qualitative data 'so that others outside the sample [but in the target population] might have a chance to connect to the experiences of those in it' (Seidel, 1998, pp. 47-48)". However, Genuis does not provide a copy of her interview guide, nor does she identify the sampling frames from which her participants were drawn.

Other results

Authors of 34 papers (17.6%) showed a serious commitment to reporting the research rigour tactics used in their work by providing citations to support their use of particular tactics.

As indicated at the beginning of this paper, one of the most important reasons for writing about rigour in research is to allow the reader to assess the quality of the research project and its results. We asked two more holistic questions of each paper to assess how effectively information behaviour scholars do this. We found that 44 papers (22.8%) provided enough information so that a study could be replicated (i.e., described exactly what was done during data collection and analysis so that another researcher could repeat the study to explore whether or not the results were reliable); unfortunately 149 (77.2%) did not. While more authors (n=76; 39.4%) provided enough information to assess the quality of a study, many more (n=117; 60.6%) did not.

Finally, although we could not statistically test for a relationship, research rigour reporting practices did not appear to vary over time.

Conclusions

Overall, the findings of this study were disappointing. Neither quantitative nor qualitative empirical work is being reported in ISIC papers in ways that clearly demonstrate research rigour or assure replicability. On the whole these authors are not poorly trained, nor is their work of poor quality; however, their reporting of it is unsatisfactory. It could be that authors do not view the ISIC conference as a venue where methods need to be reported in detail. Perhaps these oversights would be rectified if the conference papers were subject to a second review prior to publication. However, it is surprising that the initial review for the conference fails to ensure more consistently careful reporting of methodological rigour. In order for information behaviour scholarship to receive the respect and have the impact it deserves, researchers in the field must practice the basic literacies, the ABCs, of research reporting, writing their papers so that readers can fully make sense of the work.

About the authors

Lynne McKechnie is Professor in the Faculty of Information and Media Studies, The University of Western Ontario, London, Ontario, Canada. She received her PhD from the University of Western Ontario. Her research and teaching interests include the intersection of children, reading and public libraries; information behaviour; and research methods. She may be contacted at mckechnie@uwo.ca.
Heidi Julien is Professor and Chair in the Department of Library and Information Studies at the State University of New York at Buffalo. She received her PhD from the University of Western Ontario. Her research and teaching interests include information behaviour, digital literacy, and research methods. She may be contacted at heidijul@buffalo.edu.
Roger Chabot is a PhD candidate in Library and Information Science in the Faculty of Information and Media Studies at The University of Western Ontario, London, Canada. His research interests include religious information behaviour and the application of Buddhist philosophy and psychology to topics in LIS. He may be contacted at rchabot2@uwo.ca.
Nicole Dalmer is a doctoral candidate in the Faculty of Information and Media Studies at The University of Western Ontario in London, Ontario, Canada. She received her MLIS from the University of Alberta. Her research and teaching interests include intersections of family care work and information work within aging in place policy, and the development of public library services for aging populations. She may be contacted at ndalmer@uwo.ca.
Cass Mabbott is a doctoral student at the Graduate School of Library and Information Science, University of Illinois, Urbana-Champaign, USA. Her research and teaching interests include social justice and youth in public libraries, information behavior of young children, and the history of children’s literature. She currently is a graduate assistant working with The Comic Book Readership Archive, with Dr. Carol Tilley. She may be contacted at mmabbot2@illinois.edu.

References
  • Abad-Garcia, M., Gonzalez-Teruel, A. & Sanjuan-Nebot, L.  (1999). Information needs of physicians at the University Clinic Hospital in Valencia, Spain. In Wilson, T.D. & Allen, D.K. (Eds.), Exploring the contexts of information behaviour. Proceedings of the Second International Conference on Information Seeking, Needs and Use in Different Contexts (pp.209-225). London: Taylor Graham.
  • Allende, J.L. (2004). Rigor – the essence of scientific work. Electronic Journal of Biotechnology, 7(1). Retrieved from http://www.ejbiotechnology.info/index.php/ejbiotechnology/article/view/1112/1494
  • Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Thomson.
  • Claydon, L.S. (2014). Rigour in quantitative research. Nursing Standard, 29(47), 43-48.
  • Connaway, L.S. & Powell, R.R. (2010). Basic research methods for librarians (5th ed). Santa Barbara, CA: Libraries Unlimited.
  • Cope, D.G. (2014). Methods and meanings: credibility and trustworthiness of qualitative research. Oncology Nursing Forum, 41(1), 89-90.
  • Davies, D. & Dodd, J. (2002). Qualitative research and the question of rigor. Qualitative Health Research, 12(2), 279-289.
  • Federation of American Societies for Experimental Biology. (2016). Enhancing research reproducibility. Retrieved from https://www.faseb.org/Portals/2/PDFs/opa/2016/FASEB_Enhancing%20Research%20Reproducibility.pdf
  • Finlay, L. (2006). ‘Rigour’, ‘ethical integrity’ or ‘artistry’? Reflexively reviewing criteria for evaluating qualitative research. British Journal of Occupational Therapy, 69(7), 319-326.
  • Foster, A. (2005). A non-linear model of information seeking behaviour. Information Research, 10(2).
  • Freese, J. (2007) Replication standards for quantitative social science. Why not sociology? Sociological Methods & Research, 36(2), 153-172.
  • Funder, D. C., Levine, J. M., Mackie, D. M., Morf, C. C., Sansone, C., Vazire, S. & West, S. G. (2014). Improving the dependability of research in personality and social psychology: recommendations for research and educational practice. Personality and Social Psychology Review, 18(1), 3-12.
  • Genuis, S.K. (2015). ‘The transfer of information is powerful’: interpersonal information interactions. Information Research, 20(1). Retrieved from http://www.informationr.net/ir/20-1/isic2/isic29.html#.WCSYuC0rLIU
  • Guba, E.G. (1981). Annual review paper: criteria for assessing the trustworthiness of naturalistic inquiries. Educational Communication and Technology, 29(2), 75-91.
  • Hall, W.A. & Callery, P. (2001). Enhancing the rigor of grounded theory: incorporating reflexivity and relationality. Qualitative Health Research, 11(2), 257-272.
  • Hersberger, J. (2001). Everyday information needs and information sources of homeless parents. The New Review of Information Behaviour Research, 2, 119-134.
  • Houghton, C., Casey, D., Shaw, D. & Murphy, K. (2013). Rigour in qualitative case-study research. Nurse Researcher, 20(4), 12-17.
  • Jootun, D. & McGhee, G. (2009) Reflexivity: promoting rigour in qualitative research. Nursing Standard, 23(23), 42-46.
  • Kingsley, B. C. & Chapman, S. A. (2013). Questioning the meaningfulness of rigour in community-based research: navigating a dilemma. International Journal of Qualitative Methods, 12, 551-569.
  • Krefting, L. (1991). Rigor in qualitative research: the assessment of trustworthiness. The American Journal of Occupational Therapy, 45(3), 214-222.
  • Krippendorff, K. (2013). Content analysis: an introduction to its methodology. (3rd ed.). Los Angeles, CA: Sage Publications.
  • Leedy, P.D. & Ormrod, J.E. (2013). Practical research: planning and design (10th ed.). Boston, MA: Pearson.
  • Lincoln, Y.S. & Guba, E.G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.
  • Lombard, M., Snyder-Duch, J. & Bracken, C.C. (2010). Practical resources for assessing and reporting intercoder reliability in content analysis research projects. Retrieved from http://matthewlombard.com/reliability/
  • Long, T. & Johnson, M. (2000). Rigour, reliability and validity in qualitative research. Clinical Effectiveness in Nursing, 4(1), 30-37.
  • Miles, M.B., Huberman, A.M. & Saldana, J. (2014). Qualitative data analysis: a methods sourcebook (3rd ed.). Thousand Oaks, CA: Sage.
  • Pagan, A. & Torgler, B. (2015). Research rigour: use ‘4Rs’ criteria to assess papers. Nature, 522, 34.
  • Pickard, A. (2013). Research methods in information (2nd ed.). Chicago, IL: Neal-Schuman.
  • Porter, S. (2007). Validity, trustworthiness and rigour: reasserting realism in qualitative research. Journal of Advanced Nursing, 60(1), 79-86.
  • Rigor. (n.d.). In Merriam-Webster. Retrieved from http://www.merriam-webster.com/dictionary/rigor
  • Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22(2), 63-75.
  • Society for Neuroscience. (n.d.) Research practices for scientific rigor: a resource for discussion, training, and practice. Retrieved from http://www.sfn.org/Advocacy/Policy-Positions/Research-Practices-for-Scientific-Rigor
  • Wildemuth, B.M. (2009). Applications of social research methods to questions in information and library science. Westport, CT: Libraries Unlimited.
  • Williamson, K., & Johanson, G. (Eds.). (2013). Research methods: information systems and contexts. Prahran, VIC: Tide Publishing and Distribution.
How to cite this paper

McKechnie, L., Chabot, R., Dalmer, N., Julien, H. & Mabbott, C. (2016). Writing and reading the results: the reporting of research rigour tactics in information behaviour research as evident in the published proceedings of the biennial ISIC conferences, 1996 – 2014. In Proceedings of ISIC, the Information Behaviour Conference, Zadar, Croatia, 20-23 September, 2016: Part 1. Information Research, 21(4), paper isic1604. Retrieved from http://InformationR.net/ir/21-4/isic/isic1604.html (Archived by WebCite® at http://www.webcitation.org/6mHhdW5Rc)
