Published quarterly by the University of Borås, Sweden

Vol. 24 No. 4, December 2019



Proceedings of the Tenth International Conference on Conceptions of Library and Information Science, Ljubljana, Slovenia, June 16-19, 2019


The fragmentation of facts and infrastructural meaning-making: new demands on information literacy

Jutta Haider and Olof Sundin


Introduction. This paper presents a theory-driven discussion on the role of facts in society, couched between a brief historical overview and a discussion of the contemporary situation, exemplified in particular by openly available web-based fact services. Implications for the conceptualisation of information literacy – and in particular information literacy in relation to today’s dominant algorithmic information infrastructure – are considered throughout.
Method. This is a conceptual paper in which theoretical reasoning is accompanied by examples from a small body of empirical material. This material consists of the use and observation of three web-based fact services as well as expert interviews with three producers and one user of one of the services. In particular, Hannah Arendt’s essay “Truth and politics” is drawn on to contextualise and understand the role of facts in society.
Results. The web-based fact services investigated here facilitate and describe the creation of facts based on open data in a rather traditional way, i.e. by providing references and pointing to sources. However, the established facts are then inserted into today’s networked information landscape, which is an arena for competing knowledge claims working according to the market’s principles of popularity, and this leads to conflicting situations and poses new demands on information literacy.
Conclusions. This paper suggests the need for a view of information literacy that accounts for infrastructural meaning-making at the same time as it enables the political dimensions of the way in which facts and factual information are created and valued in contemporary society to be taken seriously.

Introduction

In contemporary society, almost everything can be quantified, measured, and visualised with user-friendly software. At the same time, almost everything can also be disproved, questioned, manufactured, and emotionalised, even, or probably especially, factual information. This contrariness has implications for how we think of facts as well as for the roles that facts are assigned in the public debate, from media reporting, to exchanges in social media, to the use of memes. And of course this has implications for how we conceptualise information literacy and in particular information literacy in relation to the information infrastructure our lives revolve around. Much can be said about this situation, but in this paper we focus on the intersection of two aspects of this extremely broad and growing field, namely a type of service for the production of facts on the web and the role of facts in society. For this purpose, we bring together different bodies of theoretical and historical considerations and bounce them off empirical examples from web-based fact services. Thus, in this conceptual paper, we examine how the construction of facts is facilitated by openly available web-based fact services and link this to a theory-driven discussion of the relationship between facts and opinions. We further relate this discussion to a consideration of some current challenges for information literacy.

For this, we draw to a large extent on the classic work of Hannah Arendt exploring the relationship between truth, facts, and politics. Specifically, Arendt’s essay “Truth and politics” from 1967 – in which she developed her thinking about different types of factual knowledge, their emergence, and their use and shaping in relation to power and politics – helps us to shed light on the current situation, a situation that, in contrast to the society Arendt knew, is characterised by the algorithmically fuelled circulation of misinformation. We conclude this paper by discussing the concept of infrastructural meaning-making (Haider and Sundin, 2019) as a way to frame the specific materiality of facts online, and we relate this concept to some of the challenges that information literacy faces today. However, we have no intention of providing stable, mutually exclusive definitions, nor of discussing the philosophical nature of concepts such as facts, truth, or data. Rather, we are primarily interested in how these concepts are referred to, used, and put into action in various realms of contemporary discourse. Whether a fact is a true fact, or what the difference between data and facts is, therefore pertains to questions beyond the scope of this article.

In library and information science, the role of facts in society has been investigated in information literacy research, particularly in relation to information-seeking in schools (see Gärdén et al., 2014 for an overview). This research has shown how pupils often tend to formulate information-seeking in school-related tasks as fact-searching (Todd, 2006) and how they distinguish between facts and opinions (Francke, Sundin and Limberg, 2011). This preference for fact-searching has been described as the outcome of an educational tradition that often emphasises the finding of correct answers to knowledge questions (Limberg, 2007). The way in which general-purpose web search engines tend to translate more complex questions into simple ones has been shown to further this tendency (Sundin et al., 2017). It appears as if look-up searches seeking factual answers to direct questions dominate web-searching in school settings (Rieh et al., 2016). In this paper, we turn the tables and investigate not how facts are searched for, but how they are presented, using examples of services that are often called fact providers, and we relate this to a broader discussion of facts in society and of the construction of facts through the specific affordances of information systems. This paper is a contribution to a small but notable body of work in library and information science that studies dedicated digital information systems from perspectives foregrounding sociomaterial practices. These studies emphasise the need for awareness of the role of infrastructural arrangements for information literacy (e.g. Bruce, 1997; Johansson, 2012; Tuominen, Savolainen and Talja, 2005). The paper adds to this rare but highly interesting tradition in work on information literacy and continues the exploration of information infrastructures that the authors set out on in the book Invisible Search and Online Search Engines (Haider and Sundin, 2019).

A fundamental distinction regarding the purpose of different information systems is becoming increasingly blurred, namely the distinction between searching for facts and searching for documents in which facts can be found (noted already by Vickery, 1961). An early attempt at factualising information by challenging this distinction can be found in Paul Otlet’s brainchild, the ‘monographical principle’. Otlet envisaged that this principle would supplant traditional publishing in favour of the classification of facts themselves (Otlet, 1903 in Rayward, 2009, p. 15). Most notably today, Google has moved in this direction. Specifically, it has done so with the Knowledge Graph, which unites open data sources in dedicated info-boxes that are generated in direct response to a search. The search engine’s featured snippet function works to the same effect. In the case of Google, this points to a far-reaching trend away from providing users with links to resources and pointers to documents, as web search engines and bibliographical databases have traditionally done, towards providing users with the fact itself or even with the tools to create their own facts. A special variety of fact services is particularly interesting in this context. These are web-based services that bring together different open data sources in a common interface. In this way they facilitate the creation of factual information by enabling users to combine variables from different datasets. Fact services, such as Knoema (https://knoema.com/), Factlab (https://factlab.com/), and Gapminder (https://www.gapminder.org/), re-package and harmonise open data and make it accessible and combinable in easy-to-use and – importantly – graphically appealing ways ideal for sharing or using in presentations.

Establishing facts

The notion of fact, as we are familiar with it today, is intimately connected to the scientific revolution, and it was established in the second half of the 17th century together with a number of related concepts – most notably the concepts of evidence, theories, and hypotheses (Wootton, 2015). Facts have to be established; they do not simply exist. This is the case for demographic information as much as for social science data, historical events, and even scientific facts (Davies, 2018). David Wootton (2015, p. 255f) reminds us that even things we have come to consider a ‘brute fact’, like the height of a mountain, are established, even if we rarely reflect on how that happened. Mt Everest, for instance, was named – in English – in 1865, and Wootton argues: ‘Finding and sharing facts about Everest required a naming process, a measuring process, a mapping process. Everest was there before 1865, but there were no facts about Everest before 1865’ (Wootton, 2015, p. 260). The standardisation of the metre, the convention that heights are always counted from sea level, and the naming of the mountain are all part of the fact-producing machinery. Likewise, in order for us to be able to compare average life expectancy across populations, we first have to agree on a standard for registering births. Although facts like these now seem instinctive, at one point even they needed to be established. Wootton (2015, p. 257) writes: ‘[t]he social and technical process by which we establish facts becomes invisible to us because we naturalize it’.

This naturalisation, it should be added, often also includes processes of exclusion and can be connected to the violent history of colonialism, as is, for instance, the case with the very mountain now called Everest. It had names before it received its English name – it was and still is called Sagarmāthā in Nepali and Chomolungma in Tibetan – and it even had an English name before being called Everest, and clearly the people living near it shared all kinds of information about the mountain. Indeed, excluding the mountain’s indigenous names is part of the original factualising, understood in this case also as an act of colonialism working in tandem with map-making. These names can then be added again as other names hierarchically subordinated to the universal Mount Everest. Such an understanding of facts chimes well with Geoffrey Bowker and Susan Leigh Star’s (2000) writing on infrastructures and standards, particularly in relation to categories. These are, as they elucidate in great detail, always socially created and tend to become invisible when they work smoothly.

Certain agreements have to be made concerning who can establish facts and what kinds of credentials are required for those to be trustworthy, that is, in order for the facts to be accepted as true. We have come to consider facts that are approved by a sanctioning institution to be the most obviously established. Statistics in the form of open data sets produced by international organisations such as UNESCO, the WHO, or the World Bank, to name just a few, are excellent examples of how facts about demographics, and by extension facts about society, are established by institutions that select, standardise, and organise the variables, the naming and definition of the concepts used, and so on.

In order for something to be recorded, numbered, measured, or otherwise registered as a fact, we first need an awareness of a phenomenon’s existence and the tools and rules for recording it. Facts demand a specific kind of institutional and technical information infrastructure to be in place. Wootton sees a link between facts, the scientific revolution, and the emergence of print culture. He writes: ‘Before the Scientific Revolution facts were few and far between: they were handmade, bespoke rather than mass produced, they were poorly distributed, they were often unreliable’ (Wootton, 2015, p. 259). The printing press made it possible to turn ‘private experience into a public resource’ (Wootton, 2015, p. 302; see also Latour, 1986). In other words, the book, the article, and other artefacts of scholarly communication made facts more stable. They also made facts more mobile and more easily transferable. Here another development is relevant, namely the specific role of numbers. In her book A History of the Modern Fact, Mary Poovey (1998) highlights how the modern understanding of facts conveys a necessary epistemological separation of numbers and interpretations:

The modern fact finally emerged as a theorizable component of knowledge production only as an effect of two related developments in the history of epistemology: what looked like or could be presented as the complete separation of observed particulars from theories, and the elevation of particulars to the status of evidence capable of proving or disproving theories.

Poovey, 1998, p. 92

This distinction is fundamental for appreciating how facts are thought of in contemporary society and thus also for how web-based fact services work. Indeed, the separation of observed particulars from theories that Poovey (1998) pinpoints is even more prominent in relation to contemporary information infrastructures, where digital data are effortlessly communicated between databases, leaving much of the interpretation to the user. At the same time, it is obvious that web-based fact services, just as any other tool for accessing information, are built with certain assumptions in mind as well as with technical restrictions and standards in place. In the same way as Bowker and Star (2000) argue that classifications are built into infrastructures, variables and selected statistical units can also be seen as built into web-based fact services.

Facts and opinions

In addition to the above description of the concept of the scientific and numerical fact, there is another way of thinking about facts that adds a layer of complexity by embedding facts in a wider, political context and by positioning them in strategic opposition to opinions. This broader notion is more clearly linked to truthfulness and evidence as part of how facts are conceptualised. It is furthermore informed by how facts are put to use in politics and thus helps elucidate their role in the public debate. In her famous text “Truth and politics”, Hannah Arendt (2006 [1967], p. 225) asserts that ‘the story of the conflict between truth and politics is an old and complicated one’ before moving on to discuss the distinction between rational (or logical) truth and factual (or empirical) truth. Rational truth relates to logic and mathematics, while factual truth relates to empirical, observable phenomena, which can be referred to as evidence. Historical events or the weather as we register it, for instance, belong to the latter. These are perceived as factual truths, and they are what we commonly call facts. Arendt’s concern is predominantly with this type of factual truth, and so is ours. Arendt (2006 [1967]) also notes how ‘factual truth, if it happens to oppose a given group’s profit or pleasure, is greeted today with greater hostility than ever before’ (p. 231). This statement, made in the 1960s, verbalises a trend which, if anything, has intensified since then. Arendt stresses that in order for ‘unwelcome factual truths [to be] tolerated in free countries they are often, consciously or unconsciously, transformed into opinions’ (p. 232).

In contemporary political debate, this hostility towards disagreeable facts is often resolved by ideologising the fact in question and consequently disarming it. The best-known, but by no means only, examples are the way in which scientific facts about climate change are increasingly associated with being left-wing and how vaccinations are seen to mainly serve the commercial interests of the pharmaceutical industry. Facts and opinions are distinct, but they are also related in many ways. ‘Unwelcome opinion can be argued with, rejected, or compromised upon, but unwelcome facts possess an infuriating stubbornness that nothing can move except plain lies’, writes Arendt (2006 [1967], p. 236). The concept of the alternative fact, popularised in 2017 by Trump campaign manager and adviser Kellyanne Conway, can be seen as a recent example of this approach. It was employed to oppose what Arendt would call a factual truth, namely the number of participants at the president’s inauguration ceremony, thus turning a number into an opinion – not the basis to formulate an opinion with, but the opinion itself.

Once a fact is transformed into an opinion, it is easy to dispute its reliability. Considering that freedom of opinion is an unquestioned public good in liberal democracies, transforming a factual truth into an opinion is normally much more practicable than declaring the fact to be wrong. If, for instance, the role of carbon dioxide in global climate change is seen as a matter of opinion, and several different opinions can co-exist, then the opinion that ultimately poses a threat to the status quo is open for competition on the marketplace like everything else in capitalist society (Davies, 2018, pp. 159-165). Today’s networked information infrastructure is optimised for the algorithmic amplification of opinions, which further fuels this competition. If what the statistical representation of a demographic trend shows is merely an expression of left-wing, right-wing, or religious opinion, dismissing it is not just easier, it is what is expected. It means moving to a different playing field, one that is served very well by the reward system of today’s networked and platformised information infrastructure. Arendt (2006 [1967], p. 231) also notes, quoting James Madison (US president, 1809–1817), how ‘strength of opinion’ depends upon the number of people holding the same opinion. Here an interesting parallel can be drawn to social media and search engines, where this is expressed in the number of likes, shares, mentions, or in-links. It could be argued that these information systems treat facts and opinions in the same way, a practice that likely contributes to and even automates the transformation of facts into opinions.

Two developments are currently taking place in society at the same time, and both, we argue, have to be considered by paying close attention to today’s predominating information infrastructure. On the one hand, a strong data discourse, which equates data and facts and maintains the pre-eminence of (big) data over theory, is sweeping over society. On the other hand, the above-mentioned turning of facts into opinions contributes to stoking an emotionalisation of the public debate. It is within the context of this partly conflicting situation that we suggest that the move towards open data and the availability of fact services such as the ones discussed here has to be understood. William Davies notes,

Big data shares one thing in common with traditional statistics, in that both are numerical, but the political differences are stark. Traditional scientific societies /…/ and national statistics agencies involve a small group of experts producing knowledge that is then made available to the public. /…/ But with big data, things are effectively the other way around: the mass public are generating knowledge all the time with their search queries, movements, and Facebook statuses, which is then made available to a small group of experts.

Davies, 2018, p. 186

Open data of the type used in web-based fact services originate in the traditional understanding of scientific societies and statistics agencies. However, when shared, commented on, liked, searched for, or otherwise circulated in social media or search engines, such data are inserted into the big data-producing and processing machinery of these platforms. In this way, the statistics so foundational for modernity and the bureaucratic state are translated into a late modern commodity of affect.

Web-based fact services

Web-based fact services are located at the point of intersection between the modern and late-modern understanding of data and of how data relate to facts. The three web-based fact services considered here – Knoema, Factlab, and Gapminder – have many similarities. They all provide their users with facts in the form of numbers displayed as vector or bubble graphs, or similar, and they all use open-data sources provided by established national and international organisations, NGOs, and businesses from around the world. The services collect data sets complete with variables, meta-data, and indicators from open-data sources. They harmonise these data sets in order for users to be able to access and combine them from within a single interface. In some cases users can combine different variables and relate them to each other. Typically the services also support the export of graphic visualisations for use in presentations or for sharing. What once was a demanding task, namely selecting, collecting, and comparing different types of facts and creating visual representations, can now be done in seconds through these services. However, there are also many differences between them, not least concerning how they are financed and regarding their topical scope as well as how well known they are. A search on Google Trends in May 2019 showed that Gapminder is the most searched for and that searches for Factlab are more or less restricted to Sweden. Yet even in Sweden Gapminder dominates, which it does worldwide with the exception of Saudi Arabia, Russia, and Malaysia, where Knoema appears to be better known. In Turkey and India, Google searches for Knoema and Gapminder appear to be more even.
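None of the three services discloses the technical details of its pipeline. Conceptually, however, the operation they share – aligning datasets from different providers so that their variables become combinable within a single interface – can be illustrated schematically. The following minimal Python sketch is our own illustration, not code taken from any of the services; the file names, column labels, and country-code table are invented for the purpose.

    import pandas as pd

    # Hypothetical extracts from two open-data providers. File names and
    # column labels are invented for illustration.
    gdp = pd.read_csv('worldbank_gdp.csv')         # Country Name, Year, GDP
    life = pd.read_csv('who_life_expectancy.csv')  # country, year, life_exp

    # Harmonisation: map each provider's own country labels onto a shared
    # code. Real services maintain large curated tables for this; two
    # entries suffice here.
    iso = {'United States of America': 'USA', 'United States': 'USA',
           'Viet Nam': 'VNM', 'Vietnam': 'VNM'}
    gdp['iso'] = gdp['Country Name'].map(iso)
    life['iso'] = life['country'].map(iso)

    # Once the keys agree, variables from different sources become combinable.
    combined = gdp.merge(life, left_on=['iso', 'Year'],
                         right_on=['iso', 'year'])

    # The user-facing step: select two variables and export a shareable graphic.
    ax = combined.plot.scatter(x='GDP', y='life_exp')
    ax.figure.savefig('gdp_vs_life_expectancy.png')

The sketch makes visible what the interfaces hide: the mapping table and the merge are exactly the kind of ‘human touch’ decisions discussed below, and they shape which facts can be created at all.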

Together, these three services represent a good cross section of how this type of open statistical data is put to use and marketed to the public, policy makers, and the business sector. We start out by presenting Knoema, which is the most clearly business-oriented service. This is followed by a discussion of the service provided by the Gapminder Foundation, which has a clear mission to advance a certain type of knowledge in an attempt to shape policy-making and the public perception of issues. We conclude by presenting Factlab, which also has a business version of its service, but whose open version mostly targets the educational sector as its user group. We have also interviewed the producers of Factlab (two interviews with three people) as well as one expert user of the service. The interviews were semi-structured and digitally recorded.

Knoema

Knoema is a US-based company operating internationally. It runs a largely subscription-based service, but with a freely usable light version. Knoema describes itself as providing ‘the most comprehensive source of global decision-making data in the world’ (Knoema, n.d. a) on its web page and in very similar terms on its Facebook page. On its Twitter page, it adds: ‘Free public data, visualizations and knowledge’ (Knoema, n.d. b). In its self-description, Knoema refers to data, knowledge, and decision-making rather than facts. Yet, what Knoema provides is a type of data that, in the contexts of the other two services discussed here, is referred to as facts. When reference is made to Knoema or when its data are used – often by journalists – this is done in relation to facts or fact-checking. According to its website, ‘Knoema hosts more than 2.4B time series published by more than 1K sources. This ever-evolving database is curated by our team from authoritative sources to help capture emerging social, economic, financial, political, and industry-specific topics and trends’ (Knoema, n.d. c).

There is a free version of Knoema, a professional version, and a business version. The free version can be used to present descriptive statistical data in visual graphs, and in this version Knoema is primarily a type of statistical encyclopaedia. Knoema has a chat bot named Yodatai as a ‘digital data assistant’, which mainly summarises basic information about data sets and the results of simple searches. When following the link to the assistant, a new window opens up:

Hi, I am Yodatai and I will be your digital data assistant. You can call me Datai too or even Yoda if you are Star Wars fan. May the data come with you!
Here is how I can help you:

  • You can ask me questions like "What do you know about California?" or "What is the inflation rate in USA?" I'll be back to you immediately.
  • I have data on all world's countries and their regions, various industries and topics and much more. You can always ask me "What data you have on ..?
(Knoema, n.d.)

In order to see the source of a certain piece of data and to get a link to the original dataset, you have to be logged in (it is free to open an account). The professional and business versions have more options to compare data, and they also offer the customising of data sources and the integration of open data with business-specific data. According to Knoema, it is possible, for example, to ‘[v]isualize and compare data with over 20 chart, graph, and map options’ and to ‘[m]anipulate datasets by selecting variables for analysis and comparison’ (Knoema, n.d. e). The way the professional version is presented in a short video on its web page makes it clear that the company imagines its users to come primarily from the business sector. Knoema has a significant number of spinoffs and related projects, for instance, a smartphone application that crowdsources data collection in certain areas in exchange for micropayments, country-specific applications, and various ‘situation rooms’ for areas undergoing an emergency, such as an Ebola situation room. What is interesting is not only the significance that is attributed to sources and to the authority of the originator of a data source in the self-description of the data, but also how the size and extensiveness of the available data are emphasised as a feature.

Gapminder

The second example we want to mention here is Gapminder, which is actually more than a fact service. Gapminder is a foundation with a mission. It sees itself as ‘[f]ighting devastating ignorance with fact-based worldviews everyone can understand’ (Gapminder, n.d. b). According to their website, Gapminder

promote[s] a new way of thinking about the world and the society which we call Factfulness. It is the relaxing habit of carrying opinions that are based on solid facts. (Gapminder, n.d. a)

Factfulness is also the title of a bestselling book by the late Hans Rosling, founder of Gapminder, written together with his son Ola Rosling and his daughter-in-law Anna Rosling Rönnlund. In 2018, all US college graduates were given access to the book for free courtesy of Bill Gates.

Gapminder develops tools and methods for the visualisation of population-related statistics. Gapminder is most famous for its bubble graphs as presented by Hans Rosling in his well-publicised popular science presentations on the issue of global development and specifically poverty. These graphs were produced with the visualisation tool Trendalyzer, which was sold to Google in 2007. Trendalyzer allows for the combination of hundreds of indicators coming from a huge number of open data sources. What is unique about Trendalyzer is the way in which change over time is expressed through movement (see Johansson, 2012 for a detailed analysis of Trendalyzer as a social visualisation tool). Most recently, Gapminder has made available the Dollar Street project. This is a tool that lets users compare living standards and related statistical data across countries on all continents, all illustrated, according to Gapminder, by photographs of real people and their actual environments:

Since many people hate statistics, we use photos of data to give the numbers meaning. We have sent photographers to 240 homes across the world to show how people really live. That’s what we call a fact-based worldview. (Gapminder, n.d. c)

Illustrating statistical data with photographs might be provocative for statisticians, but in this case it should be seen as an example of finding new ways of representing numerical data in order to make it more relatable. Various other tools focusing on trends, bubbles, ages, ranks, and maps also let users combine variables from different data sources in order to see changes over time. Clicking on a question mark next to an indicator leads directly to the web page of the organisation where the respective data originates. Gapminder’s focus is on global development, and for this purpose they have collected 400 indicators from open data providers such as the World Bank, the WHO, and national statistics agencies.
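Trendalyzer itself has been owned by Google since 2007 and its internals are not public, but the visual idiom it established – bubbles whose movement through the coordinate system expresses change over time – can today be approximated with standard open-source tools. As a rough sketch (our own, not Gapminder’s code), the plotly library even bundles a sample of the Gapminder indicators:

    import plotly.express as px

    # plotly ships a sample of the Gapminder dataset: one row per country
    # and year, with life expectancy, GDP per capita, and population.
    df = px.data.gapminder()

    # A Trendalyzer-style bubble chart: one bubble per country, sized by
    # population; the animation slider moves the bubbles through time.
    fig = px.scatter(
        df, x='gdpPercap', y='lifeExp',
        size='pop', color='continent', hover_name='country',
        animation_frame='year', log_x=True, size_max=60,
        range_y=[20, 90],
    )
    fig.show()

That a once-proprietary mode of fact presentation is now a few lines of freely available code illustrates how easily such images can be produced and circulated.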

In contrast to Knoema, Gapminder has a focus on facts related to health, medicine, and development. It actively promotes a vision of enabling ‘a fact-based worldview’ and encourages a positive image of the direction of human development to date. The next example, Factlab, makes a point of only providing access to facts, leaving it to its users to make sense and meaning of them.

Factlab

Factlab is a company located in Sweden, and its service is available in Swedish and English as well as in several other languages. It brings together and harmonises open-data sources using an approach that they call the ‘omni method’, but which is otherwise kept secret. Factlab’s main user group for its freely accessible general-purpose version can be found in schools. On its homepage, the following can be found:

More than 48,000,000 facts from more than 400 different statistic databases from the most trusted sources in the world, together in one database, in one format, in one web-application with tools to analyze, compare and combine data of your choice, all done in your preferred web browser

Factlab, n.d. b

The company offers a general-purpose version of Factlab for free, but also provides tailor-made business solutions that are password protected and come with individualised statistics for the company in question. Here, we only refer to the freely accessible Factlab. On its Facebook page (Factlab, n.d. a), the company proclaims that it ‘deliver[s] truth and knowledge through the internet’, and it describes its product as providing ‘Your Knowledge, Your Truth’. Furthermore, ‘[t]he factlab presents no theories or conclusions, just facts. It is up to you to experiment and to draw your own conclusion from the facts you look at’. Such an understanding chimes particularly well with what Poovey (1998) refers to as the necessary separation between facts and interpretation as a prerequisite for the modern fact. Here, facts become something personal at the same time as they are generalisable. In the interviews with the people behind Factlab, this insistence on the separation of facts from theories or interpretations comes across clearly:

We make no analysis of this at all, we just provide it. And we try to be totally neutral and totally non-political and totally non-religious. We attach no value at all, but it is up to the users to experiment by themselves and to draw their own conclusions and to arrive at their own truth, their own knowledge.

The colleague, who was also present at the interview, adds, ‘We provide the fact, as secure as possible. Then it is up to the user to do the rest... But we expose also everything about them, all the way back to them.’ With ‘back to them’ they refer to the referencing of the original sources. Time and time again, the two interviewees return to the importance of sources and of making them visible for the user. Some of the open data sources, such as the World Bank, the CIA’s Factbook, or the WHO, are so substantial in size that a selection among the datasets they provide must be made. In the interviews, Factlab’s producers emphasised that they never change the numerical values provided in the datasets. However, they occasionally calculate variables, for example, per capita, in order to make comparisons possible. In these cases, a note is made for the user that Factlab has calculated this specific variable. Harmonising data sets with different origins also comes with difficulties that stem from the legal, political, administrative, or similar circumstances the data relate to. For example, they note:

But it is not just the language that is different from country to country, but also the classifications. If you look at Germany, so they have their Bundesländer, and they have many different ones. It’s quite simple in Sweden with its municipalities, even if they change. Norway is similar. But at least they exist. England is difficult, they have their shires, and they have counties, and they have all kinds of things. It’s important to find a standard, which can….

Establishing this standard, which enables the combination of datasets with different classifications for describing similar phenomena in a common interface, requires harmonising and a degree of ‘human touch’. This human touch does not concern the numerical values, but rather the selection of sources and the technique used to enable the comparison of variables that were not originally developed for comparison.
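The ‘omni method’ itself is kept secret, but the principle the interviewees describe – leaving source values untouched while occasionally deriving a variable, such as per capita, and flagging that derivation for the user – can be sketched in a few lines. The following Python fragment is a hypothetical illustration of this principle; the table, figures, and field names are invented and do not reflect Factlab’s actual implementation.

    import pandas as pd

    # Hypothetical harmonised table; values exactly as delivered by the source.
    df = pd.DataFrame({
        'country': ['Sweden', 'Norway'],
        'gdp_usd': [541_000_000_000, 482_000_000_000],  # illustrative figures
        'population': [10_300_000, 5_400_000],
    })

    # A derived variable: computed by the service, not present in the source.
    df['gdp_per_capita'] = df['gdp_usd'] / df['population']

    # Per-variable provenance: source values point back to the provider,
    # while derived variables are flagged as the service's own calculation.
    provenance = {
        'gdp_usd': 'The World Bank (original value, unchanged)',
        'population': 'The World Bank (original value, unchanged)',
        'gdp_per_capita': 'calculated by the service as gdp_usd / population',
    }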

Discussion and conclusion

What we see in the case of these three different services, and in how they facilitate and describe the creation of facts based on open data, is in many ways a fairly traditional approach to how the factualness of something is established, namely by providing references and pointing to sources. This is not unlike the way in which traditional scholarship and referencing practices are drawn upon to establish claims in Wikipedia (Sundin, 2011) (and how a text like this scholarly paper also works). Yet, what does this mean when the facts established in this way are then inserted into today’s networked information landscape, which is an arena for competing knowledge claims that works according to the market’s principles of popularity?

Creation of facts by web-based fact services

In web-based fact services, facts are not just represented by numbers; they are numbers or, more precisely, sequences of numbers turned into images and displayed in graphically convincing ways. Interpretation is purposefully left out of the service. Gapminder is something of an exception, with its goal of supporting human development shaping the platform and with the late Hans Rosling as its iconic, idealised user. Yet in the cases of Knoema and Factlab, all interpretation is left to the users, which is also the point of the services. That is, the responsibility for the construction of facts by relating variables to each other is assigned to the individual user. In this way, the contemporary fact, as constructed and made accessible by web-based fact services, is at the same time both personal and a part of the infrastructure of society. This is especially obvious when Factlab promotes its service as enabling you to make ‘your truth’. Users acting as fact-builders are most likely unaware of the local conditions for collecting and compiling the data, and for dividing and classifying it in certain ways, in the first place. The technical conditions facilitating the merging of different data sources in one interface might also have a role to play, but those too are largely hidden. Facts and categories, as we see in the three fact services presented above, are created in relation to each other, and facts gain meaning in relation to the variables, fields, and technical categories – adapted by the developers’ ‘human touch’ – that enable such facts to be recorded.

Facts and their sources

As Arendt (1972, p. 6) notes, ‘Facts need testimony to be remembered and trustworthy witnesses to be established in order to find a secure dwelling place in the domain of human affairs’. In web-based fact services, the need for witnesses is addressed by providing sources, and all statistics and numbers are linked back to their original sources. This approach goes back to the establishment of the footnote during the scientific revolution as a means to trace claims and weed out erroneous ones. ‘This struggle against error brought into being the footnote: the mechanism for ensuring that every fact could be traced to an authorizing statement’ (Wootton, 2015, p. 305). Source-tracing is still necessary, maybe more necessary than ever, in order to provide authority to facts and to solve the issue of witnessing. We see this in Wikipedia, with its demands for citing sources, but we also see it in web-based fact services, and even Google’s Knowledge Graph has a note stating what constitutes the source of the data being displayed. Links to original sources contribute to the authority of the fact as well as to the production of the fact as such.

The way in which original sources are highlighted in web-based fact services replicates traditional practices of citation in the scientific literature. And it seems clear that users should address them with traditional approaches to the evaluation of sources, as advocated in the various checklist approaches to literacy, and should go back to the source in order to assess its credibility (when was it collected, by whom, and so on). There is of course merit to this; however, it also has the effect of fragmenting the factualness of a statement rather than welding it together. The actual creation of factuality happens to a substantial degree in the image, in the graph, and in the compiled numbers that bring together the various sources. And here the meaning comes not only from the sources of the data in question – the World Bank, Amnesty International, the CIA, and so on – but also from the way in which the harmonised database works and how everything is joined together on and through the interface. Moreover, we argue that meaning to a considerable degree also derives from how the image thus created facilitates its own integration into the networked information infrastructure of everyday life and public debate. An illustrative example could be how recorded sexual offences, most notably rape, are compared across countries in the form of readily shareable, numbered lists, without a discussion of laws, definitions, and cultural implications, or of how the data are collected, counted, classified, or otherwise accounted for. Metaphorically speaking, the data are cut from their context of creation, combined with other, equally decontextualised data, and pasted into the users’ cultural environments, where new meanings are created and, most importantly, where factualness accrues in relation to these new meanings.

Facts in the context of information literacy

The above relates to the shareability of an image, the searchability of a claim, and so on, which in turn makes the image or claim more credible because, quite simply, it becomes more widely known. For this, a different type of meaning-making is required. We have previously suggested the notion of infrastructural meaning-making for this purpose because, in addition to taking apart the various data entries or other types of information (e.g. books, articles, websites, tweets …), it also helps to understand how they are brought together and how meaning is created in the affordances and practices of this bringing together (Haider and Sundin, 2019). Pieces of information are also given meaning in how they are accessed and algorithmically prioritised by social media and search engines. Why do we see what we see when logging into Twitter? How can we interpret the results of a Google search? What are the implications of the ordering algorithms involved for our understanding of a particular topic? In order to understand data in their proper context, what is important to consider is not only where they come from, but also how they are made into something new and where they might go. Specifically, a central concern for the framing of information literacy has to be how data are turned into something that can itself circulate in an algorithmically fuelled information infrastructure, with the potential to turn traditional statistical data into a source for big data.

William Davies (2018) distinguishes between representation and mobilisation of science in a way that can easily be translated to the context of our arguments: ‘Phoney science can be demolished. The problem is, it takes time. Where one side is involved in a project of representation, seeking to create the most accurate records and images of climate with immense care, and the other is involved in a project of mobilisation, seeking to win a battle over public sentiment, the former becomes very vulnerable’ (Davies, 2018, p. 165). He continues:

The alternative perspective will eventually be shown as phoney, but often it is too late. In the case of climate change, it might be far too late. Regardless, knowledge has become a matter of timing and speed, in a way that was inconceivable 350 years ago when our ideals of scientific expertise were established. (ibid.)

Arendt (1967, 1972) talks in a similar way about the significance of image-making. Statistics, such as the ones used in web-based fact services, are usually compiled for representation, but through image-making they are used for mobilisation. This dual use makes statistics in this context modern and late modern at the same time. In addition to understanding the social creation of categories and being able to apply criteria for evaluating sources, a perspective is needed that accounts for the actual ways and infrastructural possibilities in and through which facts are put together and put to use for mobilisation. The role of facts in the contemporary information infrastructure is profoundly complex and paradoxical, and there is no single, simple solution. In this paper we point to one aspect: the need for a view of information literacy that accounts for this specific infrastructural meaning-making at the same time as it enables us to take seriously the political dimensions of the way in which facts and factual information are created and valued in contemporary society. At the same time, we recognise that information literacy is not, and can never be, an individual task alone; it is also a social undertaking, one that includes societal responsibility and is, not least, a matter of trust. However, this is an aspect we have had to leave out of this paper.

About the author

Jutta Haider is Associate Professor in Information Studies at the Department of Arts and Cultural Sciences, Lund University, Sweden. Her research interests concern digital cultures' conditions for production, use and distribution of knowledge and information. This includes research on environmental information and on knowledge institutions, including encyclopedias, search engines, and the scholarly communication system. She can be contacted at jutta.haider@kultur.lu.se.
Olof Sundin is Professor in Information Studies at the Department of Arts and Cultural Sciences, Lund University, Sweden. Before starting at Lund University, he worked as a researcher and lecturer at the University of Borås, where he has also been a part-time professor. In his current research he mainly focuses on the credibility of digital information, digital cultures, information seeking and learning, participatory media, and knowledge building. One issue that interests him is how established knowledge actors – such as schools, libraries, encyclopaedias, and publishers – handle the challenges of new media and the changing expectations that come with them. He can be contacted at olof.sundin@kultur.lu.se.

References


How to cite this paper

Haider, J. & Sundin, O. (2019). The fragmentation of facts and infrastructural meaning-making: new demands on information literacy. In Proceedings of CoLIS, the Tenth International Conference on Conceptions of Library and Information Science, Ljubljana, Slovenia, June 16-19, 2019. Information Research, 24(4), paper colis1923. Retrieved from http://InformationR.net/ir/24-4/colis/colis1923.html (Archived by the Internet Archive at https://web.archive.org/web/20191217174233/http://informationr.net/ir/24-4/colis/colis1923.html)
