Information Research

Published quarterly by the University of Borås, Sweden

Vol. 27, No. Special issue, October 2022



Proceedings of the 11th International Conference on Conceptions of Library and Information Science, Oslo Metropolitan University, May 29 - June 1, 2022


Facts and arguments checking: investigating the occurrence of scientific arguments on Twitter


Antonella Foderaro and David Gunnarsson Lorentzen


Introduction. A method for studying use of scientific sources in arguments on Twitter is demonstrated.
Method. Data were collected from the Twitter API v. 2.0 using Focalevents, searching for tweets with links to DOIs and then collecting the conversations around these tweets.
Analysis. Three conversations on different topics were analysed, searching for argumentative behaviour, use of scientific sources, and their reliability, consistency and adequacy in relation to the argument and the target audience. Both quantitative and qualitative content analysis based on argumentation theory were applied.
Results. The method allowed us to identify scientific publications used argumentatively by a multiple audience in the context of Twitter conversations. The publications were used to build scientific arguments, mainly, but not exclusively, from individual and collegial expert opinion. Scientific findings were often misinterpreted and used improperly to the benefit of the argument.
Conclusions. Through the use of argumentation theory to study conversations in a structured way, the paper demonstrates how to approach the usage of scientific publications in arguments. Scientific publications were used to build scientific arguments from different types of expert opinion, to give proofs for claims and counter-arguments, and also to build inconsistent or biased arguments from individual expert opinion.

DOI: https://doi.org/10.47989/colis2230


Introduction

Social media platforms, such as Twitter, allow for science dissemination and communication (Holmberg and Thelwall, 2014; Haustein, et al., 2015; Habibi and Salim, 2021) and for discussing scientific outputs (Lorentzen, et al., 2019; Park, et al., 2021; Wang, et al., 2021; Goodwin, 2020) in the presence and with the involvement of a multiple audience (Palmieri and Mazzali-Lurati, 2021; Foderaro and Lorentzen, in press). Approaching these conversations argumentatively gives insights into how science is used to justify one’s opinions and choices in attempts to persuade people with different views and values.

Due to the informal style of communication adopted by the users, sketches (Ballantyne, in press) and implicit arguments are common on Twitter, and their formulation is often entrusted to or reinforced through the use of external sources (Foderaro and Lorentzen, in press). We define the specific practice of linking scientific publications in digital interactions to support one’s position as the scientific argument. By scientific argument we mean the usage of these documents either to build an argument from authority (Wagemans, 2011) or to provide proofs based on evidence and the reproducibility of experiments and results, in order to justify premises, claims and conclusions or to reject an adversarial argument (Toulmin, 2003).

As a consequence of the limitations of its API, Twitter conversations have not been studied extensively (Lorentzen, 2021; Lorentzen and Nolin, 2017; D’heer, et al., 2017). However, the Twitter API v2 (Twitter, 2022c) has made collecting conversations much easier, as tweets are tagged with a conversation ID (Twitter, 2022b). These changes are likely to shift focus from broad patterns of hashtag usage and a limited set of users to how people discuss issues on the platform. Many of the limitations outlined by Lorentzen and Nolin (2017) can now be circumvented, as data do not have to be collected in real time and data collection is not limited to a set of users. This enables a more in-depth approach to analysing complex Twitter interactions. A digital object identifier (DOI) can be looked up using the Twitter API to collect the tweets linking to the document. The conversation IDs of these tweets can subsequently be used to collect the entire discussions in which a DOI is used. This makes it possible to gain insights into how scientific articles are used as part of arguments and how they are perceived in the Twittersphere.
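
As a sketch of the first step of this workflow (our illustration, not the exact tooling used in this study; the bearer token and helper name are placeholders), tweets linking to a DOI can be retrieved from the Twitter API v2 full-archive search endpoint, with the conversation ID requested as a tweet field:

```python
# Illustrative sketch: find tweets linking to a DOI and read their
# conversation IDs via the Twitter API v2 full-archive search endpoint
# (academic access). Error handling and rate limiting are omitted.
import requests

SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"
BEARER_TOKEN = "..."  # placeholder for an academic-access bearer token

def find_doi_tweets(doi_url: str) -> list[dict]:
    """Return tweets linking to doi_url, each tagged with its conversation_id."""
    params = {
        "query": f'url:"{doi_url}"',
        "tweet.fields": "conversation_id,created_at",
        "max_results": 100,
    }
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    response = requests.get(SEARCH_URL, headers=headers, params=params)
    response.raise_for_status()
    return response.json().get("data", [])

tweets = find_doi_tweets("https://doi.org/10.47989/colis2230")
conversation_ids = {tweet["conversation_id"] for tweet in tweets}
```

A second query of the form conversation_id:&lt;id&gt; against the same endpoint then retrieves the surrounding discussion; a paginated sketch of that step is given in the Method and data section.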

The purpose of this paper is to demonstrate and discuss how Twitter conversations based on scientific outputs can be studied. As a case study, we select three conversations on different topics making use of scientific sources: climate change, abortion and veganism. The topics are of public interest and may also involve domain experts. Our research questions are:

Approaching digital interactions argumentatively is relevant as it allows for a deeper observation of statements and their soundness (van Eemeren, et al., 1997). Using both quantitative and qualitative content analysis based on argumentation theory, this paper aims to contribute with knowledge about how to study the use of scientific publications for building arguments on social media. This gives insight into how science can also be employed as a tool to reach consensus based on alleged authority.

Theoretical background

As Wagemans (2016a) explains, the interest in scientific argumentation is quite recent. This raises the question of how it occurs in digital environments and how the argument from expert opinion should be considered. What makes the issue problematic is, on the one hand, the nature of scientific findings, which are recognised as authoritative until new evidence is acquired. Therefore, the argument from expert opinion cannot be considered scientific per se but needs to be scrutinised to verify whether it is still valid or current. Rapidly changing evidence in some scientific fields may increase uncertainty about the notion of expertise and make trust in science and authoritative sources challenging for the public at large (Caniglia, et al., 2021).

On the other hand, such an argument invokes an element that comes from outside the statement itself, which is why it is commonly considered weak or fallacious (Wagemans, 2016b, p. 6). However, as science is based on evidence and experiments that even scientific authorities need to provide in order to obtain recognition and publication of their findings, the question that arises is whether the practice of using science for building arguments follows this rule. Answering this question, while not solving the second (logical) problem, should at least guarantee scientific reliability.

Wagemans (2011) discusses and defines, after a rigorous comparison with the scheme proposed by Walton (1997, 2006), crucial concepts for the identification and evaluation of expert opinion from an argumentative perspective, such as who is to be considered an expert, how this kind of argumentation is to be understood and through which theoretical tool it can be assessed.

Briefly, an expert is defined as ‘someone of whom the arguer believes the addressee to put a certain intellectual trust in’ (p. 331); the authority can be acquired in different ways, for example from invested opinion (de iure, i.e., the right to exercise command), from professional expert opinion or from experiential expert opinion (de facto, i.e., cognitive, epistemic); finally, by argumentation from expert opinion the author means an ‘argumentation that renders an opinion (more) acceptable by claiming that the opinion is asserted by an expert’ (p. 331). However, there is a misalignment between the concepts identifying the expertise and what leads to its acceptance, because the latter relies on the intellectual trust placed in the speaker (Wagemans, 2011). To address the gap between the objective nature of the concepts and the subjective criterion of their acceptance, we suggest shifting the focus from the recognition of the authority of the sender to the rigor of the message. According to Wagemans (2019), the argument from authority has a clear structure. The author describes its form as q is T, because q is Z, giving a simple example: ‘We only use 10% of our brain (q) [is true (T)], because [we only use 10% of our brain (q)] was said by Einstein (Z)’ (Wagemans, 2019, p. 64).
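
The structure is mechanical enough to be made explicit. As a purely illustrative sketch of our own (not Wagemans’ notation and not part of this study’s tooling), the form q is T, because q is Z can be captured in a small data structure:

```python
# Illustrative rendering of Wagemans' (2019) form of the argument from
# authority: "q is T, because q is Z". Our sketch, not Wagemans' notation.
from dataclasses import dataclass

@dataclass
class AuthorityArgument:
    proposition: str   # q: the opinion being defended
    truth_value: str   # T: the quality claimed for q (typically "true")
    authority: str     # Z: the source said to have asserted q

    def render(self) -> str:
        return (f"{self.proposition} (q) is {self.truth_value} (T), "
                f"because that (q) was said by {self.authority} (Z)")

# Wagemans' own example, reconstructed in this form:
print(AuthorityArgument("We only use 10% of our brain", "true", "Einstein").render())
```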

To study how scientific arguments are built on Twitter, however, we need to take into account some typical aspects of social media. These include the technicities of the platform, which shape how content can be posted, the interaction among the participants, and the filtering of content and participants (Niederer, 2019). On Twitter, content is limited to 280 characters (with mentions of users, included hashtags and URLs not counted), multiple users can be addressed in a tweet, and popular content and users are made more visible by the platform. Adding to this, scientific publications referred to in Twitter discussions are more often open access, with a clear skewness towards recent papers (Nelhans and Lorentzen, 2016). A platform such as Twitter allows users to create anonymous or false profiles, to hide or show personal or professional information, to write only short messages, and to link different kinds of scientific sources where only a section of the contents is accessible. This makes it difficult to immediately assess the arguers’ scientific arguments, their expertise in a particular field and their reliability, and finally to verify the source. Moreover, in group interactions, some argumentative behaviour comes into play as like-minded people converge in helping each other, despite the contested and proven falsehood of a statement or proof, reinforcing the perceived acceptability of one’s claims (Foderaro and Lorentzen, in press). Finally, a scientific argument proposed by a popular user could have a greater resonance and force of persuasion than an argument or proof shared by an expert in the field (Gruzd, et al., 2021).

What can help in evaluating scientific digital interaction is the methodology utilised by the arguers in justifying their claims. According to Wagemans (2016a, p. 97), an expert ‘generates argumentative patterns’ when debating with peers in the same field. However, on social media, due to the nature and limitations of the platforms and the presence of a multiple audience, this expert-to-expert interaction (Wagemans, 2016a, p. 98) does not strictly follow all the rules required in formal scientific argumentation, because an expert-to-general-public communication (Frezza, 2016) is more suitable, and therefore only traces of the patterns can be found. As Nelhans and Lorentzen (2016) explained, discussing science on Twitter does not imply that the conversation is scientific in its nature. These traces of argumentative patterns, such as deductive and inductive reasoning, coherence and consistency, and adherence to empirical content (Wagemans, 2016a, p. 106), associated with technical language and information sharing practices (Pilerot, 2012), are nonetheless of fundamental importance as they allow us to identify the typology of the debating audience.

Zenker and Yu (2020) extended the traditional argument from authority, introducing new types and further distinguishing between sources and modes of authority. Considering that the content and the boundaries between the different sources of authority on Twitter are often ambiguous (a user can be rich, famous, powerful and wise, etc.), this scheme can be adapted by letting the linked sources express what grounds the authority according to the arguers. In this way, the ambiguity of the analysis and the bias of the analysts should be substantially reduced.

Related work

Since debating scientific outputs on social media is increasingly common and influences people’s opinions and choices, research within the area has grown. Focusing on how Italian media and cyberspace dealt with health information, Lavorgna and Di Ronco (2018) discovered ambiguity in the disseminated messages due to a lack of journalistic expertise. Approaching vaccination discussions on parenting blogs, Jenkins and Moreno (2020) found that 25% of comments provided inaccurate health information. On Twitter, Gruzd et al. (2021) investigated types of disinformation evoking scientific authority. Their findings confirmed that misinformation supported with scientific information diffused more widely and for longer on the platform than updated scientific evidence. These misleading tweets were usually tied to prominent users with media or political affiliations. Studying the relationship between political affiliations and diet-related discussions, Karami et al. (2021) brought to light differences in dietary preferences related to the political orientation of the state to which the users belonged. Relationships among geographical, terminological and opinion-related expressions about climate change were also observed by Bennett et al. (2021). Moernaut et al. (2020) found that rhetorical strategies aiming to delegitimise logic and reason and to trigger emotional responses in climate change discussions increased polarisation. On the same topic, Foderaro and Lorentzen (in press) highlighted how arguers used research and other science-related information to justify and reinforce their positions. Even though there were clear signs of scientists participating in the debate, some scientific sources were taken out of context, a practice found to be more common among deniers of human-caused climate change. Finally, while there was a lack of consensus in discussions about vaccination, there were some indications of reasoned interchanges involving bridging participants (Lorentzen, 2021).

Our approach differs from the above-mentioned studies because we do not aim to evaluate arguers’ authority in relation to their impact on other users (Gruzd, et al., 2021; Moernaut, et al., 2020; Lorentzen, 2021). Moreover, the recognition of and trust in academic authority in digital environments (Francke, 2021) is, as has been shown, not exclusively related to scientific expertise (Rieh and Belkin, 1998). Our intention is instead to delve into the practice of using scientific articles to build authoritative arguments. Understanding how scientific findings are used in social media may help in providing tools to assess the relevance and acceptability of means of persuasion, the consistency between facts and arguments, and their adequacy to the intended audience (Dusmanu, et al., 2017; Foderaro and Lorentzen, in press).

Our framework allows us to discern scientific arguments built on expert opinion from arguments based on alleged or pseudo-scientific authority, by evaluating the message and not the arguers, i.e., first the argument, then the source and finally both in the context of the conversation. Analysing how such arguments are shared or reused makes it possible to detect dubious means of persuasion on a large scale.

Method and data

Data were collected using Focalevents (Gallagher, 2022). Focalevents is a command-based Python tool that works with the Twitter API v. 2.0 as long as the user has an academic developer account (Twitter, 2022a). We used the query url:“https://doi.org/” with the Search API, set the start date to September 10, 2021 and the end date ten days later. This resulted in 30,206 tweets, all including the chosen base URL. We then completed the dataset by looking up all the conversation IDs. The API returned 424,187 new tweets, totalling 454,393 tweets, with the first one posted on March 14, 2011 and the last one on September 20, 2021. The differences compared to the previous API are significant. For comparison, Nelhans and Lorentzen (2016) collected 15,731 tweets during one month and Lorentzen et al. (2019) collected 29,796 tweets during two weeks, both through a more extensive set of search terms, including DOI URLs.
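
The completion step can be sketched as follows (again our illustration against the raw API rather than the Focalevents implementation; the field selection and pause length are assumptions):

```python
# Hedged sketch of the dataset-completion step: for each conversation ID
# found in the DOI search, pull the full thread with a conversation_id:
# query, following next_token pagination. Not the Focalevents code.
import time
import requests

SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"

def fetch_conversation(conv_id: str, bearer_token: str) -> list[dict]:
    """Collect every available tweet belonging to one conversation."""
    headers = {"Authorization": f"Bearer {bearer_token}"}
    params = {
        "query": f"conversation_id:{conv_id}",
        "tweet.fields": "author_id,created_at,in_reply_to_user_id",
        "max_results": 500,  # full-archive search returns up to 500 per page
    }
    tweets = []
    while True:
        response = requests.get(SEARCH_URL, headers=headers, params=params)
        response.raise_for_status()
        body = response.json()
        tweets.extend(body.get("data", []))
        next_token = body.get("meta", {}).get("next_token")
        if next_token is None:
            return tweets
        params["next_token"] = next_token
        time.sleep(1)  # crude courtesy pause to stay under rate limits
```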

We discovered five conversations with more than 10,000 tweets, ten with 5,000-9,999 tweets, 46 with 1,000-4,999 tweets, 34 with 500-999 tweets, 144 with 100-499 tweets and 288 with 30-99 tweets. All in all, the presence and extent of the conversations are surprising considering research made with the previous API. From the collected conversations, we selected three conversational threads in the mid-range with regard to length. In the first thread, which consisted of 335 tweets and involved 44 participants, climate change was the main topic. The second thread was about veganism and vegan diets. Here, 304 participants posted 724 tweets. In the third thread, abortion was the main topic. This thread was the longest of the three, with 834 tweets posted by 434 participants.

A conversation can be viewed as a network resembling a tree structure. The root is the first tweet in the conversation, and the conversation then grows in different directions as tweets are replied to. We used the modularity algorithm (Blondel, et al., 2008; Lambiotte, et al., 2009) in Gephi (Bastian, et al., 2009) to divide the conversations into smaller segments. We then extracted the segments in which DOIs were present for further analysis. This entails that the entire conversations are not analysed here; instead, the focus is on how the articles are used in a conversational context. All but one of the articles used in these segments were published in 2002 or later; the remaining article was published in 1985. Prior to analysis, all user identifiers, including user handles mentioned in tweets, were anonymised.
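
We performed the segmentation in Gephi, but the same modularity method (Blondel, et al., 2008) is available as Louvain community detection in networkx, so the step can also be scripted. The sketch below is our illustration; the tweet fields (“id”, “in_reply_to”, “text”) are assumed names, not the actual data schema:

```python
# Illustrative sketch: split a reply tree into modularity communities and
# keep the segments that contain at least one tweet linking to a DOI.
import networkx as nx

def doi_segments(tweets: list[dict]) -> list[set[str]]:
    graph = nx.Graph()
    for tweet in tweets:
        graph.add_node(tweet["id"], text=tweet["text"])
        if tweet.get("in_reply_to"):  # undirected edge: reply <-> replied-to
            graph.add_edge(tweet["id"], tweet["in_reply_to"])
    # Louvain community detection implements the Blondel et al. (2008) method
    communities = nx.community.louvain_communities(graph, seed=42)
    return [c for c in communities
            if any("doi.org/" in graph.nodes[n].get("text", "") for n in c)]
```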

To analyse the argumentative exchanges in depth, we used a combination of quantitative and qualitative content analysis. Our approach to content analysis takes its starting point in White and Marsh (2006). In the quantitative version of content analysis, a coding scheme is produced prior to analysis, based on theory. The qualitative version is, in contrast, inductive: categories and relationships are derived from the data units and constantly evolve as new data units are analysed. We developed a coding scheme based on argumentation theory with the following categories (Table 1):

To assess the authority, reliability, pertinence and consistency of the sources with respect to the argument and to the audience, we used the categories source type (open coding from the data) and argumentative usage. If the source was accessible, the following questions, partially inspired by Wagemans (2011, p. 337), were addressed:


Table 1: Categories and codes. Each tweet was given one code for each of the categories.
Argumentative approach:
    logical
    opinion (i.e., statement without support)
    rhetorical 1.1 (credibility-based persuasion, personal integrity)
    rhetorical 1.2 (credibility-based persuasion, scientific authority)
    rhetorical 2 (emotional persuasion)
    rhetorical 3 (persuasion through demonstration or giving proof)

Interaction type:
    chained tweet
    counter argument
    counter argument against an established scientific position
    disagreement on an established scientific position
    generic agreement
    generic disagreement
    neutral
    other
    topic change

Authority type:
    invested opinion (e.g., administrative or legal authority)
    professional expert opinion (e.g., a virologist, doctor, researcher in the field)
    experiential expert opinion (e.g., a nurse, a laboratory assistant)
    open (referred source is not scientific, e.g., a blog, religious book)
    not present

Traces of expert-to-expert interaction:
    adherence to empirical content
    coherence and consistency
    deductive and inductive reasoning
    not present

Tweet type:
    statement (q is T, because q is Z) with source
    statement (q is T, because q is Z) without source
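
For reference, the scheme in Table 1 can be captured as a small codebook structure, which also makes the one-code-per-category rule checkable. This snippet is our illustration, not the instrument used for the coding; the parenthetical descriptions are shortened to bare code labels:

```python
# Illustrative encoding of Table 1: each tweet receives exactly one code
# per category. Code labels are abbreviated from the table.
CODEBOOK = {
    "argumentative approach": {
        "logical", "opinion", "rhetorical 1.1", "rhetorical 1.2",
        "rhetorical 2", "rhetorical 3",
    },
    "interaction type": {
        "chained tweet", "counter argument",
        "counter argument against an established scientific position",
        "disagreement on an established scientific position",
        "generic agreement", "generic disagreement", "neutral",
        "other", "topic change",
    },
    "authority type": {
        "invested opinion", "professional expert opinion",
        "experiential expert opinion", "open", "not present",
    },
    "traces of expert-to-expert interaction": {
        "adherence to empirical content", "coherence and consistency",
        "deductive and inductive reasoning", "not present",
    },
    "tweet type": {"statement with source", "statement without source"},
}

def is_valid_coding(coding: dict) -> bool:
    """True if a tweet's coding assigns one known code to every category."""
    return (coding.keys() == CODEBOOK.keys()
            and all(coding[cat] in CODEBOOK[cat] for cat in CODEBOOK))
```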

Applying the method

External sources play an important role in online argumentation because, following Toulmin’s schema, they can be used as backing or rebuttal in order to defend or reject a claim (Foderaro and Lorentzen, in press), or to build an argument using the authority of the source. The following sub-sections present some argumentative patterns found through applying our method.

Adversarial expert-to-expert interactions: Climate change


Figure 1: Climate change: scientific authority type (left) and traces of expert-to-expert interactions (right).

The analysed segment shows clear signs of expert-to-expert argumentation. The type of authority debated was professional expert opinion, and the debating parties seem to belong to the same category of experts. The first part of the discussion focused on the acceptability of the scientific authority of an arguer, while the second part was more about defending the sources provided by an intergovernmental organisation (i.e., invested opinion). Even though the arguers on both sides were professionals, we found argumentative patterns proving the presence of deniers of human-caused climate change among the experts.


Figure 2: Climate change: argumentative approach (left) and interaction type (right).

These patterns include different types of argument from authority as a case against human-caused climate change, for example the interaction types disagreement and counter-argument against an established scientific position, as well as the argumentative approach rhetorical 1.2, undermining scientific credibility. They also include a disjunctive syllogism, an appeal to ignorance and, finally, the argumentative usage of sources (charts taken out of context without reference, blog posts and an outdated scientific article) as part of the argument. Some of the arguments were implicit or sketch arguments.

The antagonising party, instead, made use of scientific publications to provide proofs sustaining a claim or a counter-argument (e.g., Foderaro and Lorentzen, in press). Here the interaction type was mostly counter-argument, and the argumentative approach was rhetorical 3 (providing proofs) and logical. While Figure 2 shows argumentative approach and interaction type, there were also contrasting authority types in this argumentative exchange: the expert opinion of an individual and the invested opinion of a community of experts (not to be confused with the argumentum ad populum, which belongs to the same order of arguments).

Climate change is real (q) is true (T) because that (q) was said by a large body of scientists (Z).

is formally the same as claiming

Climate change is a fraud (q) is true (T) because that (q) was said by a scientist (Z)

but it has a different scientific weight grounded on consensus.

The example above makes clear how the argument from professional expert opinion, supported by peers in the relevant scientific community, is proposed as scientifically reliable. In this argumentative exchange, the party defending science also made use of arguments from collegial expert opinion and some ad hominem arguments. However, the latter were used consistently by only one arguer.

Adversarial multiple audience interactions: Abortion


Figure 3: Abortion: scientific authority type (left) and traces of expert-to-expert interactions (right).

What is typical of this interaction is the presence of a multiple audience debating a topic involving science, ethics and rights. Here the authority appealed to was not only scientific, but also invested (legal and political) expert opinion and religious authority. Both of the adversarial parties made use of arguments from expert opinion together with scientific articles as part of the argument. Arguers against abortion, however, made use of sources taken out of context, misinterpreting (deliberately or not) scientific findings and, finally, building weak arguments from analogy (e.g., Wagemans, 2018).

The following is an example of an argument from expert opinion with a scientific article as part of the argument (note: the findings in the article were not related to the arguer’s statement).

The fetal brain begins to develop during the third week of gestation (q is T) because that (q) was said by the source (Z).

Adversarial non-expert-to-non-expert interactions: Veganism

Figure 4: Veganism: argumentative approach (left) and interaction type (right).

In this conversation we did not find traces of expert-to-expert argumentative patterns. However, both of the debating parties made use of scientific arguments. Participants opposing veganism linked scientific publications improperly to build arguments from authority, confusing the correlation between low levels of vitamin B12 and health issues with causation due to veganism. They often accused their antagonists of hypocrisy without any logical rationale and proved their points with anecdotal evidence. The party in favour of veganism, on the other hand, made use of weak arguments from analogy, sometimes refusing any possible relation between diet and health issues and accusing the adversarial party of presenting biased sources when the bias was instead in their own arguments.

Discussion

Social media have made science a topic of discussion for an increasingly rich and diversified audience that participates in debates not just to express an opinion but to make it count in its community of belonging. In online interactions, however, arguments can easily be perceived as sound and trusted without being true, because the system structurally allows loss of context. In such a fragmented landscape, scientific argumentation can paradoxically be used against science. Reasonable and consistent arguments are crucial in the effort to reach consensus, but it is equally important to correctly inform the audience about the inconsistency or falsehood of the means of persuasion, giving it the necessary tools to recognise them. A structured approach to argumentation analysis can be used to identify false or inconsistent arguments on a large scale.

Arguing on Twitter can generate patterns that contradict the real nature of argumentation, which is to seek sharable conclusions and values by giving proofs for claims through constructive and reasonable debate, as has been shown by the failure of political interactions (Lorentzen, 2016) and sociotechnical debates (Venturini and Munk, 2021). As Stevens and Casey (2021) explain, when an argument is adversarial, relevant reasons get ignored, rejected or, as we have demonstrated, not sufficiently checked.

Moreover, reasons and evidence should be consistent in scientific arguments. Building scientific arguments outside of a scientific context adds complexity to their evaluation and requires a multidisciplinary, structured approach. Our approach allows us to scrutinise in context the authority or expertise of the sender through the rigor of the message. It sheds light not only on how science is used in arguments by experts, but also on how it is understood and argued by other actors, from domain experts to the general public. Many valid methods for evaluating the impact of online scientific publications are already available. The argumentative approach applied to Twitter conversations, however, allows for insights into how science is used, argued and understood, and into what needs to be improved for effective scientific communication with the public; it makes it possible to check arguments and facts both in the context of the discussion and in their original scientific context, and to assess whether an argument is adequate to the intended audience and scientifically relevant within and outside the context of the conversation.

Approaching Twitter conversations argumentatively gives insights into how a scientific source can be used as part of an argument to provide proofs or to build arguments from authority based on individual expert opinion, collegial expert opinion and invested authority, and how this practice neither guarantees the truth of the argument (climate change, abortion, veganism) nor improves public trust in science (abortion, veganism). We have seen examples of scientific results taken out of context to support the argument, which has been found elsewhere (Lorentzen, 2021; Foderaro and Lorentzen, in press). Through analysing the consistency of the argument with the invoked authority of the source in its proper scientific context, we found several examples of misinterpreted scientific sources used as authoritative to build biased or weak arguments. Moreover, these sources were proposed as persuasive even when the debating audience was relying on other kinds of authority. The question is whether the practice of using science to build such arguments improves public acceptance of its findings. Should scientific communication in digital environments focus on the soundness of the message and its adequacy to the target audience rather than, for example, on the authority and expertise of the speaker? We argue that, because trust in the senders of a message influences the acceptance of their arguments (Wagemans, 2011; O’Connor and Weatherall, 2019, chapter 2), the use of scientific publications in Twitter conversations should be studied argumentatively, focusing on the rigor of the argument, on the source, and on the consistency between them. This allows us to distinguish experts from pseudo-experts and to reduce biases caused by misplaced trust.

Concluding remarks

The failure of digital debates depends mainly on arguers who sometimes just want to provoke (seen in the discussion about veganism) or to trigger reactions through the use of unacceptable, irrelevant or insufficient arguments (discussions about climate change and abortion), but who are not interested in listening to or accepting reasons (Bail, 2021), or who misinterpret the adversarial position and reply with inadequate arguments (Ballantyne, in press). It can also depend on other variables, for instance: a) the debating audience rejects the appealed authority because it does not accept the expert opinion or does not recognise the opinion as coming from an expert (climate change, abortion); b) the sources are not accessible or not adequate to the intended audience (abortion); c) the debating audience is not interested in truths or reasons, but only in winning a confrontation (climate change, veganism); d) scientific articles are used improperly together with weak or irrelevant arguments, increasing misinterpretation of scientific findings and (false) polarisation (climate change, abortion, veganism); e) the arguers make use of scientific arguments to influence beliefs grounded in other kinds of authority (Zenker and Yu, 2020), for instance religious authority (abortion).

The use of scientific publications is not by itself an indicator of the scientific expertise of the debaters. This is not only because a person can be an expert in one scientific domain and not in another and yet feel compelled to participate in discussions outside of their own area (Ballantyne, 2019), but also because real expertise generates argumentative patterns that are consistent throughout the entire debate. These patterns become even more visible in adversarial interactions, where users react to the arguments of others and to the scientific sources linked. Often a real expert does not need to share a scientific publication as a counter-argument to prove the falsehood of an opponent’s point of view. By simply demonstrating why the argument is unacceptable and/or why the linked source is insufficient or inadequate (cherry-picked and/or irrelevant sources), debaters may prove expertise through the rigor of their argumentation, its scientific relevance within the domain discussed and, not least, their ethical scientific conduct during the online exchange. Approaching conversations argumentatively yields knowledge about the expertise of the sender within the scientific context in which the debate is conducted, even if their professional identity is hidden or unknown.

Limitations

This paper has demonstrated a method for analysing Twitter conversations. It is limited in scope to tweets mentioning a DOI URL during a ten-day period and the conversations these tweets are part of. The search query also affects the presence of participants with topical or scientific expertise. Another limitation lies in the criteria for selecting conversations and segments to analyse. While our attempt was to exemplify with the most common conversations with regard to length, the examples only cover the parts where a DOI is used, and these parts may differ from other parts of the conversations. We also wish to stress that the conversations can be very different in nature, which means that the method may need to be adapted to the characteristics of the conversations.

Acknowledgements

The authors would like to thank the reviewers for their helpful comments, which have contributed to improving the quality of the article.

About the authors

Antonella Foderaro has a Master’s Degree in Library and Information Science from the Swedish School of Library and Information Science, University of Borås, Sweden.
David Gunnarsson Lorentzen is a Senior Lecturer at the Swedish School of Library and Information Science, University of Borås, 501 90 Borås, Sweden. He received his PhD from the University of Borås. He can be contacted at david.gunnarsson_lorentzen@hb.se.

References


How to cite this paper

Foderaro, A., & Gunnarsson Lorentzen, D. (2022). Facts and arguments checking: investigating the occurrence of scientific arguments on Twitter. In Proceedings of CoLIS, the 11th International Conference on Conceptions of Library and Information Science, Oslo, Norway, May 29 - June 1, 2022. Information Research, 27(Special issue), paper colis2230. Retrieved from http://InformationR.net/ir/27-SpIssue/CoLIS2022/colis2230.html https://doi.org/10.47989/colis2230
