Published quarterly by the University of Borås, Sweden

Vol. 22 No. 1, March 2017



Proceedings of the Ninth International Conference on Conceptions of Library and Information Science, Uppsala, Sweden, June 27-29, 2016

The value of discernment: Making use of interpretive flexibility in metadata generation and aggregation

Melanie Feinberg


Introduction. Although the interpretive, contingent nature of information systems is widely acknowledged in information studies, acceptance of this view has not significantly changed practices associated with metadata generation and aggregation. To move beyond this impasse, I propose that we consider forms of value produced through metadata creators’ creative, flexible interpretations of metadata schemas and associated vocabularies.
Method. I use interpretive analysis of three examples from two sets of video game metadata generated through two course projects.
Analysis. I demonstrate the forms of value that arise from understanding differences in order, combination, and selection of controlled vocabulary terms; from understanding the effects of claiming or delegating authorship of summary data; and from understanding differences in the level of abstraction at which resources are identified.
Results. When we understand how interpretive flexibility manifests in aggregated datasets, as demonstrated through the three analyses in this paper, we are able to more fully comprehend different points of view on information resources.
Conclusion. By characterizing the mechanisms through which the editorial discernment of metadata creators produces different kinds of value in aggregated datasets, we can develop information systems that work with our theoretical commitments, rather than against them.

Introduction

That information systems perform interpretively, rather than express a correspondence between documents and external reality, is widely accepted in information studies. When we collect, describe, relate, sort, filter, and retrieve information, we indicate how that information should be understood according to a particular, contingent, human perspective. Sometimes we do this in ways that seem very obvious, as when we place documents in subject classifications (indicating that a book about herbal remedies, for example, is “alternative medicine” or “traditional Chinese medicine” or just “medicine”—or folklore, or magic, or quackery). Sometimes we do this in ways that are more subtle, as when an indigenous mask becomes “art” when it’s in one kind of museum and “culture” when it’s in another kind of museum (Clifford, 1988). But we have few difficulties acknowledging that information systems enact particular arguments rather than documenting universal truth. Indeed, taking the trouble to clarify this as a background assumption for scholarly discourse seems almost unnecessary, rather like bothering to clarify that the earth is round or the sky is blue.

It’s strange, therefore, that no matter how pervasive the belief that information systems enact arguments, we still find it difficult to incorporate this realization into our descriptive practices: creating, aggregating, and making use of metadata. In 1968, Patrick Wilson declared that the subject of a document must be indeterminate, because there was no logically sound means of establishing a “correct” subject. And yet, as Jonathan Furner observed in 2010, people who create and apply classificatory devices to establish and relate subjects, and people who use such devices to find documents, continue to operate as if documents “have” subjects that can be accurately identified. Within philosophy, Furner contends, there is no support for the idea that subjects exist independently of people and can be discovered within works, like chemical elements exist in the world and can be discovered there. Nonetheless, Furner continues, in the context of working with information systems—designing and implementing and maintaining and using them—we still do our best to act as if aboutness can be reliably and accurately determined, as if we were analyzing the chemical composition of an unknown substance. Consistency, for example, remains one of the most pervasively used criteria of metadata quality, even as acceptance of a stance like Wilson’s implies that consistency in subject determination is impossible (and as inter-indexer reliability studies have also demonstrated) (Park, 2009; Markey, 1984; Olson and Wolfram, 2008).

If anything, this dissonance—where we “know” that information systems enact particular perspectives, and yet we nonetheless attempt to create, maintain, and use information systems as if it were possible to do so in a neutral, universal, scientific manner—is increasing, not diminishing. The conventional understanding of data aggregation depends upon a notion of system interoperability that invokes predictably reliable, accurate assignment of metadata values, as facilitated by the development of standardized schemas, vocabularies, encoding schemes, and content generation rules (Zeng and Chan, 2009). We envision the resulting integrated datasets, whether they comprise scientific observations, automated logs of Web site transactions, or descriptions of cultural heritage artifacts, as enabling us to discover new knowledge, develop new services, and envision new products.

There may indeed be tremendous opportunities to imagine and generate new forms of value from aggregated datasets. But it is dangerous to imagine that merely using the same schemas and vocabularies will reliably constrain interpretive flexibility—that we will determine the same subjects for documents if we use the same vocabularies, just as we might calculate the same pH of a sample using the same indicators and apparatus. (And, of course, the variability in a scientific measurement like pH assessment is likewise greater than one might naively think, as Geoffrey Bowker (2000) reminds us.)

If we hope to integrate datasets responsibly and use them well, we need to understand the kinds of interpretive differences that inevitably manifest even when standards are employed. If multiple metadata records for the same work assign different subject terms, for example, how are we to comprehend and use this information? Does understanding the range of diversity tell us something? I suggest that interpretive differences in metadata generation might themselves constitute useful evidence regarding the phenomena being described.

In this paper, I begin to examine interpretive flexibility in metadata generation from this perspective: how to productively reconcile our ontological commitments about information systems with our practical decisions regarding their creation, maintenance, and use. First, I present a general discussion of the role of judgment in creating and applying information structures. Next, I work through three examples from metadata records generated by students in two separate master’s-level courses. Through these examples, I demonstrate some of the different modes of discernment that appear in the dataset and discuss how understanding these manifestations of interpretive flexibility produces alternate forms of value for the users of information systems.

The role of judgment in creating and applying information structures

In my first semester as a master’s student at what is now the iSchool at the University of California, Berkeley, our required course in information organization and retrieval initiated an ambitious class project in which all of the students in our cohort (about 40 of us) worked collaboratively to create a massive, faceted indexing vocabulary for a large set of digital photos that we had, as a class, earlier generated. My affinity for that project cemented my nascent goal of becoming a classificationist: I remember well the creative challenge of constructing a well-balanced, comprehensive, coherent set of concepts, and the collaborative challenge of obtaining agreement within the entire group on the vocabulary’s content, structure, and scope. I was very proud of our work, and the instructors assessed our product as excellent. Then, however, a question on the final exam made me realize that we had neglected to consider very basic interpretive dimensions regarding how our marvelous vocabulary might actually be applied to photos in practice. We had, for example, not considered questions of exhaustivity and specificity, or whether indexers should select terms for every item depicted in a photo, or only for the most important ones—and wait, how would an indexer decide what the “most important” items were?

Indeed, we had not anticipated any notion of editorial discernment in the implementation of our indexing language, as opposed to the creation of the language itself. When we were immersed in the project of designing the vocabulary, its application seemed like a mechanical task, not a creative or interpretive one. We assumed that, if we had indeed created a comprehensive and logical structure, then assigning terms to photos would be an activity of precision, like scientific measurement. Then, suddenly, in the final exam, I realized that the role of the indexer was not trivial: the utility, the meaning, and the character of any collection that the vocabulary was used with would be determined just as much by the decisions of many indexers as it would by the decisions of the vocabulary designers.

And yet even in this moment of enlightenment, I imagined that we would probably just need to create some detailed editorial guidelines, which would constrain the role of individual judgment regarding the indexing process. If our design was adequately documented, then the indexers would apply it properly (in other words, as we, the vocabulary creators, had intended). Each interpretive act would continue to rely primarily on the classificationist’s work, not on the classifier’s. We might, for example, describe rules for identifying the important elements of photos: the focal point of the composition, anything with proper names, the location. With the proper rules, reliable description would be possible.

I offer this anecdote here because I think that forgetting or minimizing the role of the classifier (or indexer, or cataloger, or metadata creator—I use all such terms interchangeably in this paper) is not merely the misapprehension of a student, but remains a representative position within information studies. Despite pervasive acceptance, within the field, that organizing systems enact purpose-driven interpretations of the world, instead of documenting its objective truth, we have made little progress when it comes to incorporating this realization in practice, particularly as regards the implementation of schemas and vocabularies, and not their design.

For example, Jens-Erik Mai persuasively demonstrates how the lingering influence of modernity informs continued attempts to create objective, global systems of information organization that can be reliably standardized across contexts (Mai, 2011). As part of his argument, Mai asserts that Clare Beghtol’s (2003) contention that “professional” and “naive” classifications are different activities is untenable, as is Elin Jacob’s (2004) distinction between everyday “categorization” and systematic, rigorous “classification.” Beghtol and Jacob claim that the organization of documents for retrieval, as performed by information professionals in line with established rules and standards, is different in kind from other forms of categorization (such as everyday discourse, in Jacob’s work, or in scholarly research, in Beghtol’s). Beghtol and Jacob’s belief in different “natural kinds” of organizing activities (the systematic and professional vs. the everyday and flexible) enables them to preserve the possibility that, with the guidance of established principles, rules, and procedures, some facsimile of neutral, objective, reliable classification can be attained by skilled experts.

Mai demonstrates the fallacies in such arguments. All classificatory processes, according to Mai, from everyday interactions between neighbors to the assignment of books to classes in the Dewey Decimal Classification, operate within a framework of principles, rules, and procedures—and yet all these processes, likewise, are also creative and flexible: they all involve a significant measure of editorial judgment. Even if we had developed detailed indexing guidelines for the photo vocabulary in my master’s course, any indexer’s application of those rules would have still functioned as a situationally contingent interpretive act, one in which the systematic application of the indexing rules could not reliably produce an appropriate outcome.

In his discussion, Mai pays particular attention to the role of global standards as a key element of the “modern” perspective on classificatory work. By making descriptive acts appear to be primarily matters of technical implementation (or rule following), standards enable classifiers to maintain an air of impartiality and objectivity. Mai declares that such reliance on rules is not merely wrong, but dangerous: “a declaration of neutrality is a declaration that one assumes that one’s view is a view from nowhere, that one somehow holds a view that is superior to others’ views” (Mai, 2011, p. 724). Mai suggests that information organization research and practice should abandon the modern perspective with its troubling ethical problems and impossible goals. Instead, Mai advocates for a pluralistic approach to information systems, one that acknowledges the situational, purpose-driven embeddedness of classificatory decision-making.

Few within information studies would substantively argue with Mai’s claims regarding the impossibility of achieving objectively reliable information systems (other examples of similar arguments include work by Jack Andersen (2006), Jonathan Furner (2010), and Joacim Hansson (2006), as well as Bowker and Star (2000)). And yet when it comes to the creation, use, and maintenance of actual systems, we nonetheless still seem to believe that implementing (if not designing) a metadata schema or controlled vocabulary is primarily a technical task, as guided by the firm constraints of rules and standards.

Why does faith in such rules persist despite their inevitable insufficiency? Certainly the plethora of detailed rules in library cataloging has not eliminated the employment of cataloger judgment. Allyson Carlyle (2015) describes a variety of situations in descriptive cataloging where rules have been insufficient in the face of deceptive works or complex realities, such as works authored by fictitious characters (e.g., a book that claims to be authored by Miss Piggy, the muppet). However, while Carlyle observes many situations in which catalogers must employ professional expertise in the interpretation of rules, especially in unforeseen contexts, she also sounds a note of caution. Enthusiastic cataloging students, she notes, sometimes believe that being user-centered enables them to violate core cataloging principles, such as representing an item as it represents itself. (Such a cataloger might break the rules to “fix” an apparent error in the catalog record.) In fact, Carlyle suggests, users’ needs are best facilitated when catalogers follow the rules as best they can, and not when catalogers enact potentially idiosyncratic approaches as individual interpreters. (In this, Carlyle echoes the position of Bernd Frohmann (1990) in his proposal to develop robust indexing guidelines based on documentary practices. Frohmann proposes that indexers need better rules, ones that explain and justify themselves based on accounts of what people do with documents.) Drawing on a set of interviews with professional catalogers, Gretchen Hoffman finds Carlyle’s view to be common. Catalogers, Hoffman (2009) claims, believe that following cataloging standards assiduously is their best means of serving users (as opposed to, for example, conducting user studies in their local contexts and developing situationally specific guidelines). Hoffman suggests that “To catalogers, standards represent users, so to follow standards is to meet users’ needs” (Hoffman, 2009, p. 635).

If user-centered practice is widely perceived as tightly coupled with following standards, it is not surprising that the three quality criteria identified as most prevalent in Park’s (2009) review of metadata assessment literature are accuracy, comprehensiveness, and consistency, which can all be linked to the implementation of standardized rules. Records are accurate if they employ the sources of information recommended by standards and follow standardized rules for data entry (spelling, capitalization, and so forth); they are comprehensive if they use the appropriate elements in standardized schemas; and they are consistent if the content within these standard elements is expressed with the same syntax and uses the same sorts of values (for example, if a single controlled vocabulary is used). Not surprisingly, therefore, Park remarks that “Metadata guidelines function as an essential mechanism for metadata creation and quality control” (Park, 2009, p. 221). Metadata that follows standards is good metadata.

When metadata that follows standards turns out not to be good metadata, on the other hand, we are more likely to locate the solution in changing the standards than in adapting our understanding of standards themselves. Accordingly, while advocates of “critical cataloging” might campaign to change classification schemes or subject headings that they deem to be harmful, they typically focus on “eliminating bias” by removing outmoded terms. As Emily Drabinski observes, critical catalogers hope to “fix” cataloging, not problematize the whole enterprise:

These critics of LCC and LCSH share one core belief: classification schedules and subject headings promulgated by the Library of Congress are often wrong and should be corrected. The problem is not that cataloging happens, but that it happens incorrectly. Critical catalogers are positioned as outsiders to the cataloging process, resisting biased controlled vocabularies and fixing LCSH for the rest of us. Missing from these arguments is a reckoning with the problem of cataloging itself. (Drabinski, 2012, p. 100)

Drabinski, an instructional librarian, focuses her attention on educating users to understand the catalog as an amalgamation of individual decisions set against the backdrop of evolving cataloging rules and information organization structures (subject headings, classification schemes). Drabinski’s approach complements the work of Tennis (2012) and Buckland (2012), who describe our changing perception of vocabularies over time as inevitable. (As Buckland puts it, the subject headings of today represent the knowledge of the past as employed to anticipate the information-seeking needs and preferences of the future—of course some of these predictions are going to be wrong, and we will come to view some terms as offensive, old-fashioned, or unsupported by current understanding of the evidence.) But the point is not merely that category systems will never achieve unbiased perfection; it is also that the process of applying category systems to describe information resources—the implementation of classification, and not just its design—can never be stripped of interpretive flexibility.

Drabinski’s approach, which is inspired by queer theory, is helpful because it casts the existing library catalog as a pluralistic one, where many perspectives (in the form of many judgments and decisions) meet in the context of a single collection. (The catalog might not have been created to be pluralistic, but it is; as all implemented information systems to some extent must be. This is part of what Tennis’s work, and Buckland’s, as well as Mai’s, implies.) Drabinski’s understanding of the existing catalog as a pluralistic environment also provokes questioning of metadata quality. If the best use of the catalog involves teaching users to understand its elements of interpretive flexibility, then is metadata quality best achieved through accuracy, comprehensiveness, and consistency? How does pluralism manifest itself through creative, flexible interpretation of schemas, vocabularies, rules, and guidelines, and what kinds of utility can be achieved through understanding the effects of interpretive flexibility within a metadata system? The findings in the next section of this paper illustrate how we might begin to answer this important question.

Of course, the utility of interpretive flexibility will not be directed toward retrieval—or at least, it will not be directed toward the form of retrieval that we have for so long imagined is its objective, universal form. Aren’t our ideals of retrieval themselves fraught with the lingering influence of modernity? This is not a fanciful suggestion. In writing about note-taking practices in the early modern period, the historian Ann Blair describes two different ways in which notes were perceived as potentially valuable to others besides the note-taker: when the notes were produced by an eminent scholar, or when the notes were especially comprehensive and systematically organized. In the first case, the notes represent the uniquely fine discernment of a particular mind. In the second case, the notes are the result of meticulous, and yet somewhat mechanical, labor. In my reading of Blair’s account, these two forms of value are linked to different perspectives on the note-taking process that Blair relates. Some instruction manuals for early modern note-takers emphasize the role of discernment in selecting interesting passages from one’s sources and arranging these under headings that reveal the note-taker’s unique interest in the excerpt. Other instruction manuals emphasize the comprehensive decomposition of any potentially useful selection under standard headings for more reliable access over time.

In Blair’s book, Too Much to Know, the mode of note-taking focused on principled discernment regarding selection and organization loses favor to the mode of note-taking focused on standardized, comprehensive system-building. Blair traces within this systematic mode of note-taking the beginnings of early modern reference books, such as multi-volume compendia of excerpts, as well as bibliographies, library guides, dictionaries, and commonplace books. Blair’s focus is on viewing these early reference genres as forms of “information management” that enable access to and discovery of a burgeoning landscape of works. She devotes much attention to the “finding devices” used in early reference genres, discussing the development of features that we now associate with information management: predictable access via alphabetized indexes, comprehensiveness of selection, and an accompanying position of neutrality for the compiler. While medieval authors of compiled quotations emphasized their unique ability to select the best bits from their sources (the “flowers” of a work; medieval compilations were accordingly called “florilegia”), early modern compilers deferred this judgment to the reader (or user). The compiler’s claim to scholarship shifts from one of particular discernment in the selection, arrangement, and relation of material to one of painstaking, but relatively “mechanical,” labor. Concurrently, reference books come to be seen as useful “for all” people and purposes, and the mode of their production becomes more collaborative (helpers assist with the work, and new editions are passed to subsequent compilers, for example).

As one illustration of increasing emphasis on information management techniques in these new reference genres, Blair describes the emergence of alphabetical indexes as primary organizational structures, with a corresponding decrease in complex systematic arrangements. Early versions of a compendium called the Theatrum Humanae Vitae, prepared by the Swiss scholar Theodore Zwinger, included complex branching diagrams that presented and contextualized the work’s dense organizational structure. The Theatrum was an extensive multivolume work in many books, and each book began with its own branching diagram, which, according to Blair, “was meant to provide the rationale for the headings in the text and the order of their presentation by offering an ideal treatment of the topic.” While this arrangement worked poorly as a finding device, it “offered the logical scheme underlying Zwinger’s choice of headings and their arrangement” (Blair, p. 149). Although Blair does not emphasize this, the organizational structure was a means of demonstrating Zwinger’s particular scholarly discernment, a form of value focused on interpreting the selected material, rather than a form of value focused on retrieval. In subsequent editions of the Theatrum, new editors abandoned Zwinger’s intellectually sophisticated arrangement, opting for alphabetical order. In Blair’s tale, the form of value associated with retrieval is preferred to the form of value associated with unique discernment.

However, subsequent editors don’t bother rearranging the material within Zwinger’s original subheadings: “Many of Zwinger’s subdivisions were maintained within Beyerlinck’s articles (especially the longer ones) but no diagram was provided to chart them...” (Blair, p. 150). To me, this aside epitomizes one of the most interesting elements of Blair’s account: that which she doesn’t investigate, such as the elements of editorial judgment that undoubtedly persist (and that perhaps thrive!) within the external trappings of objective, predictable alphabetical order. In other words, Blair’s history does not (of course!) demonstrate that editorial judgment disappeared from these early modern reference genres; it demonstrates that such editorial judgment was no longer celebrated, and so it was left unacknowledged beneath veneers of presumably neutral and objective alphabetical arrangements. This is the same situation that we have today.

As I discuss the findings of my case studies in the next section, I don’t mean to suggest that the form of value associated with standardized, comprehensive system-building—that of modern, retrieval-oriented information management—is worthless. Of course it is valuable. But if, as we invariably agree when pressed, this comprehensive, standardized system-building cannot achieve the false perfection that it seems to promise, then it seems worthwhile to rediscover alternate forms of value that we might gain from identifying and characterizing the role of discernment in all activities of selection, arrangement, and description. Moreover, by understanding how metadata works to generate meaning in a more robust way, we can make better, more informed use of our information systems. When Bruno Latour demonstrates the processes by which scientists reduce the unruly complexities of the Brazilian landscape to a set of abstract patterns, from which to derive highly simplified conclusions, he doesn’t do so to throw out the scientific method and its useful products. He makes the work of scientists seem magical and slightly absurd so that we can do science, and use the outcomes of science, with more awareness of our activities and their consequences. Similarly, when Geoffrey Bowker writes about the many challenges associated with making biodiversity data interoperable, he doesn’t do so to imply that we can achieve some perfect state of awareness (just like we can’t “fix” the “bias” in subject headings). Yes, we can always revise our schemas, vocabularies, and guidelines to better constrain interpretive flexibility—but we might also locate (and celebrate) different forms of value within (the inevitable) acts of discernment. The next section provides three examples of this.

Value from discernment in metadata generation and aggregation: three examples

The following three examples are drawn from data collected via two master’s-level course projects, conducted in the spring and fall semesters of 2015.

The first course (A) was an introductory elective class in information organization. In this project, 15 master’s students each generated 10 metadata records using version 2.1 of a video game metadata schema designed by Jin Ha Lee and her GAMER group colleagues at the University of Washington (UW GAMER group, 2015), along with several controlled vocabularies designed by the GAMER group for use with the video game schema (Mood, Visual Style, Genre, and Narrative Genre). Each student created records for three common games, selected in consultation with the GAMER group as being illustrative of various game types: Skyrim, Journey, and Final Fantasy 7. The remaining seven records were created for games selected individually by each student. There were no restrictions on which games students could pick (and so their selections could duplicate each other’s). Students created their records in two ways: in an Excel spreadsheet and in a collection management system, Collective Access, that had been set up in accordance with the video game metadata schema and its vocabularies.

After creating this dataset of 150 metadata records, the students were tasked with analyzing the dataset to each develop their own position on the role of interpretive flexibility in metadata generation and aggregation. Their individual thinking in this area was supplemented with in-class exercises and discussion meant to focus attention on areas of relative agreement and disagreement in the video game metadata, and what the effects of those similarities and differences might be. Students wrote 3000-word essays describing their ideas and proposals regarding semantic diversity in metadata, both for the specific case of the video game metadata and in general. (All of the students in course A agreed to allow these essays to be used for research purposes, following approved IRB processes.)

The resulting data from course A was therefore a metadata collection of 150 records (in both spreadsheet form and in Collective Access), supplemented with a set of 15 student essays. The use of students, rather than working professionals, to generate and reflect on the metadata was strategic. These students spent a semester thinking quite hard about metadata in both practical and theoretical terms; in a previous course project, they had all designed individual metadata schemas and written detailed guidelines for the schemas’ implementation. Moreover, they were able to read various articles about the creation and evolution of the video game metadata schema (Lee et al., 2013; Lee et al., 2015), as well as more theoretical treatments such as Mai (2011), Tennis (2012), and Buckland (2012). Similarly, the choice of video game description was strategically motivated. As described by Lee et al. (2013), video games are very complex works, with immense diversity in form, structure, and content; video games, therefore, represent a descriptive challenge that holds significant potential gain from standardization. The combination of a semantically complex set of entities with an emerging set of standards (as designed by Lee’s GAMER group) constitutes an excellent environment to explore issues of interpretive flexibility in metadata. Moreover, some of the 15 students were enthusiastic gamers, while others had little familiarity with video games.

The second course (B) was an advanced elective class in “metadata architectures,” taught online at a different university. The project for this course involved an additional component. Prior to generating metadata, the 11 students were split into 4 groups, with each group assigned to a different user community: public libraries, digital archives, museum informatics, and digital humanities. Each of these 4 groups wrote a set of local implementation guidelines for their assigned community to apply version 2.1 of Lee’s video game metadata schema. Then, in the metadata generation phase of the project, each student contributed 12 records: 6 records using the GAMER group’s schema documentation as supplemented by the local guidelines they had generated in their groups, and 6 records using a different group’s local guidelines. (And so, for example, a student in the public libraries group would describe 6 records with the public libraries guidelines and 6 records with the museum informatics guidelines.) Each set of 6 records included 2 common games: for their own guidelines, Final Fantasy 7 and Skyrim, and for the other group’s guidelines, Journey and Skyrim (and so Skyrim was described by each student twice, using two different sets of guidelines).

The resulting data from course B was therefore a metadata collection of 132 records (in both spreadsheet form and in Collective Access), supplemented with a set of 8 student essays (3 students either declined to participate in the study or did not submit informed consent forms).

Combining course A and course B, we have a metadata collection of 282 records, including 2 works (Journey and Final Fantasy 7) with 26 records each, and 1 work (Skyrim) with 37 records. These records are supplemented by 23 total student essays.

In the following sections, I consider three illustrative examples of interpretive flexibility from this dataset and discuss how differences in metadata generation can produce specific forms of value when the metadata is aggregated. In creating these three accounts, I focus on surfacing different types of interpretive flexibility that appear in the dataset, and on characterizing the forms of value we might derive from considering such variation as a resource to be exploited, rather than as a problem to be fixed. As such, these discussions do not represent an exhaustive, comprehensive analysis of this data. By suggesting how we might both learn from and make use of interpretive flexibility in metadata generation, these preliminary, partial analyses are meant to represent the potential of this general approach.

Example 1: Order and combination in selected controlled vocabulary terms (the Mood element)

For my first example, I use the application of the Mood element for a single video game to show how the aggregated effects of order and combination in selected controlled vocabulary terms reveal several distinct modes of editorial discernment. In depicting multiple coherent accounts of the mood of this game, these modes of discernment constitute potentially valuable information for dataset users.

In the documentation for version 2.1 of the video game metadata schema, the Mood element is described as follows:

Definition: The pervading atmosphere or tone of the video game which evokes or recalls a certain emotion or state of mind.

Instruction: Identify the prevailing mood(s) of the game according to the CSI; for most games, the experience of playing the game or watching the gameplay video may be the most reliable source of this information. Select the most appropriate term(s) from the CV for this element. If no mood is applicable, write N/A.

For physical games, the schema documentation specifies the CSI, or chief source of information, to be (in this order) the box, the manual, the storage media (such as a disk), the game title screen or credits, and the experience of playing the game. For digital games, the CSI is specified as (again, in this order) the information page on the official Web site or app store, the game title screen or credits, and the experience of playing the game. Preferred secondary sources (if the CSI is insufficient or difficult to obtain) include, in order, the official Web site, official YouTube videos, magazine articles, strategy guides, fan wikis or Web sites, Wikipedia, and GameFAQs (a Web site that documents video games).

We used version 3.1 of the Mood controlled vocabulary created by the GAMER group. Preferred terms in this vocabulary included:

All preferred terms in the Mood vocabulary are defined in scope notes; examples of games that fit that term (in the opinion of the vocabulary creators) are given. The vocabulary also includes synonyms (for example, synonyms for Horror include Disturbing, Eerie, Macabre, Paranoid, Scary, Unsettling).

For this example, I discuss the Mood values given to one of the three common games, Final Fantasy 7. In the Mood vocabulary documentation, Final Fantasy 7 was one of the provided examples for the preferred term “Sad.”

From course A, the Mood values were:

One can make a number of interesting observations from this distribution of interpretive judgments regarding the mood of Final Fantasy 7. While 10 different terms were used to describe the mood of this game, in different combinations, 10 of the terms were never used. Moreover, while 4 terms (Romantic, Aggressive, Immersive, and Imaginative) were used only once, the rest were all used multiple times:

Could it be that, as is sometimes asserted for social classification systems, the Mood choices would converge around these more popular terms? If this were the case, then we might “solve” the problem of interpretive flexibility by taking the three or so most popular terms.

If we consider the order and combination of terms, in addition to their prevalence, however, a more complicated picture emerges. In 5 out of 6 uses, Sad is applied by itself (perhaps because these metadata creators accepted the vocabulary designers’ opinion about Final Fantasy 7’s mood; but intention is not necessary to interpret these decisions). Adventurous, in contrast, is used alone only once. Adventurous is used with Comradery three times, with Intense twice, and with Mysterious twice. It is used with Sad in only 1 of the 15 records, even though Sad and Adventurous are otherwise the most commonly applied terms.
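Tallies of prevalence and combination like these are simple to compute once the records are in hand. As a minimal sketch (using a few hypothetical Mood records for illustration, not the actual course data), one might count term prevalence and pairwise combinations as follows:

```python
from collections import Counter
from itertools import combinations

# Hypothetical Mood values for a handful of records; the actual
# course A data is not reproduced here.
records = [
    ["Sad"],
    ["Sad"],
    ["Adventurous", "Comradery", "Intense"],
    ["Adventurous", "Comradery", "Mysterious"],
    ["Adventurous", "Intense", "Mysterious"],
    ["Dark"],
]

# Prevalence: how often each term appears across all records.
prevalence = Counter(term for record in records for term in record)

# Combination: how often each pair of terms is applied together in
# the same record (each record is sorted so pairs have a stable order).
pairs = Counter(
    pair
    for record in records
    for pair in combinations(sorted(record), 2)
)

print(prevalence.most_common(3))
print(pairs.most_common(3))
```

Prevalence alone would suggest convergence on a few popular terms; it is the pair counts that reveal when two popular terms, like Sad and Adventurous, rarely co-occur.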

From this evidence, it seems that there is not a single account of the mood of Final Fantasy 7; it is not (in all but one determination) Sad and Adventurous. Instead, there are several competing accounts of the mood. In one, Final Fantasy 7 is Sad; in another, Final Fantasy 7 is Adventurous (and involves Comradery in an Intense, Mysterious way).

The data from course B adds to this interpretation.

From course B, the Mood values were:

Three of the values are blank because one of the sets of local guidelines (for public libraries) determined that Mood was unlikely to be useful in that context and directed metadata creators to skip it. (The other blank value was an independent decision of the metadata creator.)

In this set of values, we see even less diversity in term prevalence than in course A: only 6 out of 20 terms are used, all of them more than once:

Still, despite these regularities in term usage, the overall picture of the Mood element for Final Fantasy 7 becomes more complex, not less. In considering the order and combination of terms, we see that Sad and Adventurous are still most often used separately, and not together. Sad is the only term used alone or with one other single term (Dark); Adventurous is always used with at least 2 other terms.

Altogether, the evidence from the 26 records in course A and course B continues to provide support for two complementary interpretations of Final Fantasy 7’s mood: one, that it is primarily Sad (perhaps in agreement with the Mood vocabulary’s designers); and two, that it is a more complex mixture focused around Adventurous, with elements of Comradery, Intense, Mysterious, and Immersive. Both of these interpretations occasionally involve Dark, and this term may form a conceptual bridge between the two accounts (in course A, there were two records that used Dark and neither Sad nor Adventurous). Moreover, from course B, we have a competing interpretation that claims Mood is irrelevant for describing games.

What does such an admittedly simplistic analysis of interpretive flexibility for a single game show? Certainly, it does not represent a definitive interpretation of Final Fantasy 7’s mood. But it does, I suggest, demonstrate several points regarding the potential value of interpretive flexibility in metadata generation and aggregation. This analysis shows that, at least in this example, there is neither descriptive chaos nor convergent regularity, despite the use of a standard schema and associated controlled vocabulary. In some ways, the data is remarkably consistent for an unavoidably subjective, interpretive attribute—no one thinks that Final Fantasy 7 is Cute or Humorous. And yet, just as clearly, there are multiple, distinct, principled differences in judgment. Importantly, these particularities of discernment appear when we consider not just which terms are used, but how the terms are used. Here, I looked at order and combination of terms, but there are many other, more complex modes of interpretive analysis that could be employed. For example, one could attempt to identify general curatorial styles by looking at such choices across the records of different metadata creators (do the people who used the single term Sad here always use single terms for this or other such elements?). One could attempt to identify the intersection of individual style with the influence of different local guidelines (perhaps by comparing different styles across course A and course B). Indeed, the possibilities are almost endless—and likewise rich with potential utility. By understanding how pluralism manifests within an aggregated dataset, we can, like medieval curators of florilegia, facilitate informed and inventive selection and use of material, putting the discernment of metadata creators to work for the user.

Example 2: The aggregated effects of claiming or delegating authorship (the Summary element)

As my second example, I use the aggregated effects of claiming or delegating authorship in the Summary element, again looking at the example of a single game, to demonstrate several modes of editorial discernment in the application of this element.

The documentation for version 2.1 of the video game metadata schema provides this information about the Summary element:

Definition: A brief statement or account of the main points of the game.

Instruction: Write a brief summary of the game's narrative and/or main features in a free text form.

In course A, values in the Summary field represent various modes of discernment. Some of the summary values are overtly transcribed from other sources, which are cited. For example:

“Skyrim's main story revolves around the player character's efforts to defeat Alduin the World-Eater, a dragon who is prophesied to destroy the world. Set two hundred years after the events of Oblivion, the game takes place in the fictional province of Skyrim. Over the course of the game, the player completes quests and develops their character by improving their skills.” - Wikipedia (http://en.wikipedia.org/wiki/The_Elder_Scrolls_V:_Skyrim)

Seen in isolation, the Wikipedia summary does not seem to represent much judgment on the part of the metadata creator. But other transcribed summaries present the game differently, such as:

The player begins the game imprisoned by the Imperial Legion, being led to their execution as a result of crossing the border into Skyrim. As the player lays their head on the chopping block, the dragon Alduin attacks. In the midst of the chaos, several Stormcloaks, along with their leader and fellow prisoner, Ulfric Stormcloak, assist in the player's escape. The player may choose between the assistance of Ralof, the Stormcloak who arrived with Ulfric, or Hadvar, the Imperial soldier responsible for reading off the names of the prisoners being sent to their execution. The player later learns that he/she is Dovahkiin, or Dragonborn, a person charged with the duty of defeating Alduin and the dragons. Eventually, the player meets Delphine, and Esbern, two of the last remaining Blades.

(This summary is not cited, but it is taken directly from a Web site called Best Video Games of All Time; I determined this by inputting the summary into a Web search engine.)

Another completely different summary is:

The Empire of Tamriel is on the edge. The High King of Skyrim has been murdered. Alliances form as claims to the throne are made. In the midst of this conflict, a far more dangerous, ancient evil is awakened. Dragons, long lost to the passages of the Elder Scrolls, have returned to Tamriel. The future of Skyrim, even the Empire itself, hangs in the balance as they wait for the prophesized Dragonborn to come; a hero born with the power of The Voice, and the only one who can stand amongst the dragons.

(Again, this summary is not cited, but a Web search determined that the original source for this summary comes directly from the official game site.)

In course A, 10 apparently transcribed summaries are taken from 6 different sources, only 2 of which are explicitly cited with references (another uses quotation marks but doesn’t include a source). The Wikipedia summary is used 4 times: it is used verbatim in another record (this time without citation); it is also used in abbreviated form once (uncited) and the first sentence is used in combination with other content (also uncited) in another record. The summary from the official game site (“The Empire of Tamriel is on the edge”) is used twice, once in abbreviated form.
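Identifying a transcribed summary, as I did with a Web search engine, can also be approximated programmatically. The sketch below is a rough heuristic (the source text is an abbreviated placeholder, and real reuse detection would need fuzzy matching to catch abbreviated or lightly edited excerpts): it normalizes whitespace and case, then tests for verbatim containment.

```python
import re

def normalize(text):
    # Collapse whitespace and lowercase so trivially altered copies match.
    return re.sub(r"\s+", " ", text).strip().lower()

def likely_source(summary, sources):
    """Return the name of a source whose text contains the summary
    after normalization, or None if no verbatim match is found."""
    s = normalize(summary)
    for name, text in sources.items():
        if s and s in normalize(text):
            return name
    return None

# Placeholder source text, for illustration only.
sources = {
    "official site": (
        "The Empire of Tamriel is on the edge. The High King of "
        "Skyrim has been murdered. Alliances form as claims to the "
        "throne are made."
    ),
}

print(likely_source("The Empire of Tamriel is on the edge.", sources))
```

A matching result indicates only where the text can be found, not necessarily where it originated; as the course B data shows later, those are separate questions.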

As a metadata strategy, transcription is typically employed (for example, in library cataloging) to increase authoritativeness, credibility, and objectivity. In library cataloging, the title page is designated as the chief source of information because it is perceived as an accurate, trustworthy, and objective source. (We might see the cover as being created partly to sell the book, for example, but we don’t see the title page that way, at least not today; from a historical perspective, however, Ann Blair’s book describes early modern title pages as being written to advertise key features, like an index, to potential customers.) Video games don’t have a pervasive, apparently neutral content element like a title page. The “official” chief sources of information suggested in the schema documentation serve multiple purposes, including marketing and instruction for players (functioning more, perhaps, like the early modern title pages described by Blair). The preferred secondary sources may also serve multiple (different) purposes, including the cultivation of a particular audience (for game-related commercial publications as well as fan sites and wikis). For video games, deciding to transcribe the summary requires the metadata creator to make multiple editorial judgments. In the case of course A’s records for Skyrim, these judgments represent three strategies:

Also of interest here is the focus of each transcribed summary. The official Web site emphasizes the atmosphere of the world in which the game is played; fan sites emphasize details of characters and events in that world; Wikipedia focuses on the player’s potential actions and in-game goals. Although the metadata creator who elects to transcribe a summary might be delegating authorship of this element in a technical sense, the creator’s judgment is still very much at play.

In contrast, the 3 summaries apparently written by the metadata creators present a different view than the transcribed material (2 records in course A left this field blank). These authored summaries focus on the player’s experience, sometimes omitting all details of the gameworld and overarching narrative:

An open world fantasy that allows the player to explore the vitural (sic) world as any character and create your own adventure. (summary 1)

Player tries to defeat evil dragon who is trying to destroy the world. (summary 2)

After escaping executing, (sic) the player develops necessary skills in a nearby village before setting off on various quests, including killing a dragon. Dragons pose a constant threat to the game's world. (summary 3)

While the details of the game are vague, the language in these summaries is plain and straightforward: much more “neutral” than that of the official game site, for example.

As implemented in course A, choosing to claim or delegate authorship of the summaries—to create or to transcribe—shows editorial judgment in equal measure, both in the type of content included (focused on the gameworld atmosphere, on game narrative details, or on player experience) and in the content’s tone (rendering the game simple or complex, described plainly or evocatively). But those choosing to delegate authorship display additional forms of judgment: which source to select, and whether and how to cite the selected sources.

As with the Mood element, the data from course B adds nuance to this account. In course B, there were 22 records for the Skyrim Summary element, because each participant described Skyrim twice (once with the guidelines created by the participant’s own group and once with another group’s guidelines). The local guidelines developed in course B did not significantly extend the schema documentation for this element, although one set of guidelines mentioned that sources should be cited (the guidelines did not specify how). While many of the 11 participants used the same summaries for both versions of their Skyrim records, 3 used different summaries for each (there was no correlation with the local guidelines used).

In course B, 12 of the 22 records used the version from the official Skyrim Web site (“The Empire of Tamriel is on the edge”). Of these, 2 cited this content as from the Internet Movie Database (IMDB), and 6 cited it as from the IGN Web site (a publisher of game-related media); the remaining 4 instances are uncited. An additional 6 of the 22 records transcribe summaries from 4 other sources (two each from Moby Games and the ESRB, and one each from Steampowered and Wikipedia). Four records appear to have been written by the metadata creators themselves:

During a civil war, a regular person is revealed to a chosen hero, called a Dragonborn, who uses magic to defeat dragon's attempt at world domination. (summary 1)

A civil war has started between the stormclock (sic) confederates of the north and the Empire to the south. The main character is right in the middle of this conflict as the game begins with your character about to be executed along side the leader of the rebellion, why you are there is unclear, as you are about to be killed a dragon appears and throws the execution into chaos, you are then free to discover the world of Skyrim and your connection to the dragons and can sway the balance of power in Skyrim towards the imperials or the stormcloaks (summary 2, appears twice)

First person 'dungeons and dragons' role playng (sic) game. Free exploration of a magical realm, including dungeons, caves, castles, and towns. Interact with all elements of the world. Multiple quest lines and embedded stories of revolutions, intrigue, magic, and myth. (summary 3)

The prevalence of the “official” version in course B is interesting in multiple ways: the content and stylistic differences from other summaries, as described in the previous section, become much more salient when this version—with its orientation towards selling the game—becomes the representative one. Also, although this version does arise from the “official” site, that’s not where the metadata creators in course B cited it. Where the text is obtained and where it originates are revealed as two separate elements.

The summaries where metadata creators claim authorship—where they write summaries instead of transcribing them—are rather surprisingly similar to those in course A. Once again, their content focuses on player experience, rather than on game-world atmosphere or specific details. Although summary 2 includes more information than the others, it still focuses on the perspective of the player character and omits proper names of other characters and places (such as Tamriel and Ulfric).

As with the example of the Mood element, this (once again, admittedly simplistic) analysis of the Summary element illustrates how pluralism manifests within this aggregated dataset. Decisions about what kinds of content to include in a summary, and about the style in which that content is presented, surface whether the metadata creator chooses to transcribe the summary or to write it. As with the multiple accounts that emerge for the Mood element of Final Fantasy 7, multiple accounts emerge here for the summary of Skyrim: some focusing on the atmosphere of the game world, others on the details of the gameplay narrative, and others on the player experience. These align with basic source choices: the “official” version (mirrored on the IGN game site), those from fan sources, those from Wikipedia, and those written by metadata creators themselves. By drawing out such distinctions in overall modes of discernment, we can help users make optimal use of the aggregated dataset.

Example 3: The aggregated effects of indeterminacy in resource identification (the Platform element)

This final example discusses the aggregated effects of indeterminacy in the entity being described, as illustrated through the Platform element of the video game metadata schema. Compared to the mood of a game or its summary, platform seems like a much more objective attribute, less dependent on the interpretation of the metadata creator. The variation in this element, however, is just as significant as in the previous two examples. Because video games are complex born-digital works with many versions, determining what information to include for the Platform element requires significant judgment. Once again, I use Skyrim to illustrate this example, because it is available for multiple platforms and demonstrates this case well.

Instructions for the Platform element in the documentation for version 2.1 of the video game metadata schema are:

Definition: The hardware and operating system on which the game was designed to be played.

Instruction: Transcribe the platform(s) for which the game is made as it appears on the CSI. If no platform information is readily available from CSI, enter the value as “unknown”.

Examples: Playstation 3, XBOX 360, Nintendo 3DS, Android 4.4 KitKat, Apple iOS, PC Windows XP, Mac OS X

For course A, price and platform information for Skyrim was described as follows:

This set of values reveals a number of complexities. Version 2.1 of the video game metadata schema does not specify a level of abstraction with which to identify resources (for example, as analogous to the work, expression, manifestation, and item set of related entities in Functional Requirements for Bibliographic Records, or FRBR). In describing the development of the 1.0 version of the schema, Lee et al. (2013) note that the schema elements imply a level corresponding to that of manifestation (because, for example, the schema emphasizes the “box” as the chief source of information, or CSI). But many games have some versions that can be downloaded directly from a Web site, while others may be sold on storage media in a box; information that pertains to all versions may nonetheless be collocated on a single Web page—also acceptable as a CSI, according to the schema documentation. Moreover, many elements of the 2.1 version of the video game metadata schema (such as Mood and Summary) appear to be applicable at the level of expression or work. Given such circumstances, individual metadata creators may reasonably determine (consciously or unconsciously) that the schema implies expression-level description, because that level of abstraction seems useful and natural. These metadata creators would include all available platforms in this element. Other metadata creators may determine (again, consciously or unconsciously) that the schema implies description at varying levels of abstraction for different elements, because that seems useful and natural. These metadata creators might include either those platforms available via a single distribution method (for example, only for the Xbox 360, which is available in a box) or multiple platforms (all versions available through different methods of distribution). Yet other metadata creators may infer that the schema implies description at a consistently lower level of abstraction, similar to that of manifestation.

The other aspect of judgment with the Platform element involves its definition as “the hardware and operating system on which the game was designed to be played.” For some platforms, the operating system and hardware are, or appear to be, completely integrated (Xbox 360, Playstation 3). For computers, the operating system and hardware are more independent of each other; however, this depends upon the detail in which one refers to the operating system: a Windows PC describes both hardware and operating system, just at a lower level of detail than a Windows 7/XP/Vista PC. (The schema documentation implies that both of these strategies are acceptable, as it includes examples both with and without version information for the operating system.) Moreover, while it is possible to have a PC that runs a different operating system than Windows (Linux, for example), it is also not unreasonable for a metadata creator to assume that “PC” implies “Windows” or that “Windows” implies a “PC”—this is consistent with the way that Xbox 360 and Playstation 3 are expressed.
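If one did want to reconcile such variants for a particular use, the equivalences just described could be encoded explicitly. The alias table below is a hypothetical sketch; deciding its entries (for example, whether “PC” should map to “Microsoft Windows”) is itself an interpretive act, not a neutral correction.

```python
# Hypothetical normalization table for Platform values, encoding the
# assumption (discussed above) that "PC" implies "Windows".
PLATFORM_ALIASES = {
    "pc": "Microsoft Windows",
    "personal computer (pc)": "Microsoft Windows",
    "windows": "Microsoft Windows",
    "microsoft windows": "Microsoft Windows",
    "xbox 360": "Xbox 360",
    "playstation 3": "PlayStation 3",
}

def normalize_platforms(value):
    """Split a raw Platform value on common delimiters and map each
    part to a preferred form, keeping unrecognized parts as-is."""
    parts = [p.strip() for p in value.replace(";", ",").split(",") if p.strip()]
    return sorted({PLATFORM_ALIASES.get(p.lower(), p) for p in parts})

print(normalize_platforms("Windows; Playstation 3; XBox 360"))
```

Note that even this small sketch flattens genuine distinctions (it collapses manifestation-level values like “Personal Computer (PC)” and “Microsoft Windows” into one), which is precisely the kind of interpretive loss this section argues we should attend to.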

The interpretive flexibility exhibited in the Platform element appears less valuable, on the surface, than the diverse modes of discernment at play with the Mood and Summary elements. Indeed, it seems easy to dismiss the variation here as evidence of errors that need to be fixed, perhaps by writing more detailed and robust indexing guidelines. In their reflection essays, the students in course A identified differences in resource identification (that is, the level of abstraction for description) as problems that they wanted to eliminate by revising the schema or creating better documentation. Participant A_09 declared, for example, that “the biggest issue...is the uncertainty about the level of description.” Participant A_09’s general proposal, based on the experience of generating metadata with the video game schema and analyzing the aggregated dataset, was that “the more objective the element, the less semantic diversity is allowed,” and other students made similar assessments. Participant A_05, for example, declared that “‘bad’ semantic diversity leads to confusion and is based on uncertainty or error; ‘good’ semantic diversity results in richness and is based in interpretation and perspective.” Many participants made such distinctions between “functional” or “technical” metadata (elements like Platform) and “descriptive” metadata (elements like Mood), asserting that functional or technical elements were more objective and that interpretive flexibility in the implementation of these elements was not valuable or was harmful, while interpretive flexibility of “descriptive” elements was interesting and potentially useful. Such students often suggested revisions to the metadata schema or its documentation to make “mistakes” in the implementation of functional or technical elements less likely to occur.

The range of diversity in more “objective” technical or functional metadata elements was surprising to the students in course A, both in form and in degree. (In contrast, they tended to be surprised that there wasn’t a wider range of diversity in the descriptive elements like Mood, because those elements were more subjective, but also because many participants felt that only video game experts would be able to make such determinations accurately—even those participants who identified themselves as gamers felt like they lacked the appropriate expertise to make sufficiently correct assessments of elements such as Mood.) While the greatest tendency in responding to this surprise was that of A_09 and A_05, to propose stronger regimes of interpretive constraint, there were other reactions as well. Some participants, such as A_12, concluded that, no matter the level of control employed, the implementation of any element would retain some level of unpredictability: “accuracy cannot be guaranteed; in fact, what can be guaranteed is that inaccuracies will occur.” Upon inspection of the aggregated dataset, others found the interpretive differences in both “technical” and “descriptive” metadata to be surprisingly revealing: A_06 wrote that

Rather than becoming frustrated, I felt that reading multiple records formed a more holistic view of the video games in a way that one record could not. Each additional record added another understanding and viewpoint about a video game, and I felt that I had a better understanding of what a game could possibly mean to all its users, and to the potential stakeholders using the metadata schema.

Nonetheless, differences in the level of abstraction for resource identification were bothersome to A_06, who found this ambiguity especially vexing as a metadata creator. It made A_06 and other participants anxious when they could determine multiple reasonable approaches to a single element (such as describing at the expression level or manifestation level). Just as they worried about the level of subject expertise required to make “correct” assessments of descriptive metadata elements, students who identified multiple reasonable approaches to “technical” elements also worried that their choices would not be the “correct” ones. (In fact, however, the aggregated data shows essentially no difference between the ultimate decisions made by metadata creators with varying levels of subject expertise. Gamers and non-gamers did not select different sorts of values for Mood, for example.)

What makes the example of the Platform element so fascinating is that both the concerns of participants like A_09 and A_05 and the enthusiasm of participants like A_06 merit equal consideration. On the one hand, the 1:1 principle, which declares that each description shall apply to a single resource, underlies the Dublin Core Abstract Model and ensures that the object of each metadata “statement” is clearly understood by the metadata creator and user (Powell et al., 2007). When a single record includes elements that refer to an entity at different levels of abstraction, the object of any statement in the record becomes unclear and potentially suspect. On the other hand, as Urban (2014) describes, Dublin Core metadata creators routinely violate the 1:1 principle, just as the participants in this study did. It does not seem productive to dismiss such tendencies as merely the result of pervasive carelessness on the part of ill-trained metadata creators. The diversity of rational, defensible approaches to the Platform element demonstrates what we all know: the identity of digital resources is a complex, unruly phenomenon that can only be partially controlled with better training or better rules. What the example of the Platform element additionally demonstrates is how such “violations” provide compelling evidence of that complexity—evidence that has its own form of value. Precise resource identification for this particular video game (Skyrim) is a contested and difficult issue, and that’s valuable information: if resource identification is challenging for the metadata creators, it is also challenging for the metadata users.

As with the previous two examples, considering the data from course B makes this picture even more vivid. In course B, two of the sets of local guidelines that participants created and used to supplement the video game metadata schema documentation provided additional detail for the platform element. One, created with the digital archives community in mind, directed the metadata creator to describe only the “item in hand.” Another, created with the museum informatics community in mind, directed the metadata creator to list “all available platforms.” The following table shows the resulting metadata created in course B:


Table 1: Platform information for Skyrim, Course B (includes cataloger IDs and the local guidelines used)
Platform | Metadata creator ID | Local guidelines used
PC; PlayStation 3; Xbox 360; PC Download | COURSE B_04 | 01 (public libraries)
Xbox 360 | COURSE B_07 | 01 (public libraries)
Windows; Playstation 3; XBox 360 | COURSE B_05 | 01 (public libraries)
Microsoft Windows | COURSE B_08 | 01 (public libraries)
PC | COURSE B_13 | 01 (public libraries)
Digital Download Microsoft Windows | COURSE B_06 | 02 (digital archives)
PC | COURSE B_12 | 02 (digital archives)
PC | COURSE B_10 | 02 (digital archives)
Xbox 360 | COURSE B_02 | 02 (digital archives)
Microsoft Windows | COURSE B_11 | 02 (digital archives)
Xbox 360 | COURSE B_03 | 02 (digital archives)
PC; PlayStation 3; Xbox 360; PC Download | COURSE B_04 | 03 (museum informatics)
Windows; XBox 360; XBox Live; PlayStation 4; PlayStation 3 | COURSE B_05 | 03 (museum informatics)
Microsoft Windows, Playstation 3, Xbox 360 | COURSE B_12 | 03 (museum informatics)
Xbox 360; PlayStation 3; Microsoft Windows | COURSE B_02 | 03 (museum informatics)
Microsoft Windows | COURSE B_11 | 03 (museum informatics)
Microsoft Windows | COURSE B_06 | 04 (digital humanities)
PC | COURSE B_10 | 04 (digital humanities)
Microsoft Windows | COURSE B_08 | 04 (digital humanities)
Microsoft Windows; Playstation 3; Xbox 360 | COURSE B_07 | 04 (digital humanities)
Xbox 360 | COURSE B_03 | 04 (digital humanities)
Personal Computer (PC) | COURSE B_13 | 04 (digital humanities)

The evidence from course B provides even more justification for an approach that does not seek (or does not only seek) to “fix” the conceptual difficulties attendant upon resource identification in the digital environment, but instead seeks to exploit the different interpretations of this situation in order to facilitate the nuanced understanding of data users and creators. The four sets of local guidelines set forth three approaches: first, to leave interpretation open to the metadata creator (guidelines 01 and 04); second, to constrain resource identification to “the item in hand” (guidelines 02); and third, to constrain resource identification to the set of all available platforms (guidelines 03). Course B’s data exhibits, as we might expect from course A, significant variation in approach when guidelines 01 and 04 are applied. But we also see some variation within the metadata created under guidelines 02 and 03—for example, in what counts as a “platform” and in how many platforms are listed. These interpretive differences—both in the approaches adopted by the various local guidelines and in the ways individual creators applied those guidelines—help to convey a richer understanding of the game, Skyrim, as a complex digital entity with many versions that differ in intersecting ways.
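The grouping that underlies this observation was performed manually, but it can be sketched in a few lines of Python. The records list below is a hypothetical re-encoding of Table 1 (with values transcribed verbatim, since the surface variation is itself the phenomenon of interest), and the grouping logic is my own illustration, not a tool used in the study.

```python
from collections import defaultdict

# Hypothetical re-encoding of Table 1 as (creator ID, guideline, Platform value).
# Spelling and ordering variants are kept exactly as the participants wrote them.
records = [
    ("B_04", "01", "PC; PlayStation 3; Xbox 360; PC Download"),
    ("B_07", "01", "Xbox 360"),
    ("B_05", "01", "Windows; Playstation 3; XBox 360"),
    ("B_08", "01", "Microsoft Windows"),
    ("B_13", "01", "PC"),
    ("B_06", "02", "Digital Download Microsoft Windows"),
    ("B_12", "02", "PC"),
    ("B_10", "02", "PC"),
    ("B_02", "02", "Xbox 360"),
    ("B_11", "02", "Microsoft Windows"),
    ("B_03", "02", "Xbox 360"),
    ("B_04", "03", "PC; PlayStation 3; Xbox 360; PC Download"),
    ("B_05", "03", "Windows; XBox 360; XBox Live; PlayStation 4; PlayStation 3"),
    ("B_12", "03", "Microsoft Windows, Playstation 3, Xbox 360"),
    ("B_02", "03", "Xbox 360; PlayStation 3; Microsoft Windows"),
    ("B_11", "03", "Microsoft Windows"),
    ("B_06", "04", "Microsoft Windows"),
    ("B_10", "04", "PC"),
    ("B_08", "04", "Microsoft Windows"),
    ("B_07", "04", "Microsoft Windows; Playstation 3; Xbox 360"),
    ("B_03", "04", "Xbox 360"),
    ("B_13", "04", "Personal Computer (PC)"),
]

# Collect the distinct Platform strings produced under each set of guidelines,
# making within-guideline interpretive variation visible at a glance.
by_guideline = defaultdict(set)
for creator, guideline, platform in records:
    by_guideline[guideline].add(platform)

for guideline in sorted(by_guideline):
    values = sorted(by_guideline[guideline])
    print(f"Guidelines {guideline}: {len(values)} distinct Platform values")
    for v in values:
        print(f"  {v}")
```

Even this trivial aggregation shows that guidelines 02 (“item in hand”) narrowed, but did not eliminate, the range of distinct Platform values, while guidelines 03 (“all available platforms”) produced as many distinct values as the open-ended guidelines.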

Conclusion

This paper began by observing that it has been difficult to adapt practices regarding metadata generation and aggregation to incorporate our theoretical understanding of information structures (metadata schemas, classification schemes) as contingent, dynamically configured, interpretive devices. Through a brief examination of selected literature, I suggested that this dissonance arises from a conviction that serving information users entails traditional notions of metadata quality, such as accuracy, comprehensiveness, and consistency, notions that become complicated when the interpretive aspects of information systems are foregrounded. I suggested that we might accordingly contemplate (or recover) forms of value associated with particular modes of discernment, in addition to the values of comprehensive system-building. (As related by the historian Ann Blair, such forms of value were appreciated more widely prior to the modern period.)

In the second part of the paper, I related three examples, drawn from data generated through two class projects (A and B), that surfaced different kinds of value from creative, flexible interpretations of a video game metadata schema by student metadata creators. The first example looked at variation in selection, order, and combination in the application of controlled vocabulary terms (the Mood element of the schema). Analysis of this example showed how multiple, distinct accounts of a video game’s mood might be discerned through the implementation of this element (as opposed to a single, chaotic account). The second example looked at the effects of delegating or claiming authorship for the Summary element of the schema, showing how the choice to delegate authorship through a transcription strategy revealed more variation in interpretive decisions than the choice to claim authorship by writing one’s own summary. The third example focused on the Platform element, which appears more “functional” or “technical” (to use the terms of course A participants) than the more “descriptive” Mood and Summary elements. This element displayed a significant level of interpretive flexibility in its implementation, which seemed problematic to many course A participants, as it would to many established researchers and practitioners working with metadata. Nonetheless, the modes of discernment exhibited here also generated forms of value for information users, demonstrating the complex and contested nature of identity for digital games in general, and for Skyrim (the example) in particular.

Although the three examples illustrated different kinds of value emerging from different modes of discernment, all of these forms of value were oriented more towards understanding and using resources (perhaps through more informed selection between items, or through other forms of relation) than towards retrieving resources according to precisely delineated characteristics. (Interpretive flexibility makes it less possible to describe resources precisely, but more possible to understand them thoroughly and deeply, and, dare I say, more realistically as well.) This suggests an area of research: using interpretive analyses of the sort demonstrated here to develop and propose metadata quality criteria that facilitate this kind of use for information systems themselves (as opposed to criteria focused on resource retrieval).

It is also noteworthy that these forms of value reveal themselves only when data is aggregated, or when different kinds of decisions are viewed together, in a pluralistic environment. The basic analyses performed here were done manually, with a small dataset. This suggests another area of potential research: to develop various sorts of tools that might facilitate this kind of analysis at scale (for example, techniques for using data mining and visualization to comprehend and similarly interpret larger datasets). Through such efforts, those of us who take a humanistic approach to information studies can demonstrate alternate goals and uses for the mechanisms associated with “data science.”
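One small ingredient of such a tool might be a normalization step that lets surface variants be counted together while the original forms are retained for interpretive inspection. The following is a minimal sketch under my own assumptions (the tokenize helper and the sample values are hypothetical illustrations, not part of the study's method):

```python
import re
from collections import Counter

def tokenize(platform_field: str) -> list[str]:
    """Split a compound Platform value on common delimiters and fold case,
    so that variants like 'XBox 360' and 'Xbox 360' count as one token."""
    parts = re.split(r"[;,]", platform_field)
    return [p.strip().lower() for p in parts if p.strip()]

# Hypothetical sample drawn from the kinds of values seen in Table 1.
sample = [
    "Windows; Playstation 3; XBox 360",
    "Xbox 360; PlayStation 3; Microsoft Windows",
    "Microsoft Windows, Playstation 3, Xbox 360",
]

# Frequency of normalized platform tokens across the aggregated records:
# the same consoles surface in all three records despite differences in
# spelling, delimiter, and ordering.
counts = Counter(token for field in sample for token in tokenize(field))
print(counts.most_common())
```

At scale, the interesting analytical work would happen on top of such normalization: for example, visualizing where creators diverge (Windows vs. PC vs. Microsoft Windows) rather than flattening those divergences away.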

Just as importantly, this paper suggests how we in information studies might reconcile our convictions with our practices. If we can understand and characterize the mechanisms through which the editorial discernment of metadata creators produces different kinds of value in aggregated datasets, we can, perhaps, develop information systems and tools that work with our theoretical commitments, rather than against them.

About the author

Melanie Feinberg is an associate professor at the School of Information and Library Science (SILS) at the University of North Carolina at Chapel Hill. She can be contacted at mfeinber@unc.edu.

References

How to cite this paper

Feinberg, M. (2017). The value of discernment: making use of interpretive flexibility in metadata generation and aggregation. Information Research, 22(1), CoLIS paper 1649. Retrieved from http://InformationR.net/ir/22-1/colis/colis1649.html (Archived by WebCite® at http://www.webcitation.org/6oVlvL4Yz)
