Developing Things: Notes toward an Epistemology of Building in the Digital Humanities
STEPHEN RAMSAY AND GEOFFREY ROCKWELL
The Anxieties of Digital Work
Leave any forum on digital humanities sufficiently open, and those gathered will inevitably—and almost immediately—turn to issues surrounding credit for digital work.
It can sometimes be difficult to determine precisely what “digital work” means in the humanities, and the context in which that term is being applied can differ between scholarly but nonprofessorial positions (“alternative academic,” as it is sometimes called) and the normative concerns of tenure and promotion. Yet despite this, it is clear that the object of anxiety is becoming more focused as time goes by. There might have been a time when study of “the digital” seemed generally dissonant amid more conventional studies within history, philosophy, and literature. But in more recent times, people writing conventional books and articles about “new media” seldom worry that such work won’t count. People who publish in online journals undoubtedly experience more substantial resistance, but the belief that online articles don’t really count seems more like the quaint prejudice of an earlier age than a substantive critique. Increasingly, people who publish things online that look like articles and are subjected to the usual system of peer review need not fear reprisal from a hostile review committee.
There is, however, a large group in digital humanities that experiences this anxiety about credit and what counts in a way that is far more serious and consequential. These are the people—most of whom have advanced degrees in some area of humanistic study—who have turned to building, hacking, and coding as part of their normal research activity. This is the segment of contemporary digital humanities (DH) that invented the terms “humanities computing” and later “digital humanities”—the ones for whom any other common designation (game studies, media studies, cyberculture, edutech) doesn’t make as much sense. They are scholarly editors, literary critics, librarians, academic computing staff, historians, archaeologists, and classicists, but their work is all about XML, XSLT, GIS, R, CSS, and C. They build digital libraries, engage in “deep encoding” of literary texts, create 3-D models of Roman ruins, generate charts and graphs of linguistic phenomena, develop instructional applications, and even (in the most problematic case) write software to make the general task of scholarship easier for other scholars. For this group, making their work count is by no means an easy matter. A book with a bibliography is surely scholarship. Is a tool for keeping track of bibliographic data (like Zotero) scholarship? A literary critical article that is full of graphs, maps, and trees is also scholarship (if, perhaps, a little unusual). Is a software framework for generating quantitative data about literary corpora scholarship? A conference presentation about the way maps mediate a society’s sense of space is unambiguously an act of scholarship. Is making a map an unambiguous act of scholarship?
There have been both passive and active attempts to address these issues by providing guidelines for the evaluation of digital work. In one of the more notable instances of active intervention, the Modern Language Association (MLA) released guidelines for such evaluation. As laudable as such efforts have been—and it should be noted that the MLA began this work over ten years ago—the guidelines themselves often beg the question by encouraging faculty members to “ask about evaluation and support” and to “negotiate and document [their] role” (Modern Language Association). For nontenure-line faculty and staff (e.g., those working in DH research groups and centers), the problem of evaluation is at least theoretically solved by a job description; if it is your job to build things in the context of the humanities, success at that task presumably resolves the question of what counts in terms of evaluation and promotion. Grant funding, too, has functioned in recent years as a form of evaluation. Grants in excess of fifty thousand dollars are quite rare in the humanities; grants exceeding twice that amount are by no means unusual in DH. Review committees (particularly those above the department level) have been more than willing to view cash awards, often with substantial indirect cost requirements, as something that most definitely counts.
But none of this resolves the core anxiety over whether the work counts as scholarship and whether those doing such things are still engaged in humanistic inquiry. People in DH will sometimes point to the high level of technical skill required, to the intellectual nature of the pursuit, and to the plain fact that technical projects usually entail an enormous amount of work. But even as these arguments are advanced, the objections seem obvious. Repairing cars requires a high level of technical skill; the intellectual nature of chess is beyond dispute; mining coal is backbreaking work. No one confuses these activities with scholarship. The authors of this chapter have each made strong claims for building, hacking, and coding as important—and, indeed, definitional—activities within DH. Yet we are aware that, despite enthusiasm for our ideas in some quarters, neither of us has made the case for building as a form of scholarship and a variety of humanistic inquiry. Our purpose in this chapter, therefore, is to work toward a materialist epistemology sufficient to the task of defending building as a distinct form of scholarly endeavor, both in the humanities and beyond. We do not offer particular solutions to the challenge of having work in the digital humanities count in concrete institutional terms.1 Our hope is rather to understand our own practices more fully, with an eye toward strengthening the practical arguments our institutions commonly demand.
Thing Theory: Can DH Things Be Theories?
In December of 2008, Willard McCarty started a conversation on Humanist by asking whether things like the digital artifacts we build are knowledge if they aren’t accompanied by some measure of discourse: “Can any such artefact ever stand for itself wholly without written commentary and explanation?” (McCarty, Humanist). Stan Ruecker responded that he thought “we do have categories of artifacts that both reify knowledge and communicate it” (Ruecker) and quoted Lev Manovich who, at the Digital Humanities 2007 conference, got up and said something to the effect of “a prototype is a theory. Stop apologizing for your prototypes.”2
Manovich’s statement is provocative, in part, because it answers McCarty’s question by eschewing entirely the notion of an accompanying discourse. Prototypes are theories, which is to say they already contain or somehow embody that type of discourse that is most valued—namely, the theoretical. Manovich undoubtedly had in mind not the standard scientific meaning of the word “theory”—an explanation for a set of observations that can predict future observations—but something closer to the way the term is commonly used in the humanities. In the context of history or literary study, “theory” doesn’t predict, but it does explain. It promises deeper understanding of something already given, like historical events or a literary work. To say that software is a theory is to say that digital works convey knowledge the way a theory does, in this more general sense.
Alan Galey and Ruecker went on to claim, in a subsequent article, that “the creation of an experimental digital prototype [should] be understood as conveying an argument about designing interfaces” (405). In this view, certain prototypes are understood to do rhetorically what a theoretical discourse does by presenting a thesis that is “contestable, defensible, and substantive” (412). They made this argument with full awareness of the institutional consequences—namely, that “digital artifacts themselves—not just their surrogate project reports—should stand as peer-reviewable forms of research, worthy of professional credit and contestable as forms of argument” (407). It is the prototype that makes the thesis, not discursive accompaniments (to borrow McCarty’s formulation) like white papers, reports, and peer-reviewed papers. They illustrate their argument with specific examples, including text visualizations like Brad Paley’s TextArc, offering them as graphical interpretations of a text comparable to a critical or interpretative essay.
Galey and Ruecker’s vision of the explanatory power of the experimental prototype recalls the centuries-old practice of demonstration devices “deployed in public lectures to recruit audiences by using artifice to display a doctrine about nature” (Schaffer, 157). The eighteenth-century orrery was not a scientific instrument designed for discovery but simply a tool for showing how the solar system worked—in essence, a rhetorical device that could be used in persuasive performances. Davis Baird makes the case more forcefully in Thing Knowledge where, as the title suggests, he traces the way scientific instruments convey knowledge within the scientific communities that have the training to interpret them. Baird goes further, however, and questions the privilege accorded to the discursive, even accusing us of ignoring the communicative power of the instrumental: “In the literary theater, lacking any arsenal of techniques to understand and advance instrumentation, textual analysis will have free play, while in the instrumental and technological theatre humanists will be relegated to the sidelines, carping at the ethical, social and—following the Heideggerian line of criticism—metaphysical problems of modern science and technology” (xvii). If Baird is right, then “building” may represent an opportunity to correct the discursive and linguistic bias of the humanities. According to this view, we should be open to communicating scholarship through artifacts, whether digital or not. It implies that print is, indeed, ill equipped to deal with entire classes of knowledge that are presumably germane to humanistic inquiry.
The problem with this explanatory approach to digital artifacts is that it doesn’t apply to the most problematic—and, perhaps, the most ubiquitous—category of digital tools: namely, those tools that digital humanists develop for others to use in the ordinary course of their work as scholars. Such tools are celebrated for their transparency or, as Heidegger puts it, for the way they stand—like hammers or pencils—ready to hand. Such tools, far from being employed on the center stage in a performative context, are only noticed when they break down or refuse to work transparently. Such tools don’t explain or argue but simply facilitate. Galey and Ruecker get around this in their discussion by focusing on experimental prototypes, which are not tools meant to be used. They imagine a tool like TextArc to be a visualization tool that makes an argument about interface but not an argument about the text it visualizes. They believe that TextArc’s visualizations “are not really about Hamlet or Alice in Wonderland or its other sample texts; they are about TextArc’s own algorithmic and aesthetic complexity” (419). While this might be true, it almost disqualifies the tool from being transparent or ready to hand. A digital artifact that transparently shows you something else might convey knowledge, but it doesn’t intervene as an explanation or argument; it recedes from view before that which is represented. Where there is argument, the artifact has ceased to be a tool and has become something else. This other thing is undoubtedly worthy and necessary in many cases, but it resolves the question of whether “building is scholarship” by restricting building to the creation of things that, because they are basically discursive, already look like scholarship.
The Digital as a Theoretical Lens or Instrument
A second way to think of digital artifacts as theories would be to think of them as hermeneutical instruments through which we can interpret other phenomena. Digital artifacts like tools could then be considered as “telescopes for the mind” that show us something in a new light. We might less fancifully consider digital artifacts as “theory frameworks” for interpretation, in much the way that Jonathan Culler views Foucault’s theoretical interventions.
As with prototypes, there is a history to this view. Margaret Masterman, in “The Intellect’s New Eye,” an essay in the Times Literary Supplement of 1962, argued that we should go beyond using computers just to automate tedious tasks. Using the telescope as an example of a successful scientific instrument, she argued for a similar artistic and literary use of computing to “see differently.” One recalls, in this connection, Steve Jobs’s early characterization of the personal computer as a “bicycle for the mind” that would (in an ad campaign some twenty years later) allow us to “Think different.” Such analogies, even in their less strident formulations, reinforce the suggestion that digital artifacts like text analysis and visualization tools are theories in the very highest tradition of what it is to theorize in the humanities, because they show us the world differently.
One of us has even argued that visualization tools work like hermeneutical theories (Rockwell). A concordancing tool, for example, might be said to instantiate certain theories about the unity of a text. The concordance imagines that if you gather all the passages in the text that contain a keyword, the new text that results from this operation will be consistent with the author’s intentions. Whatever other work it might perform, it is informed by this basic theoretical position. While we might regard such a position as crude in comparison to more elaborate theories of discourse, it is not hard to imagine tools that instantiate subtler theories deliberately and self-consciously (and perhaps more in keeping with present theoretical preoccupations). Further, digital instruments work a lot faster. Reading Foucault and applying his theoretical framework can take months or years; a web-based text analysis tool can apply its theoretical position in seconds. In fact, commercial analytical tools like Topicmarks tell you how much time you saved (as compared to having to read the primary text).3
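To see how such a theoretical position can live inside a tool, consider a minimal sketch of a concordancer, offered here purely for illustration (it is our own toy in Python, not a reconstruction of any actual tool). The theory is embedded in the operation itself: the code assumes that the passages sharing a keyword, gathered and concatenated, constitute a meaningful new text.

# A minimal keyword-in-context concordancer: an illustrative sketch,
# not a reconstruction of any actual tool. Its "theory" is embedded in
# the operation itself: it assumes that the passages sharing a keyword,
# read together, form a coherent new text.

import re

def concordance(text, keyword, width=40):
    """Return each occurrence of keyword with width characters of
    context on either side."""
    passages = []
    for match in re.finditer(re.escape(keyword), text, re.IGNORECASE):
        start = max(match.start() - width, 0)
        end = min(match.end() + width, len(text))
        passages.append(text[start:end].replace("\n", " "))
    return passages

# Hypothetical usage; hamlet.txt stands in for any source text.
for passage in concordance(open("hamlet.txt", encoding="utf-8").read(), "remember"):
    print(passage)

Every design decision in even so small a program (the fixed context window, the case folding, the flattening of line breaks) is a miniature theoretical commitment about what counts as context and what counts as the same word.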
We are, of course, uncomfortable thinking of theories in this highly utilitarian manner. Yet there is a tradition of philosophical pragmatism in which theories are thought of quite explicitly as tools to be used. As William James famously said, “Theories thus become instruments, not answers to enigmas, in which we can rest” (98). He wanted philosophical theories not just to explain things but to be useful and argued that the way to judge theories was to assess their usefulness or their “cash value.” This instrumental view is argued historically by John Dewey in works like Reconstruction in Philosophy:
Here it is enough to note that notions, theories, systems, no matter how elaborate and self-consistent they are, must be regarded as hypotheses. They are to be accepted as bases of actions which test them, not as finalities. To perceive this fact is to abolish rigid dogmas from the world. It is to recognize that conceptions, theories and systems of thought are always open to development through use…. They are tools. As in the case of all tools, their value resides not in themselves but in their capacity to work shown in the consequences of their use. (145)
Dewey was not a mean instrumentalist. He believed that it was necessary to reconstruct philosophy (and by extension the humanities) so that it could guide action rather than just become the solace of pedants. He wanted theories to be useful instruments for living, not the high mark of scholarship. It is by no means idly speculative to imagine that he would have recognized that computers, by virtue of their ability to automate processes, could thus instantiate theories at work. He almost certainly would not have objected to the idea of a theory packaged as an application that you can “run” on a phenomenon as an automatic instrument (provided we remained open to the idea that such theories might cease to be useful as the context changed).
If highly theorized and self-reflective visions of tools as theories fail to be sufficiently tool-like, one might say that so-called thing theories of the instrumental sort outlined here err in the opposite direction by being insufficiently open about their theoretical underpinnings. A well-tuned instrument might be used to understand something, but that doesn’t mean that you, as the user, understand how the tool works. Computers, with chains of abstraction extending upward from the bare electrical principles of primitive XOR gates, are always in some sense opaque. Their theoretical assumptions have to be inferred through use or else explained through the very stand-in documentation that we are trying to avoid treating as a necessary part of the tool. For Baird, the opacity of instruments isn’t a problem; it is simply part of how scientific instruments evolve in the marketplace. Early instruments might demonstrate their workings (as they certainly did in the case of early computer equipment), but eventually they get boxed up and made available as easy-to-use instruments you can order—effectively installing the user at a level of abstraction far above whatever theoretical claims might lie beneath.
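The point about chains of abstraction can be made concrete with another toy sketch (again ours, in Python, and emphatically not how any real machine is organized): gate-level logic composes into an adder, and a user of the resulting function never encounters the gates beneath it.

# A toy illustration of layered abstraction: addition built up from
# gate-level primitives. A user of add() never sees the XOR and AND
# operations underneath, which is precisely the opacity at issue.

def xor(a, b):
    return (a or b) and not (a and b)

def and_gate(a, b):
    return a and b

def half_adder(a, b):
    # Circuit level: returns a sum bit and a carry bit.
    return xor(a, b), and_gate(a, b)

def add(a, b):
    # "Instrument" level: ripple-carry addition over lists of bits,
    # least significant bit first.
    result, carry = [], False
    for x, y in zip(a, b):
        s, c1 = half_adder(x, y)
        s, c2 = half_adder(s, carry)
        result.append(s)
        carry = c1 or c2
    result.append(carry)
    return result

# 3 + 1 = 4: [True, True] is binary 11 (i.e., 3); [True, False] is 1.
print(add([True, True], [True, False]))  # [False, False, True], i.e., 100 = 4

Even in this fully inspectable toy, someone handed only add() must infer the gates through use; in a real system, the chain of such layers is incomparably longer.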
But the understanding of underlying theoretical claims is the sine qua non of humanistic inquiry. For tools to be theories in the way digital humanists want—in a way that makes them accessible to, for example, peer review—opacity becomes an almost insuperable problem. The only way to have any purchase on the theoretical assumptions that underlie a tool would be to use that tool. Yet it is the purpose of the tool (and this is particularly the case with digital tools) to abstract the user away from the mechanisms that would facilitate that process. In a sense, the tools most likely to fare well in that process are not tools, per se, but prototypes—perhaps especially those that are buggy, unstable, and make few concessions toward usability. One could argue that the source code provides an entry point to the theoretical assumptions of black boxes and that the open-source philosophy to which so many digital humanists subscribe provides the very transparency necessary for peer review. But it is not at all clear that all assumptions are necessarily revealed once an application is decompiled, and few people read the code of others anyway. We are back to depending on discourse.
The Digital as a Theoretical Model
Concern with the source might lead us toward formal definitions of “computation” as put forth in the context of computer science. The most minimal definition of a computer—the one that typically reigns within the rarefied fields of information theory and theory of computation—considers a computer to be any mechanism that transforms information from one state to another. Such terse conceptions aim to be widely inclusive but, like the notion of the tool as prototype, often end up excluding much that we would want to acknowledge in a definition of computing as a sociological activity (whether for DH or for computer science more generally). More expansive definitions that try to allow for a wide range of activities are common. Here is a recent one from an article that begins with the question, “What is the core of computing?” (Isbell, 195):
In our view, computing is fundamentally a modeling activity. Any modeler must establish a correspondence between one domain and another. For the computational modeler, one domain is typically a phenomenon in the world or in our imagination while the other is typically a computing machine, whether abstract or physical. The computing machine or artifact is typically manipulated through some language that provides a combination of symbolic representation of the features, objects, and states of interest as well as a visualization of transformations and interactions that can be directly compared and aligned with those in the world. The centrality of the machine makes computing models inherently executable or automatically manipulable and, in part, distinguishes computing from mathematics. Therefore, the computationalist acts as an intermediary between models, machines, and languages and prescribes objects, states, and processes. (198)
The idea of computing in the humanities as a modeling activity has been advanced before, most notably by Willard McCarty, who notes “the fundamental dependence of any computing system on an explicit, delimited conception of the world or ‘model’ of it” (Humanities Computing, 21). For McCarty, such notions help to establish a critical discourse within DH by connecting humanistic inquiry to the vast literature and philosophy of science, where “modeling has been a standard method for a very long time.”
The question for those who would understand building as a scholarly activity, though, is not whether understanding the world can be reconciled with the terms of humanistic inquiry; clearly, it can. If computers can be enlisted in the task of gaining this understanding, then a failure to welcome methodologies that substantially employ computational methods is surely a reactionary stance. Moreover, since one would expect the results of that methodology to appear as a “visualization of transformations and interactions,” the normal output of computational methods is similarly unproblematic. We may reject the understanding of the world put forth by a humanist using computational methods and even point to the output of those methods as evidence of insufficiency or error; but, if digital humanities is about using computers to provide robust interpretations of the world (however contingent, provisional, and multiple), then it is manifestly not incommensurable with humanistic practice.
The question, rather, is whether the manipulation of features, objects, and states of interest using the language of coding or programming (however abstracted by graphical systems) constitutes theorizing. And here, the nature of the problem of building reveals itself most fully as a kind of category error. To ask whether coding is a scholarly act is like asking whether writing is a scholarly act. Writing is the technology—or better, the methodology—that lies between model and result in humanistic discourse. We have for centuries regarded writing as absolutely essential to scholarship. We esteem those who write much, penalize those who write little, and generally refer to the “literature” when evaluating the state of a discourse. But in each case, we speak metaphorically. We do not mean to propose that the act of putting words on a page is scholarship. We seek, instead, to capture metonymically the quality of the intervention that has occurred as a result of the writing. Scholars conceive the world and represent it in some altered form. That writing stands as the technical method by which this transformation is made is almost beside the point. One recalls, in this context, Marshall McLuhan’s gnomic observation that the “medium is the message”—that the message of any medium or technology is the “change of scale or pace or pattern that it introduces into human affairs” (8).
Yet this analogy falters on a number of points. The act of putting words on a page (or finished works considered in isolation from readers) may not be scholarship in some restricted sense, but the separation between writing, conceiving, and transforming is hardly clear-cut. That one might come to understand a novel or an issue or a problem through the act of writing about it forms the basic pedagogy of the humanities. We assign students writing not merely to provide us with evidence that they have thought about something but rather to have that thinking occur in the first place.
In discussing this issue, then, we may borrow the phrase that inaugurated the philosophical discourse of computing. As Alan Turing proposed “What happens when a machine takes the part of [a human interlocutor] in this game?” as a replacement for “Can machines think?” (434), so may we substitute “What happens when building takes the place of writing?” as a replacement for “Is building scholarship?” The answer, too, might be similar. If the interventions that occur as a result of building are as interesting as those typically established through writing, then that activity is, for all intents and purposes, scholarship. The comparison strikes us as particularly strong. In reactions to the Turing test, one may easily discern a fear of machine intelligence underlying many of the counterarguments. It is neither unfair nor reductionist to suggest that fear of an automated scholarship—an automatic writing—informs many objections to the act of building and coding within the humanities. But even if that were not the case, it would still fall to the builders to present their own activities as capable of providing affordances as rich and provocative as those of writing. We believe that is a challenge that the digital humanities community (in all its many and varied forms) should accept and welcome.
NOTES
1. For those interested in the institutional evaluation of digital work, the Modern Language Association maintains a wiki of advice at http://wiki.mla.org/index.php/Evaluation_Wiki.
2. See also Galey, Ruecker, and the INKE team, “How a Prototype Argues.”
3. Topicmarks, http://topicmarks.com.
BIBLIOGRAPHY
Baird, Davis. Thing Knowledge: A Philosophy of Scientific Instruments. Berkeley: University of California Press, 2004.
Culler, Jonathan. Literary Theory: A Very Short Introduction. Oxford: Oxford University Press, 2000.
Davidson, C. N. “Data Mining, Collaboration, and Institutional Infrastructure for Transforming Research and Teaching in the Human Sciences and Beyond.” CTWatch Quarterly 3, no. 2 (2007). http://www.ctwatch.org/quarterly/articles/2007/05/data-mining-collaboration-and-institutional-infrastructure/.
Dewey, John. Reconstruction in Philosophy. Enlarged ed. Boston: Beacon, 1948.
Galey, Alan, Stan Ruecker, and the INKE team. “How a Prototype Argues.” Literary and Linguistic Computing 25, no. 4 (2010): 405–24.
Heidegger, Martin. Being and Time. Translated by Joan Stambaugh. Albany: State University of New York Press, 1996.
Isbell, Charles L., et al. “(Re)Defining Computing Curricula by (Re)Defining Computing.” SIGCSE Bulletin 41, no. 4 (2009): 195–207.
James, William. “What Pragmatism Means.” In Pragmatism: A Reader, edited by Louis Menand. New York: Vintage, 1997.
Masterman, Margaret. “The Intellect’s New Eye.” Times Literary Supplement, April 27, 1962, 284.
McCarty, Willard. “22.403: Writing and Pioneering.” Humanist Discussion Group. http://www.digitalhumanities.org/humanist/.
———. Humanities Computing. New York: Palgrave, 2005.
McLuhan, Marshall. Understanding Media: The Extensions of Man. Cambridge, Mass.: MIT Press, 1994.
Modern Language Association. “Guidelines for Evaluating Work with Digital Media in the Modern Languages.” http://www.mla.org/guidelines_evaluation_digital.
Paley, Bradford. TextArc. http://www.textarc.org.
Rockwell, Geoffrey. “The Visual Concordance: The Design of Eye-ConTact.” Text Technology 10, no. 1 (2001): 73–86.
Ruecker, Stan. “22.404: Thing Knowledge.” Humanist Discussion Group. http://www.digitalhumanities.org/humanist/.
Schaffer, Simon. “Machine Philosophy: Demonstration Devices in Georgian Mechanics.” Osiris 9 (1994): 157–82.
Turing, Alan. “Computing Machinery and Intelligence.” Mind: A Quarterly Review of Psychology and Philosophy 59, no. 236 (1950): 433–60.