David S. Roh
In 2001, I left what would eventually become a lucrative internet startup company in online tutoring services to pursue graduate studies in the humanities. It was not a difficult decision: I had grown tired of the infighting over money within the company, my undergraduate schoolwork had suffered, the commute between my college campus and downtown Los Angeles was shaving years off my life, and finally, although I could not quite articulate it at the time, I had become disenchanted by the frenetic, ahistorical, churn-and-burn pace of invention—that is, the logic of the startup.
Initially, being part of an internet startup was terribly exciting. None of us knew what we were doing exactly, but no matter, we happily made it up as we went along. This was the era of new players such as PayPal and eBay; Google was still relatively fresh; the unholy cyber-libertarian trio of Napster, Scour, and Kazaa had the Recording Industry Association of America (RIAA) in fits; we navigated the streets of Los Angeles using printouts from Yahoo! Maps; and Amazon.com was beginning to expand beyond books. Anything was possible. The internet had passed its version of the Homestead Act, in which intrepid expansionists claimed public land in the United States at no cost, and we were eager to stake a claim: all we needed to do was register a domain, find a host, and build a site. Starting small, my company designed websites for any client willing to entrust its web presence to a few university students, built local intranets and databases, and eventually graduated to more ambitious projects. We took on more partners and employees, leased an office near the Los Angeles County Museum of Art (LACMA) on Miracle Mile, and scheduled meetings with venture capitalists in search of the next hot property.
But the strain of playing the dual roles of undergraduate student and startup member was overtaxing. A few friends ended up dropping out of school to work full time at the company. I came to my own crossroads: I could take a leave of absence from school to invest fully in the company, as two of my partners had already done, or I could focus on finishing my degree and pursue postgraduate studies. I decided to leave the company because I had become dissatisfied with the get-rich-quick mentality it had adopted. We ground through any project or venture that seemed viable, throwing ideas at the wall to see what would stick. There did not seem to be a long-term plan beyond the next project we could slap together and sell off.
In hindsight, I realize that the startup was ruled by a distinctive logic, one that stressed performativity over substance, which manifested in several ways. First, we upheld the startup stereotype, spending days on end in the office, wasting time, eating unhealthy food, and working in fits and starts. To this day, I remain uncertain whether we squatted in the office because we really needed to or because that is what we thought startup employees were supposed to do. (I doubt we were any more efficient than if we had put in a standard workday.) Nevertheless, when potential investors stopped by, we appeared (and smelled) like a startup should: we were young, brash entrepreneurs in an overgrown playground, with empty boxes of Chinese takeout strewn about. Second, our projects rarely worked—or, rather, they worked just well enough to make it through a presentation. The code was ugly and inefficient, and back ends were held together with the computing equivalent of duct tape: the slightest breeze or wrong keyboard combination might cause the entire thing to crash. We compensated by draping a shiny veneer over the program (“skinning”), so that at least the aesthetics were appealing. But none of this really mattered, because as long as a project appeared to work on the surface, we could sell it to another company for completion (or, as was often the case, to be mothballed—ensuring the removal of a potential competitor from the market). We went to industry conventions in bright-orange jumpsuits, handed out flyers from a booth, and offered attendees free swag and limousine rides to hotels in hopes of demonstrating the viability of our product. In other words, we were in the business of selling potential ideas rather than genuine products.
Graduate school, with its promise of long-form study and historicity, offered a safe haven from the race, one where I could marry my interests in technology with literary studies. It was during graduate school that the field of digital humanities enjoyed immense growth. In addition to studies of electronic literature and digitization efforts, increasingly ambitious projects, such as processes for detecting subgeneric trends in textual corpora, began to emerge. But as thrilling as it has been to witness the explosion of DH work, I have been unsettled by an emergent phenomenon that reminds me of my experience during the dot-com bubble: an interface performativity. There is an incentive to build shiny GUIs (graphical user interfaces); infrastructure lives and dies by soft money, prioritizing grantsmanship; and intellectual exchange at conferences and in journals is whittled down to tweets that increasingly lean toward self-promotion. While I understand the appeal of social media immediacy, I wonder whether short-form ripostes are the most effective medium for intellectual exchange. The software projects that seem to capture the public’s imagination are those with attractive graphics or a quantitative or empirical slant. Winning grants has become synonymous with actual DH work, at least in the eyes of administrators and, at times, even of scholars. As for the projects themselves, the situation has taken a turn for the better: grant applications now require specifics on data management, and more conventional outlets for scholarship have begun to engage with both theory and praxis. However, the lack of attention in the past has led to an accumulation of digital detritus.
There are several elements inflating what might be called the DH bubble. One derives from the field’s perceived “sexiness,” for lack of a better term, in both the administrative and the popular imagination. The attraction is understandable: coverage of the field in the New York Times, the Atlantic, and Ars Technica evinces an appeal that the humanities are otherwise often accused of lacking. Digital humanities’ popularity also offers a convenient answer to the question of the future of the humanities, at a moment when English and philosophy majors have been reduced to conservative talking points by politicians looking for easy targets to blame for a dearth of jobs in a struggling economy. Others have already covered why this DH-as-savior model is problematic, and it is worth noting that many digital humanists recoil at the characterization. Whether or not this is a fair charge, the perception remains that digital humanities can somehow justify the humanities’ relevance. It is an untenable position, really, and the associated pressure results in the internalizing of an entrepreneurial, performative logic very reminiscent of the startup logic I encountered years ago. We do not simply study or research digital humanities; we “do” DH. That “doing” comes in the form of colorful data visualizations (statistical graphs, charts, maps, and network graphs), enormous text-mining operations, and programming enterprises, which masks the fact that many of the digital tools used are imperfect, easily manipulated, and quite rudimentary. That elision and concurrent performativity may be due in part to the promise of funds larger than those usually available in the humanities. In a cash-poor environment, humanities departments are understandably attracted to individuals who can win grants and funnel indirect costs back to the university.
But an overreliance on soft money can leave the field vulnerable to the vicissitudes of its availability; without budget lines attached to tenured or tenure-track positions, projects might find themselves in states of neglect.
In 2000, a year before I left the startup world, the internet bubble burst. The investing mania had precluded rationality; investors kept throwing capital at startups without requiring a delivery date, let alone a realistic project plan. But when startups began to fail to deliver, the funding dried up. I fear that something similar might occur in the digital humanities. While a comprehensive understanding of the field eludes the broader academic community, few want to be left behind. The word “digital” is affixed to everything in the humanities, much as “cyber” was in the 1990s, at the dawn of the internet. When digital projects fail to generate revenue or live up to heightened expectations, however, the field risks falling flat. Having lived through the dot-com bubble, I am concerned about the consequences a similar bubble would have for digital humanities as a field.
As I write, I am in the midst of building a DH lab at the University of Utah, which has been very supportive of both the space and a DH program initiative. Invested stakeholders, including the Marriott Library and the Colleges of Humanities, Architecture+Planning, and Fine Arts, have been in conversation with many DH centers, labs, and programs around the country in laying the groundwork for a space on our campus. While I am interested in pursuing projects beyond digital ports of analog material, I am wary of becoming reliant on external grants. My experience suggests that the boom times in digital humanities will not last forever and that the “bust” will create problems as new DH labs and centers continue to appear without commensurate funding. For these reasons, I have tried to be mindful in pursuing projects that are self-sustaining. This means more modest efforts, at least initially, in which the primary cost is my own human capital, rather than commitments to projects requiring teams of support staff, programmers, and career-line faculty whose livelihoods depend on securing external funding. Moreover, I want to ground our projects in a critical discourse that advances conversation. For example, a virtual reality rendering of Borges’s Library of Babel may be illuminating, but it needs to exist within an infrastructure that generates scholarly research if it is to be more than a freestanding digital art object. Finally, I understand that a degree of performativity is expected from these centers, but I would like that performativity to come in the form of humanistic training services for faculty and students, as well as outreach to local communities—in our case, focusing on projects that take advantage of regional archives and Mountain West–specific concerns.
I am hopeful that we can make it work, because several key structural differences between DH centers and startups may turn out to be what protects academia from falling prey to startup logic. First, a DH center or lab does not have the same insatiable thirst for funding as a startup, which is by definition a spare operation driven by self-generated momentum. The startup has to generate revenue within a short span of time to justify its existence; hence, its timeline for projects and viability is much more compressed than that of an academic lab, which is likely integrated into the budget lines of larger entities. Second, as of yet, tenure and promotion in digital humanities do not rest on the number and size of grants awarded and projects completed. Whether this is a favorable or unfavorable matter is outside the scope of this chapter, but in either case, trends in the sciences are instructive: the grant-dependent model has in recent years driven talented researchers away from the field. Pursuing grants is not in itself the problem—many non-DH individuals and centers do the same, of course—but if DH labs or centers are conceived primarily as entrepreneurial ventures that must continually win indirect costs and generate publicity through interface performativity, then we will indeed be headed toward a crash similar to the one experienced by the startup world ca. 2001. I do not doubt that many others have already anticipated a bursting bubble, even if they do not describe it as such. I am hopeful that the field will be able to take control of the narrative so that expectations of our constantly developing “killer apps” begin to subside. To this end, a critical awareness of, and resistance to, a startup logic demanding performativity is a first step.
1. Early efforts by GeoCities at creating internet “neighborhoods” took up the rhetoric of the Homestead Act. As a gerund, “homesteading” came to mean claiming space online to create a “homepage.”
2. For a dot-com bubble postmortem, see Cassidy.
3. I include myself among the guilty.
4. Natalia Cecire notes, “It is no secret that in the past few years many administrators have come to see in digital humanities a potential stimulus package for increasingly underfunded departments like English, history, comparative literature, classics, and so on.”
5. For example, in 2010, Shakespeare Quarterly teamed up with MediaCommons to conduct an open peer review process.
6. This is not necessarily the fault of the field; it may even be endemic, since technology and platforms evolve rapidly. Perhaps it is unrealistic to expect projects to continue versioning if there is no commercial incentive or consistent funding.
7. See Cohen; Hopkins; O’Connell.
8. See Koh; Golumbia.
9. Natalia Cecire nicely captures the tension in digital humanities between theory (“yack”) and method (“hack”), with an examination of the problematic epistemological concerns of a field defined by building and “doing,” which may be reflective of a gendered, industrial mode that is exclusionary. She instead calls for digital humanities to theorize: “Yet in its best version, digital humanities is also the subdiscipline best positioned to critique and effect change in that social form—not merely to replicate it. . . . Surely such a making-sense is called for in this institutionalizing moment, and surely digital humanities itself is up to the challenge of doing it.”
10. Programs such as Voyant, Mallet, and Gephi have gained popularity with practitioners, and they can be quite useful. However, these tools are imperfect and by no means avenues for clean, consistent results. My point is that the illusion of empiricism should be punctured; there is quite a bit of value in their inherent messiness and plasticity.
11. Other models for DH research exist that neatly circumvent the problem of the unreliability of soft money. The Center for Digital Humanities at UCLA, for example, is a retooling of a legacy Humanities Computing office (HumNet) that had its own budget baked into the existing infrastructure. Miriam Posner has spearheaded and facilitated projects that do not necessarily need external funding to get off the ground. (Disclosure: I was once a work-study employee at HumNet, UCLA CDH’s predecessor.) Likewise, the Office of Digital Humanities at Brigham Young University concentrates on services for humanities departments and their students, with particular strengths in linguistics computing, stemming from its humanities computing tradition. Within that structure, BYU has the flexibility to explore non-service-oriented DH projects.
12. The NEH Startup Grant Application asks for its projects to be open source so as to maximize accessibility and interoperability: “Projects developing new software are encouraged to make the software free in every sense of the term, including the use, copying, distribution, and modification of the software. Open-source software or source code should preferably be made publicly available through an online repository such as SourceForge or GitHub. Software should be thoroughly documented to promote its reuse” (9).
Additionally, the data management section requires a plan for sustainability and preservation: “Prepare a data management plan for the project (not to exceed two pages). The members of your project team should consult this document throughout the life of the project and beyond the grant period. The plan should describe how the project team will manage and disseminate data generated or collected by the project. For example, projects in this category may generate data such as software code, algorithms, digital tools, reports, articles, research notes, or websites” (10).
13. The Stanford Literary Lab has an admirable model of publishing “pamphlets” based on their research. The pamphlets narrate the methodological problems, inherent messiness of the data, and preliminary nature of their findings, inviting replication and further research.
14. In “A Generation at Risk,” Ronald Daniels notes that the sciences funding model tends to award intellectually conservative projects to established researchers, which leaves little room for young scientists. Consequently, the field is in danger of losing an entire generation of researchers unless the infrastructure changes.
Cassidy, John. Dot.con: How America Lost Its Mind and Money in the Internet Era. New York: Harper Perennial, 2003.
Cecire, Natalia. “Introduction: Theory and the Virtues of Digital Humanities.” Journal of Digital Humanities 1, no. 1 (2012). http://journalofdigitalhumanities.org/1-1/introduction-theory-and-the-virtues-of-digital-humanities-by-natalia-cecire/.
Cohen, Patricia. “Humanities Scholars Embrace Digital Technology.” New York Times, November 16, 2010, http://www.nytimes.com/2010/11/17/arts/17digital.html.
Daniels, Ronald J. “A Generation at Risk: Young Investigators and the Future of the Biomedical Workforce.” Proceedings of the National Academy of Sciences of the United States of America 112, no. 2 (2015): 313–18.
Golumbia, David. “Death of a Discipline.” Differences 25, no. 1 (January 1, 2014): 156–76.
Hopkins, Curt. “Future U: Rise of the Digital Humanities.” Ars Technica, June 17, 2012, http://arstechnica.com/business/2012/06/future-u-rise-of-the-digital-humanities/.
Koh, Adeline. “A Letter to the Humanities: DH Will Not Save You.” Hybrid Pedagogy. April 19, 2015, http://www.hybridpedagogy.com/journal/a-letter-to-the-humanities-dh-will-not-save-you/.
O’Connell, Mark. “Bright Lights, Big Data.” The New Yorker, March 20, 2014, http://www.newyorker.com/books/page-turner/bright-lights-big-data.