Steven E. Jones
Cyberspace is everting, as author William Gibson has repeatedly said, turning inside out and leaking out into the physical world. When he coined the term in the early 1980s cyberspace was a metaphor for the global information network, but in the decade that followed, it made a material difference in technology and culture and in the perceived relation between the two. Now, as Gibson and others have recently noted, the term has started to fray around the edges, has begun to sound quaintly archaic and to fade from use.1 In one sense, Gibson is just overwriting his earlier metaphor (cyberspace) with a new one (eversion). As he has a character say in the 2007 novel Spook Country, there never was any cyberspace, really. It was just a way of understanding the culture’s relation to networked technology (Gibson, 64).
But I think the new term, eversion, articulates something significant about a recent shift in the collective understanding of the network: from a world apart to a part of the world, from a transcendent virtual reality to a ubiquitous grid of data that we move through every day.2 We can roughly date the shift—or at least the widespread dawning recognition of it—to 2004–2008. At that moment the quintessential virtual world, Second Life, peaked and began to decline in terms of number of users and the publicity surrounding it (Heath and Heath). At the same time, Nintendo’s motion-control Wii was introduced in 2006, helping to usher in the era of mixed-reality casual gaming. So-called Web 2.0 social-network platforms, especially Facebook, were first introduced in 2004–2005, but came into their own, reaching a mass user base in 2006–2007. These platforms depended on the massive increase in the use of mobile technologies at the time. Apple’s iPhone was previewed in 2006 and introduced in January 2007; the Android OS followed later that year. William Gibson’s novel Spook Country, in which he first articulated the eversion of cyberspace, was published early in 2007. Set in 2006, its story is based on the confluence of augmented reality, locative art, viral marketing, pervasive surveillance, and the security state in the wake of 9/11 and the wars in Afghanistan and Iraq. Characters in the novel execute works of art (and engage in one direct-action protest) by leveraging the cellular data networks, GPS satellite data, and the mobile and wireless web to tag or annotate the physical world, overlaying locations with data of various kinds, including 3D artistic visualizations. (Everyone in the book still flips their cellphones open and closed, however, rather than poking at a multitouch interface, a telling detail that dates the writing to the just-pre-iPhone era.) 
The novel presents a media landscape in which the mundane has triumphed over the transcendent, but it is a mundane with a difference, and the difference is networked data. There is no cyberspace out there, because the network is down here, all around us.
The condition Gibson writes about corresponds to a shift noted by a number of media studies specialists working in different disciplines, what Katherine Hayles has identified as a fourth phase in the history of cybernetics, a shift from “virtuality” to “mixed reality,” to “environments in which physical and virtual realms merge in fluid and seamless ways” (“Cybernetics,” 147–48). In 2006, Adam Greenfield used terms much like Gibson’s to describe what he called “everyware,” ubiquitous or pervasive computing that, as Greenfield says, offers a radical alternative to “immersing a user in an information-space that never was”—and amounts to “something akin to virtual reality turned inside out” (Everyware, 73). More recently, Nathan Jurgenson has argued against “digital dualism,” the fallacy that “the digital and the physical are separate,” asserting instead that “the digital and physical are increasingly meshed” in augmented reality (“Digital Dualism versus Augmented Reality”). These observations by authors with very different perspectives reflect a broader cultural change whose effects we are still experiencing, a multiplatform shift in the nature of our relation to networked technologies. It is not that (to borrow from Virginia Woolf) on or about December 2006 the character of the network changed. Nothing that sudden and clear-cut took place. But I do think that between about 2004 and 2008, the cumulative effect of a variety of changes in technology and culture culminated in a new consensual imagination of the role of the network in relation to the physical and social world. The network was everting.
And at about that same moment, the digital humanities rather suddenly achieved a new level of public attention, emerging out of a decades-long tradition of humanities computing and marked by the term “digital humanities” itself—which was used in the title of a prominent collection published in 2004 and reached a kind of critical mass, in terms of public awareness and institutional influence, between 2004 and 2008.3 While the earlier established practices of humanities computing continued, the new-model digital humanities emphasized, for example, the analysis and visualization of large datasets of humanities materials, including what Franco Moretti named “distant reading” (Graphs, Maps, Trees, 1), engaged in coding and building digital tools and websites and archives as well as wearable processors and other devices, and responded to the “spatial turn” (Dear et al., 229, 238) across the disciplines with data-layered “thick mapping” projects.4 It also increasingly turned its attention to new media and, in particular, owed a greater debt than has been fully recognized to video games and game theory. These new practices and areas of interest for computing in the humanities correspond to changes associated with the eversion of cyberspace in the culture at large. In one sense, the new digital humanities is humanities computing, everted.
Digital humanities, in its newly prominent forms, is both a response to and a contributing cause of the wider eversion, as can be glimpsed in the substitution performed at a crucial moment (in titling a collection of essays) from digitized to digital humanities; the intention was to avoid the reductive definition of DH as mere digitization (Kirschenbaum, “What Is Digital Humanities,” 5). The term also reflected a larger change: from implying a separation between the stuff of the humanities—manuscripts, books, documents, maps, works of art of all kinds, other cultural artifacts—and computing, to more of a mixed reality, characterized by two-way interactions between the two realms, physical artifacts and digital media. Instead of only digitizing the archives of our cultural heritage in order to move them out onto the network (though that work continued, of course), many practitioners began to see themselves putting the digital into reciprocal conversation with an array of cultural artifacts, the objects on which humanistic study has historically been based and new kinds of objects, including born-digital artifacts. In new media, this kind of reciprocal interaction between data and artifacts, algorithm and world, has been effectively modeled for decades in video games.
The Eversion of the Network
First, I want to revisit cyberspace. Combining “cybernetics” and “space,” William Gibson coined the term in a 1982 short story, “Burning Chrome” (as an imaginary brand name for a network device set in the 2030s), but it became famous in his 1984 cyberpunk novel Neuromancer. He later said that his vision of cyberspace—a disembodied virtual reality, a transcendent world made up of “clusters and constellations of data. Like city lights receding”—was inspired by watching arcade video game players as they leaned into their machines, bumping the cabinets and hitting the buttons. Gibson—who was not himself a gamer—imagined that the gamers were longing to be immersed in and to disappear into the virtual world on the other side of the screen, longing to transcend the body in physical “meat space” and be uploaded as pure consciousness into the digital matrix of cyberspace.5 Thus Norbert Wiener’s cybernetics, which was etymologically about “steerage” or human control of machines, was mutated to suggest a willing relinquishment of the bodily and the material in order to go to another place, another plane.6 As Katherine Hayles has said, Gibson created cyberspace by “transforming a data matrix into a landscape”—a place apart from the physical world—“in which narratives can happen” (Hayles, How We Became Posthuman, 38). This newly three-dimensional place, which Gibson characterized from the beginning in idealist terms as “a consensual hallucination,” looked like a glowing abstract grid, as seen in the 1982 film TRON, for example, where, as in Plato’s world of Forms, the contingencies of material reality and the body have been burned away, sublimated into green and amber phosphor.
For almost two decades most popular notions of the network were cyberspatial in their underlying assumptions. For example, it was often taken for granted that the ultimate goal of users interfacing with the network was total immersion, meaning the loss of body-consciousness as one disappeared into the digital world on the other side of the screen. Only imperfect technology stood in the way. This assumption owed much to 1980s and 1990s experiments in virtual reality in which a helmet or wraparound goggles replaced the physical sensorium as the user literally buried her head in cyberspace. Some of these early environments were in fact directly inspired by Gibson’s vision of cyberspace. Hayles has said that his novels “acted like seed crystals thrown into a supersaturated solution” (How We Became Posthuman, 35).
But in the first decade of our new century, as I have said, Gibson overwrote his own metaphor, first and most explicitly in 2007’s Spook Country. Twenty-five years after inventing cyberspace, he imagines a journalist, a curator, and a locative artist sitting in a booth in the restaurant of the Standard Hotel in Los Angeles, discussing new media and observing that in 2006 (when the story is set) cyberspace “is everting,” turning inside out and flowing out into the world (Spook Country, 20). The artist dates the beginning of the change from May 1, 2000, when the U.S. government turned off Selective Availability to GPS satellite data, making it available to the general public, not just the military. Google Maps (the API for which was released in June 2005) and improved automobile navigation systems were the most immediate and widely experienced results. In the decade that followed, with the marked increase in the use of mobile devices and other pervasive processors and sensors, a cluster of activities emerged, circulating from artists’ and hackers’ subcultures to mainstream awareness and back again, practices that are still evolving: geocaching, hyperspatial tagging or spatially tagged hypermedia, locative installation art based on augmented reality, all overlapping with a larger trend, the pervasive use of embedded RFID (radio frequency identification) chips and NFC (near-field communication) chips and other markers, such as QR codes, on everyday physical objects. In 2012, Google announced the Google Glass project, an augmented-reality application using glasses containing location-aware networking technology and a heads-up augmented reality display. (The Google Glass prototype was suspended indefinitely in January 2015, at around the same time that Microsoft announced its own HoloLens project.) These developments emerged from work in ubicomp (ubiquitous computing) or the Internet of Things (see Weiser; Greenfield; Sterling).
All involve bringing together the data grid with objects in the physical and social world—not leaving the one behind to escape into the other but deliberately overlayering them, with the expectation that users will experience the data anywhere, everywhere, while moving through the world—and mobility is a key feature of the experience.7 By definition, such technologies afford dynamic hybrid experiences, taking place at the shifting border where digital data continually meets physical reality as the user moves out into and through the world and its objects. In Spook Country, a GIS-trained hacker who facilitates locative art projects explains that once cyberspace everts, “then there isn’t any cyberspace,” that in fact “there never was, if you want to look at it that way. It was a way we had of looking where we were headed, a direction. With the grid, we’re here. This is the other side of the screen. Right here” (64).
The Emergence of the (New) Digital Humanities
It is the process of moving from one dominant metaphor to another, a direction or trajectory, from cyberspace out into the data-saturated world, that characterizes our sometimes tense and ambivalent relationship to technology at the moment. That is why I value the figure of eversion, a term for a complex process of turning. As a metaphor, eversion calls attention to the messy and uneven status of that process, the network’s leaking, spilling its guts out into the world. The process is ongoing, and the results continue to complicate our engagements with humanities archives and new media. It is an often disorienting experience, like looking at a Klein bottle, affording a sense of newly exposed overlapping dimensions, of layers of data and cultural expression combining with the ambient environment via sensors and processors, with obvious attendant risks to privacy and civil liberties. This complex sense of promise and risk applies as well to the changing infrastructural networks of traditional as well as new digital humanities practices. Ian Bogost has challenged the humanities to turn itself outward, toward “the world at large, towards things of all kinds and all scales” (“Beyond the Elbow-patched Playground”). Indeed, that is the general direction of the digital humanities in the past decade, as the infrastructure of humanities practices, from teaching and research to publishing, peer review, and scholarly communication, is increasingly being turned inside out and exposed to the world. In that sense, the larger context of the eversion provides a hidden (in plain sight) dimension that helps to explain what all the fuss is about, as first documented for many outside the field in William Pannapacker’s 2010 declaration in his Chronicle of Higher Education blog that digital humanities was “the next big thing,” or in the coverage of “culturomics” and new digital humanities work in the “Humanities 2.0” series in The New York Times (2010–2011).8
The eversion provides a context as well for some debates happening within the digital humanities. For example, if the eversion coincides with the rise of the digital humanities in the new millennium, the increased emphasis on layerings of data with physical reality can help to distinguish aspects of the new-model digital humanities from traditional humanities computing. The two are clearly connected in a historical continuum, but the changes in the past decade open up a new focus and new fields of activity for digital humanities research. Digital humanities scholars have responded to the eversion as it has happened (and continues to happen). This is reflected on many fronts, including work with (relatively big) data, large corpora of texts, maps linked to data via GIS, and the study and archiving of born-digital and new-media objects. All of this was in the air, as they say, at the very moment the digital humanities emerged into public prominence. Simple juxtapositions are suggestive: Franco Moretti’s influential book, Graphs, Maps, Trees: Abstract Models for a Literary History, was published in 2005, the same year that the Alliance of Digital Humanities Organizations (ADHO) was founded—and the same year the Google Maps API was released. The open-access online journal Digital Humanities Quarterly (DHQ) first appeared in 2007, the year of the iPhone, the publication of Gibson’s Spook Country, and the completion of Kirschenbaum’s Mechanisms (which was published in 2008). The NEH office dedicated to the field and its funding was established in 2008, but this was after a two-year staged development process. That same year, 2008, the first THATCamp “unconference” was sponsored by the Center for History and New Media at George Mason University. These juxtapositions have nothing to do with technological determinism. They are just meant to suggest that the emergence of the new digital humanities is not an isolated academic phenomenon. 
The institutional and disciplinary changes are part of a larger cultural shift, a rapid cycle of emergence and convergence in technology and culture.
Father Roberto Busa, S. J., who is routinely cited as the founder of humanities computing and text-based digital humanities for his work with computerized lexical concordances, wrote in 2004, in his foreword to the groundbreaking Companion to Digital Humanities, that humanities computing “is precisely the automation of every possible analysis of human expression . . . in the widest sense of the word, from music to the theater, from design and painting to phonetics.”9 Although he went on to say that its “nucleus remains the discourse of written texts,” the capaciousness of “every possible analysis of human expression” should not be overlooked, especially in the context of the moment in which it was published (xvi). Rather than divide the methodological old dispensation from the new in ways that reduce both (such as differentiating humanities computing from studies of new media or as merely “instrumental” from more “theoretical” approaches), we would do better to recognize that changing cultural contexts in the era of the eversion have called for changing methods and areas of emphasis in digital humanities research.
In that light, it is clear that some of the newer forms of supposedly practical or instrumental digital humanities, which are central to the new DH, were produced in the first place by younger scholars working with a keen awareness of the developments I am grouping under the concept of the eversion, and with a sense of what these changes meant at the time for various technology platforms of interest to academic humanities. In the era of social networks, casual gaming, distributed cognition, augmented reality, the Internet of Things, and the geospatial turn, one segment of new digital humanities work took a hands-on, practical turn, yes (“more hack, less yack,” as the THATCamp motto goes), but arguably based on theoretical insight as a kind of deliberate rhetorical gesture—a dialectical countermove to the still-prevailing idealisms associated with the cyberculture studies of the 1990s. Much of the practical digital humanities work during the decade that followed, which formed an important core of the newly emergent DH, was undertaken not in avoidance of theory or in pursuit of scientistic positive knowledge or enhanced instrumentality, but against disembodiment, against the ideology of cyberspace. The new digital humanities more often than not worked to question “screen essentialism” (Montfort, “Continuous Paper”), the immateriality of digital texts, and other reductive assumptions, including romantic constructions of the network as a world apart, instead emphasizing the complex materialities of digital platforms and digital objects. 
New digital humanities work—including digital forensics, critical code studies, platform studies, game studies, not to mention work with linguistic data and large corpora of texts, data visualization, and distant reading—is a collective response by one segment of the digital humanities community to the wider cultural shift toward a more worldly, layered, hybrid experience of digital data and digital media brought into direct contact with physical objects, in physical space, from archived manuscripts to Arduino circuit boards.
In this context, the digital humanities looks like a transitional set of practices at a crucial juncture, moving between, on the one hand, old ideas of the “digital” and the “humanities” and, on the other hand, a new mixed-reality humanities, worldly in a complicated way, mediating between the physical artifacts and archives on which humanities discourse has historically been built and the new mobile and pervasive digital networks that increasingly overlay and make those artifacts into “spime”-like things, encountered via multilayered interfaces.10 Gibson remarked in an interview that “the eversion continues to distribute itself.”11 That distribution is inevitably uneven and not always well understood. One job for the digital humanities in the present moment might be consciously to engage with, to help make sense of, and to shape the dynamic process of that ongoing eversion (and its distribution) out in the world at large.
The Example of Video Games
Given the role of games in the history of computing, it should come as no surprise that humanities computing and digital humanities work have involved games and gamelike environments, from early MUDs and MOOs to the experimental Ivanhoe game developed at the University of Virginia (the work of important DH scholars Johanna Drucker, Jerome McGann, Bethany Nowviskie, Stephen Ramsay, and Geoffrey Rockwell, among others), to Matthew Kirschenbaum’s inclusion of video games as among the objects of his digital-forensics approach (2008) and the project on Preserving Virtual Worlds involving Kirschenbaum and others (McDonough et al.). This is not to mention explicit video game studies by specialists in information studies, new media and digital media, or electronic literature, not all of whom always see themselves as working in digital humanities but whose work has unquestionably contributed to the field.
Video games are among the most prominent and influential forms of new media today, and the study of games as new media can be situated at the other end of the spectrum from more traditional text-based humanities computing. But it is important to recognize that continuous spectrum. Games are potentially significant cultural expressions, worthy of study in their own right, and digital humanities approaches, alongside approaches from other fields and disciplines, have much to contribute to that study. But, to turn the relationship around, games are also central to the fundamental concerns of the digital humanities in the present moment on a structural and theoretical level. Video games have much to teach the digital humanities because they are algorithmic, formally sophisticated systems that model in particular ways the general dynamics of the eversion. Games are designed to structure fluid relationships, between digital data and the gameworld, on the one hand, and between digital data and the player in the physical world, on the other hand. A number of fictional works have looked at this crossover aspect of video games, their role as models of the multidimensional relation of data and the world, including David Kaplan and Eric Zimmerman’s short film PLAY (2010), Ernest Cline’s novel Ready Player One (2011), and Neal Stephenson’s novel Reamde (2011), along with theoretical game studies by Jane McGonigal, Ian Bogost (2011), or Mary Flanagan (2009). McGonigal, who is the creator of several of the most influential cross-platform ARGs (Alternate Reality Games)—played collectively across the Internet, phone lines, television and other media, and real-world settings using GPS coordinates to locate clues revealed on websites, on TV, or in film trailers—argues that we should apply the structures of games to real-world personal and social problems (Reality Is Broken). 
As a result, she has been accused of indirectly abetting the “gamification” trend, most notoriously associated at first with Facebook games like Zynga’s Farmville, which critics see as colonizing players’ everyday lives for commercial profit by reductive, exploitative, and addictive games blatantly designed according to principles of operant conditioning (Bogost, “Gamification Is Bullshit” and “Reality Is Alright”). Gamification, Ian Bogost says, is really just a kind of “exploitationware.” But even it can be seen as responding to larger changes in media and culture. It is significant that the underlying premise shared by both McGonigal’s world-saving games and crass gamification—and shared as well by critics of gamification—is that video games are now “busting through to reality” as never before—as developer Jesse Schell said in one notorious talk—crossing over from the gameworld to the player’s real world (“Design outside the Box”). In other words, in its own unwitting way, gamification is yet another sign of the eversion.
Cyberspace was always gamespace in another guise—gamespace displaced. Not only was Gibson inspired by arcade gamers when he came up with the concept, he also interpreted the gamers’ desires in terms of popular misconceptions about the effects and motivations of playing video games, in an example of what Katie Salen and Eric Zimmerman have called the “immersive fallacy,” the assumption that the goal of any new media experience is to transport the user into a sublime and disembodied virtual world. On the contrary, Salen and Zimmerman argue, most gaming has historically taken place at the interface of player and game, the boundary of physical space and gamespace, where heads-up displays, controllers and peripheral devices, and social interactions are part of the normal video game experience. Salen and Zimmerman see a “hybrid consciousness,” a sense of being simultaneously in the gameworld and in physical reality, as the norm, not the supposed “pining for immersion” that many assume is driving the experience (Rules of Play, 458, 451–55). However deeply engaged players become, however riveted their attention, the experience of gameplay has always been more mixed reality than virtual reality. In other words, the relation of gamer to gameworld is more cybernetics than cyberspace, literally more mundane than has been imagined by many, especially many nongamers.
In the past decade, a major development in gaming has borne out this multilayered view of digital media in general and has undermined the cyberspatial ideology of total immersion: what game theorist Jesper Juul calls a “casual revolution.” Nintendo’s Wii console, introduced in 2006, led the way by tapping into the mass market of first-time gamers or nongamers and shifting attention by design from the rendering of realistic, 3D virtual gameworlds to the physical and social space of the player’s living room (Jones and Thiruvathukal). The Wii is all about the mixed-reality experience of using a sometimes kludgy set of motion-control peripherals, connected in feedback loops that turn the living room into a kind of personal area network for embodied gameplay. It is that hybrid space where Wii gameplay takes place—with a TV but also a coffee table in it, and perhaps other people playing along, as well as various peripherals beaming data to and from the console—not some imaginary world on the other side of the screen. When Microsoft’s Kinect appeared in 2010, it was marketed as a gadget-free, transparent version of a somatic motion-control interface. It actually works, however, by taking the sensor system’s gadgets out of the user’s hand (or out from under her feet) and placing them up by the screen, looking back out at the room. In practice, Kinect play is very much like Wii play in its focus on the player’s body moving around in the living room. A flood of hacks and homebrew applications for Kinect have focused on it not as a virtual-reality machine but as a system for connecting digital data and the physical world.
In this regard the Wii and Kinect, and mobile and casual gaming in general, have only reemphasized a fundamental aspect of all digital games. Writing about text-based adventure games and interactive fiction, generically among the earliest examples of computer games, Nick Montfort has said that the two fundamental components of such games are the world model—“which represents the physical environment of the interactive fiction and the things in that environment”—and the parser—“that part of the program that accepts natural language from the interactor and processes it” (Twisty Little Passages, ix). Although he is careful not to extend this model to video games in general, it offers an important general analogy. All computer games are about the productive relationship of algorithmically processed data and imagined world models, which include representations of place (maps, trees) and artifacts (weapons, tools, other inventory). One plays in collaboration or competition with other players, nonplayer characters, or the “artificial intelligence” (in the colloquial sense) that is the overall design of the game, negotiating between the two: data and world. At the same time, one plays from an embodied position in the real physical world. That betweenness is the condition of engaged gameplay, the “hybrid consciousness” that Salen and Zimmerman refer to. Even a game with an apparently immersive gameworld, whether realistically rendered (e.g., Skyrim) or iconically rendered (e.g., Minecraft), is played between worlds, at the channels where data flows back and forth in feedback loops. That is why heads-up displays, representing maps and inventories and stats of various kinds, and other affordances of gaming persist, not to mention discussion boards, constantly revised Wikipedia articles, and other paratextual materials surrounding gameplay, even for games that emphasize the immersive beauties or sublimities of their represented gameworlds. 
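Montfort’s two components can be made concrete with a minimal sketch. Everything below (the rooms, items, commands, and function names) is invented for illustration and drawn from no actual interactive fiction system; it simply shows a world model as a small data structure and a parser that maps typed input onto changes in that model.

```python
# A toy version of Montfort's two components of interactive fiction:
# a world model (rooms, exits, objects) and a parser (a routine that
# accepts the interactor's language and updates the model accordingly).

world = {
    "location": "cave",
    "rooms": {
        "cave": {"exits": {"north": "forest"}, "items": ["lamp"]},
        "forest": {"exits": {"south": "cave"}, "items": []},
    },
    "inventory": [],
}

def parse(command, world):
    """A toy parser: understands only 'go <direction>' and 'take <item>'."""
    words = command.lower().split()
    room = world["rooms"][world["location"]]
    if len(words) == 2 and words[0] == "go" and words[1] in room["exits"]:
        world["location"] = room["exits"][words[1]]
        return f"You move {words[1]} to the {world['location']}."
    if len(words) == 2 and words[0] == "take" and words[1] in room["items"]:
        room["items"].remove(words[1])
        world["inventory"].append(words[1])
        return f"Taken: {words[1]}."
    return "I don't understand that."

print(parse("take lamp", world))  # → Taken: lamp.
print(parse("go north", world))   # → You move north to the forest.
```

Even at this toy scale, the betweenness the paragraph describes is visible: the player acts from the physical world (typed input), the parser processes that language algorithmically, and the world model changes state in response.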
The digital humanities could do much worse than to look to games for examples of complex mixed-reality systems responding to the contingencies of the network at the present moment. It is hard to think of a more widely distributed and widely experienced set of dynamic models of the larger process of eversion than video games. The network does not evert by itself, of course. It is not really turning itself inside out. That requires human agency, actors out in the world, just as games require players and just as digital humanities research requires scholar-practitioners, working in the channels of the eversion, where the data network meets the world in its material, artifactual particulars.
My thanks to peer reviewers for their very helpful comments on an earlier draft of this essay, including Tanner Higgin, Dave Parry, Jentery Sayers, and Claire Warwick, as well as to the participants in my spring 2012 graduate seminar, English 415; the collective Tumblr created for that class is available here: http://networkeverts.tumblr.com.
1. See Shirky (195–96), who echoes Gibson on the term cyberspace and its fading. In a Twitter exchange on November 27, 2011, @scottdot asked, “Who the hell says ‘cyber’-anything anymore?” and Gibson himself responded: “I have said that myself, many times.”
2. By “the network” I deliberately refer to the popular, imprecise notion that combines the World Wide Web and the Internet with interoperating networks, such as cellular data networks and GPS satellites. My subject is the collective cultural imagination of the network in this sense, though with an eye to more precise technological realities.
3. Influential works include Schreibman, Siemens, and Unsworth, A Companion to Digital Humanities; Kirschenbaum, “What Is Digital Humanities,” 3–7, and Kirschenbaum, “Digital Humanities As/Is a Tactical Term,” 417–21; and Svensson, “Humanities Computing as Digital Humanities.”
4. On the term “thick mapping,” a valuable overview, and a useful portfolio of specific projects and approaches, see Burdick et al. See also Ramsay and Rockwell; and Presner, Shepard, and Kawano.
5. The famous description of cyberspace appears in William Gibson’s Neuromancer, 51. In a conversation with Timothy Leary in 1989 that was later edited for Mondo 2000, Gibson suggests that the cyberpunk protagonist of the novel, addicted to cyberspace, has an orgasmic epiphany at the end of the novel, a “transcendent experience” in which he recognizes the body, the “meat,” from which he has been estranged “as being this infinite complex thing.” Intriguingly, Gibson and Leary were discussing the development of a video game based on Neuromancer. Sirius, “Gibson and Leary Audio (Mondo 2000 History Project).”
6. Wiener, “Men, Machines, and the World About.” Vernor Vinge’s novella True Names had imagined an immersive 3D virtual world before Gibson, one that Vinge tellingly called the “Other Plane.” Significantly, it was imagined as a gamespace, in terms of how its most adept users experienced it. A more capaciously imagined 3D virtual world returned a decade later in Neal Stephenson’s Snow Crash, which inspired the developers at Linden Lab to create Second Life in 2003. And Vinge’s more recent Rainbows End is set in a world of augmented reality.
7. On mobile technologies, see Gordon and de Souza e Silva, Net Locality: Why Location Matters in a Networked World; and Farman, Mobile Interface Theory.
8. William Pannapacker, “The MLA and the Digital Humanities,” Brainstorm (blog), Chronicle of Higher Education, December 28, 2009.
9. Roberto Busa, S. J., foreword to Schreibman, Siemens, and Unsworth, A Companion to Digital Humanities, xvi.
10. Spime is Bruce Sterling’s term for a data-enhanced networked object (Shaping Things).
11. William Gibson interviewed by David Wallace-Wells in The Paris Review 197 (Summer 2011): 107–49.
Bogost, Ian. “Beyond the Elbow-patched Playground.” Author’s blog, August 23, 2011. http://www.bogost.com/blog/beyond_the_elbow-patched_playg.shtml.
—. “Gamification Is Bullshit.” Author’s blog, August 8, 2011. http://www.bogost.com/blog/gamification_is_bullshit.shtml.
—. How to Do Things with Video Games. Minneapolis: University of Minnesota Press, 2011.
—. “Reality Is Alright.” Author’s blog, January 14, 2011. http://www.bogost.com/blog/reality_is_broken.shtml.
Burdick, Anne, Johanna Drucker, Peter Lunenfeld, Todd Presner, and Jeffrey Schnapp. Digital Humanities. Cambridge, Mass.: MIT Press, 2012.
Dear, Michael, Jim Ketchum, Sarah Luria, and Doug Richardson, eds. Geohumanities: Art, History, Text at the Edge of Place. New York: Routledge, 2011.
Flanagan, Mary. Critical Play: Radical Game Design. Cambridge, Mass.: MIT Press, 2009.
Gibson, William. “Burning Chrome.” 1982. Reprinted in Burning Chrome. New York: Ace Books, 1986.
—. Interview by David Wallace-Wells. The Paris Review 197 (Summer 2011): 107–49.
—. Neuromancer. New York: Ace Books, 1984.
—. Spook Country. New York: Putnam, 2007.
Gordon, Eric, and Adriana de Souza e Silva. Net Locality: Why Location Matters in a Networked World. Boston: Wiley-Blackwell, 2011.
Greenfield, Adam. Everyware: The Dawning Age of Ubiquitous Computing. Berkeley, Calif.: New Riders, 2006.
Hayles, N. Katherine. “Cybernetics.” In Critical Terms for Media Studies, ed. W. J. T. Mitchell and Mark B. N. Hansen, 145–56. Chicago: University of Chicago Press, 2010.
—. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press, 1999.
Heath, Dan, and Chip Heath. The Myth of the Garage and Other Minor Surprises. New York: Crown Business, 2011.
Jones, Steven E., and George K. Thiruvathukal. Codename Revolution: The Nintendo Wii Platform. Cambridge, Mass.: MIT Press, 2012.
Jurgenson, Nathan. “Digital Dualism versus Augmented Reality,” Cyborgology, February 24, 2011. http://thesocietypages.org/cyborgology/2011/02/24/digital-dualism-versus-augmented-reality/.
Juul, Jesper. A Casual Revolution: Reinventing Video Games and Their Players. Cambridge, Mass.: MIT Press, 2010.
Kaplan, David, and Eric Zimmerman. PLAY, 2010, video. https://www.youtube.com/watch?v=8nWlR_LmCGc.
Kirschenbaum, Matthew. “Digital Humanities As/Is a Tactical Term.” In Debates in Digital Humanities, ed. Matthew K. Gold, 417–21. Minneapolis: University of Minnesota Press, 2012.
—. Mechanisms: New Media and the Forensic Imagination. Cambridge, Mass.: MIT Press, 2008.
—. “What Is Digital Humanities and What’s It Doing in English Departments.” In Debates in Digital Humanities, ed. Matthew K. Gold, 3–11. Minneapolis: University of Minnesota Press, 2012.
McDonough, J., R. Olendorf, M. Kirschenbaum, K. Kraus, D. Reside, R. Donahue, A. Phelps, C. Egert, H. Lowood, and S. Rojo. Preserving Virtual Worlds Final Report, December 20, 2010. https://www.ideals.illinois.edu/handle/2142/17097.
McGonigal, Jane. Reality Is Broken: Why Games Make Us Better and How They Can Change the World. New York: Penguin Press, 2011.
Montfort, Nick. “Continuous Paper: The Early Materiality and Workings of Electronic Literature.” MLA 2004, Philadelphia. http://nickm.com/writing/essays/continuous_paper_mla.html.
Moretti, Franco. Graphs, Maps, Trees: Abstract Models for a Literary History. London and New York: Verso, 2005.
Presner, Todd, David Shepard, and Yoh Kawano. HyperCities: Thick Mapping in the Digital Humanities (metaLABprojects). Cambridge, Mass.: Harvard University Press, 2014.
Ramsay, Stephen, and Geoffrey Rockwell. “Developing Things: Notes toward an Epistemology of Building in the Digital Humanities.” In Debates in Digital Humanities, ed. Matthew K. Gold, 75–84. Minneapolis: University of Minnesota Press, 2012.
Salen, Katie, and Eric Zimmerman. Rules of Play: Game Design Fundamentals. Cambridge, Mass.: MIT Press, 2004.
Schell, Jesse. “Design outside the Box.” Presentation at Design, Innovate, Communicate, Entertain, February 18, 2010. http://www.g4tv.com/videos/44277/DICE-2010-Design-Outside-the-Box-Presentation/.
Schreibman, Susan, Ray Siemens, and John Unsworth, eds. A Companion to Digital Humanities. New York: Wiley-Blackwell, 2004. http://www.digitalhumanities.org/companion/.
Shirky, Clay. Here Comes Everybody: The Power of Organizing without Organizations. New York: Penguin Press, 2008.
Sirius, R. U. “Gibson and Leary Audio (Mondo 2000 History Project).” Acceler8tor, December 23, 2011. http://www.acceler8or.com/2011/12/gibson-leary-audio-mondo-2000-history-project/.
Stephenson, Neal. Reamde. New York: William Morrow, 2011.
—. Snow Crash. New York: Bantam, 1992.
Sterling, Bruce. Shaping Things. Cambridge, Mass.: MIT Press, 2005.
Svensson, Patrik. “Humanities Computing as Digital Humanities.” DHQ 3.3 (Summer 2009). http://www.digitalhumanities.org/dhq/vol/3/3/000065/000065.html.
Vinge, Vernor. Rainbows End. New York: Tor, 2006.
—. “True Names.” In Binary Star #5, ed. James R. Frenkel. New York: Dell, 1981.
Weiser, Mark. “Ubiquitous Computing,” August 16, 1993. http://www.ubiq.com/hypertext/weiser/UbiCompHotTopics.html.
Wiener, Norbert. “Men, Machines, and the World About” (1954). In The New Media Reader, ed. Noah Wardrip-Fruin and Nick Montfort, 65–72. Cambridge, Mass.: MIT Press, 2003.