Debates in the Digital Humanities 2023
Chapter 11

Operationalizing Surveillance Studies in the Digital Humanities

Christina Boyles, Andrew Boyles Petersen, and Arun Jacob

Recent public conversations in the United States about the disproportionate police violence experienced by Black people, the spread of misinformation on social media platforms like Facebook and by bad actors like QAnon, and the increased reliance on third-party surveillance devices all demonstrate how physical and digital surveillance systems can be used to cause harm and weaken the fabric of democratic society (Shere and Nurse; Biddle; Ong; Marczak et al.). The brokenness of these systems signals a problem worthy of attention—especially by those in the digital humanities. If we as digital humanists are serious about our commitment to social justice—especially our opposition to discriminatory tools and methodologies—then we must scrutinize our field’s investment in these surveillance technologies and the ways in which they make us and our collaborators vulnerable to undue monitoring and weaponization.

We use the word weaponization to underscore how surveillance systems can exacerbate the violence that results from structural oppression. While marginalized groups—Black and Indigenous communities, women, 2SLGBTQIA+ people, and others—have long experienced the effects of surveillance (Browne; Eubanks; Noble), the field of surveillance studies is relatively new and, like digital humanities, has many origins and influences. Most significantly, the field is responsible for conceptualizing surveillance as encompassing the range of material and digital ecosystems used to track, trace, and monitor human behavior.1 Within this framework, surveillance systems ranging from data brokers and facial recognition software to police and neighborhood watch groups have been exposed as tools of oppression, exacerbating existing inequalities and systematically inscribing biases into technologies (Benjamin). While there has been increasing attention paid to the role of surveillance in the academy, especially as instruction has moved online as a result of the pandemic, those in the digital humanities have not yet fully reckoned with how surveillance systems touch their work.

As users, educators, promoters, and distributors of digital tools, digital humanists are inherently tied to the surveillance ecosystem. Even when we develop and share open-source tools—an oft-promoted method for avoiding commercial offerings anchored in surveillance—we regularly remain linked to surveillance systems through reliance on third-party supporting technologies, such as website analytics through Google Analytics, code base storage in Microsoft’s GitHub, and other remote scripts and stylesheets. As such, we must be particularly aware of how the field is being subsumed by surveillance mechanisms and how we are foisting these same tools onto our colleagues and communities. For example, many digital humanities projects rely on commercial tools such as Amazon Web Services (AWS), Microsoft Azure, ArcGIS, GitHub, Thingiverse, Meta (formerly Facebook), Tableau, Adobe, and more. These tools make higher education particularly vulnerable to the digital surveillance apparatus, as the corporations that make them often encourage scholars, researchers, and educators to extend that apparatus to their collaborators, colleagues, and communities. Even tools that appear to be removed from the surveillance machine—like GitHub, which was purchased by Microsoft in 2018 while retaining the look and feel of a noncommercial product—can place digital humanists, their collaborators, and their communities at risk.

Digital humanists risk further contributing to corporate surveillance by creating digital tools that rely on these risky third-party software systems and by using these and other surveillance technologies in their research and pedagogy. At the same time, digital humanists are uniquely poised to intervene in harmful practices of surveillance because they are often trained to think about the historical, social, political, and cultural context of digital technology as well as its design and implementation. Existing critical approaches in the digital humanities that engage these concerns, such as postcolonial digital humanities, anti-colonial digital humanities, Ethical EdTech, and minimal computing, provide models for how DH might address the ethics and risks posed by surveillance technologies to the field and the academy writ large (Risam; Boulay et al.; Minimal Computing). This chapter offers one such approach by examining how a range of surveillance media intersect with the practice of digital humanities scholarship.

We understand our method as “operationalizing” surveillance studies for the digital humanities, or making visible the often-invisible underpinnings of surveillance within our field, demonstrating how our tools can and do produce harm. In what follows, we first turn to the field of geography for an example of the insidious history of surveillance technology in academia, before unpacking the problematic legacies of some of the technologies that are used in digital humanities scholarship today. In so doing, we lay bare how these tools extend the reach of the surveillance machine, posing significant risks to our work and our communities. We close by acknowledging that while digital humanists are inextricably linked to systems of surveillance, they also have tremendous potential to envision and develop more ethical solutions. We invite you to join us in this pursuit.

Academic Surveillance in/of Mapping Technologies

One strong example of the dangers of surveillance technologies in academia comes from the field of geography. Mapping and geographic information system (GIS) technologies are power-laden forms of knowledge production that help to (re)create the world as much as they represent it through techno-scientific, modernist, and colonial discourses. As those in the field of geography well know, maps both enable and suppress knowledge through the intentional and unintentional elements of the ideological practice of cartography. By contrast, the tools and technologies that we learn and use in DH are often given more attention than the historical and material conditions under which those tools and technologies were developed or first employed. By looking at the corpus of GIS discourse that interrogates the social roots, histories, and implications of GIS technology, we can see a telling example of how a scholarly field has grappled with the implications of its own technologies and methods (Jacob, “Follow the Ho Chi Minh Trail”). More specifically, we learn how the field of geography was co-opted by military and political interests—erasing its engagement with critical social discourse.

GIS endures as an example of the model postwar science, one that blurs the boundaries between theory and praxis, science and engineering, civilian and military uses, and classified and unclassified systems, and that contributes to both economic prosperity and national security. GIS’s ability to cross over into other disciplines as well as to bear fruit in the marketplace meant that it attracted considerable interest and uptake from the commercial sector. The more accurate and precise contouring of the commercial terrain that resulted, in turn, had additional use-value for the military establishment.2

After World War II, the military was actively engaged in formalizing an instrumental model of geography education that folded the tools and practices deployed in the field of combat into the classroom. This period can be best understood as a melting pot: the military, industry, and academia were enmeshing themselves in a triple helix in which ideas, technologies, and techniques freely circulated among the three domains. The key figures involved in the establishment of new governmental bodies maintained their academic affiliations and managed the growth of these domains as interlocutors and power brokers. With the creation of governmental bodies such as the Office of Scientific Research and Development (OSRD) and the Office of Strategic Services (OSS), research in the sciences, the social sciences, and the military was sutured into a single enterprise (Barnes and Farish).

The information architectures that were being developed out of the field of geography were thus primed to serve the military cause. The techniques and technologies that were developed in the Cold War period were directly funded by the Department of Defense in collaboration with various industry partners and academic institutions (Clarke and Cloud). In fact, Cold War–era cartographic systems were only made possible as a result of the stolen German archive of maps from World War II. The U.S. Army geodesist Major Floyd W. Hough and his unit, the HOUGHTEAM, acquired the Nazi archive of geodetic maps from various German technical universities, government institutes, and libraries (G. Miller). These geodetic datasets served as the bedrock of most cartographic systems used to surveil the USSR during the Cold War. The HOUGHTEAM was also able to expatriate Nazi officers of the Reichsamt für Landesaufnahme (RfL), computational staff, and the geodetic scientists and engineers who had worked on the German maps. Once they were in the United States, they continued their work for the U.S. Army Map Service (Hough). In this sense, the HOUGHTEAM’s subsequent work on the European Datum, ED50, which undergirds the global coordinate system known as the Universal Transverse Mercator (UTM), can be seen as having been built on work done by and with the Nazis.

This nefarious connection does not end with the Cold War: These same German maps inform the media architecture for geofencing. A geofence is a virtual perimeter drawn around a real-world geographic area that, when crossed, triggers an action of some kind. Geofencing is used by marketing campaigns to deliver context-appropriate content to consumers, and it is also often deployed by special-interest groups to target vulnerable populations. Geofencing campaigns have been implemented in spaces such as Aboriginal health centers, abortion clinics, addiction treatment centers, courthouses, election polling stations, immigrant and refugee resettlement centers, halfway houses, hospitals, homeless shelters, needle exchange clinics, schools, sexual health clinics, and women’s shelters.3 These campaigns frequently engage in the trafficking of pseudoscience, debunked studies, sensationalized imagery, or other fear-mongering about medical procedures, health care policy, refugee resettlement, and the like, thereby willfully promoting and inciting hatred against targeted groups. The trajectory that connects the militarization of the field of geography to the later use of its technologies by geofencers demonstrates how academic fields can become vectors for surveillance in intended and unintended ways.
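
To make the mechanism concrete, the following minimal sketch shows how a circular geofence check works in principle: a reported device location is tested against a virtual perimeter, and crossing it triggers an action. The coordinates, radius, and triggered action below are hypothetical illustrations, not any vendor's implementation.

    from math import radians, sin, cos, asin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in meters."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371000 * asin(sqrt(a))

    def inside_geofence(device, center, radius_m):
        """Return True when a reported device location falls inside the virtual perimeter."""
        return haversine_m(device[0], device[1], center[0], center[1]) <= radius_m

    # Hypothetical perimeter drawn around a sensitive site, such as a clinic entrance.
    fence_center, fence_radius_m = (40.7128, -74.0060), 150

    if inside_geofence((40.7131, -74.0055), fence_center, fence_radius_m):
        # The "action of some kind" that crossing the fence triggers; in an ad
        # campaign this would be the delivery of targeted content to the device.
        print("Fence crossed: campaign content would be delivered to this device.")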

Academic Surveillance in/of Digital Humanities

The field of digital humanities is not far removed from this cautionary tale. The oft-touted digital humanities origin story of Roberto Busa fails to acknowledge the provenance of punch card technology and the purpose for which it was designed (Jacob, “Punching Holes”). Although punch card technology had been used for processing and tabulating data in census-taking operations since the 1890s, it was innovated on, instrumentalized, and weaponized to execute the race science and surveillance agenda of the Third Reich (Pugh). If we are to acknowledge IBM punch card technology as an essential antecedent of the digital humanities, then we must be cognizant of its problematic history and remain vigilant in defending against future harms.

Echoes of IBM punch card technology are already extending into the digital mediascape. Just as the original punch cards weaponized information about vulnerable communities in order to automate human extermination, facial recognition software developed by Amazon is being used in Immigration and Customs Enforcement (ICE) detention camps (Ho). Ethical concerns regarding the data security and retention of facial image databases are manifold. More importantly, facial recognition software disproportionately fails on—yet is overwhelmingly used against—people of color and other marginalized groups (Campbell, Chandler, and Jones; Algorithmic Justice League; Buolamwini and Gebru). From some of its earliest applications, facial recognition software has been heavily deployed against marginalized communities because of its use in welfare programs, in border security, and within the prison system.

In the digital humanities, facial recognition and computer vision software has largely been used to analyze archival materials—projects that would seem far removed from the high-stakes applications just described. One of the earliest digital humanities projects to apply facial recognition software, University of California Riverside’s 2012 FACES: Faces, Art, and Computerized Evaluation Systems, sought to identify individuals in historic portrait art (B. Miller). Other projects, such as Envisaging the Holy Land: Facial Recognition and Early Photography, have sought to “bring to life the otherwise lost individual faces” (Eckstein). Using a Python script in conjunction with OpenCV—an open-source computer vision library—the Judaica DH lab extracted faces from their collections of historic photos. A similar model was used in pulling portraits from the National Archives of Australia in an effort to “liberate the lives of those who suffered under the restrictions of the White Australia Policy” (Sherratt).
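
Neither project publishes a canonical script here, but the kind of pipeline they describe, detecting faces in a scanned photograph with OpenCV and saving each as a cropped image, can be sketched roughly as follows; the file paths and detector parameters are illustrative assumptions, not those of any particular project.

    import cv2  # the open-source OpenCV computer vision library (pip install opencv-python)

    # A pretrained Haar-cascade face detector that ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    image = cv2.imread("archival_photo.jpg")            # hypothetical scanned document
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # detectMultiScale returns (x, y, w, h) boxes; the parameters trade recall for noise.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(40, 40))

    # Crop and save each detected face, as the archival projects describe doing at scale.
    for i, (x, y, w, h) in enumerate(faces):
        cv2.imwrite(f"face_{i}.jpg", image[y : y + h, x : x + w])

Note that this sketch performs face detection rather than identification; even at that stage, the cropped portraits become a new, recombinable dataset, which is where the consent questions discussed below begin.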

Each of these projects raises ethical questions about the use of facial recognition software—namely, issues of consent. Using computer vision to showcase the individuals most negatively affected by Australia’s “White Australia” policy raises compelling questions and helps bring to light Australia’s racist and exclusionary practices put in place during the twentieth century. While these topics are certainly worthy of scholarly attention, there are ways this project can more actively engage with questions of ethics and privacy. The portraits used for the Real Face of White Australia project were “extracted from a range of government documents using a facial detection script” (Bagnall and Sherratt). Individuals portrayed were required to have their photograph taken in order to temporarily leave Australia and to be allowed reentry. Although these individuals consented to having their photo taken—albeit, at times, providing a forced or coerced form of consent—they and their heirs did not consent to having their images added to a public database or having their images repurposed for other forms of public consumption.

Scholars have developed myriad approaches to working with facial images and computer vision. Some projects—like TwitLit—outline the risks of surveillance technologies on their sites, acknowledging the limitations of their project structure or chosen tool. Building on this model, projects like COVID Black’s “Homegoing” document their use of facial images and provide opt-out options for family members of the deceased. Other projects have adopted postcustodial or noncustodial models in which individuals retain full control over their image or data. Within this framework, an individual retains the rights to their own image while institutions (e.g., universities, libraries, museums, project teams) receive revocable licensing rights. Projects like the South Asian American Digital Archive (SAADA), the Guatemalan National Police Historical Archive, the Measure the Future Library Space Study, the Archivo de Respuestas Emergencias de Puerto Rico (AREPR), and ImaginX en Movimiento are examples of successful projects using this model of risk statements, opt-out options, and postcustodial/noncustodial approaches that make visible the surveillance mechanisms underlying digital projects. Moreover, opt-out options and postcustodial/noncustodial models reject surveillance systems by allowing participants to choose when and how their information is shared.

Pushing back against the use of existing facial recognition technologies, some digital humanities scholars, including Christoph Musik and Matthias Zeppelzauer, recommend avoiding existing computer vision programs and techniques in DH, as the ground truths these algorithms are built on are often insufficient for higher-level DH research questions. Instead, Musik and Zeppelzauer recommend a dynamic “active learning” approach to training computer vision algorithms in DH, in which the researcher takes a more active role in training the algorithm and “her/his needs are directly integrated into the learning process, which replaces the need to define a ground truth explicitly” (59). Others, like Jentery Sayers, question using facial recognition software at all, stating: “While I know digital humanities is often quick to build alternative technologies, amidst these questions is whether computer vision should be used at all for face recognition in and beyond academic work.” In part, current computer vision systems encounter such failures because of the datasets they are trained on and the explicit or inherent biases of the scientists and researchers creating them. Algorithmic training datasets inscribe the values and opinions of development teams, teaching the algorithm a specific way to view gender or race. Justifying their training selections, some development teams have relied on views harking back to craniometry, phrenology, or other debunked pseudosciences, claiming distinct racial faceprint differences in order to justify their theoretical viewpoints (Browne). Terming these built-in prejudices “high-tech racism,” Shoshana Magnet highlights the inability of developmentally flawed systems to serve with the mathematical impartiality we ascribe to algorithmic design.
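
The researcher-in-the-loop training that Musik and Zeppelzauer describe can be illustrated with a minimal uncertainty-sampling loop: the model asks the scholar to label only the items it is least certain about, and those labels are folded back into training rather than being fixed in advance as a ground truth. The data, model, and query budget below are placeholder assumptions for illustration, not their implementation.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    features = rng.normal(size=(500, 16))            # stand-in for image feature vectors
    true_labels = (features[:, 0] > 0).astype(int)   # stand-in for the researcher's judgments

    # Seed the loop with a handful of examples from each class.
    labeled = (np.where(true_labels == 0)[0][:5].tolist()
               + np.where(true_labels == 1)[0][:5].tolist())
    unlabeled = [i for i in range(len(features)) if i not in labeled]

    for _ in range(5):
        model = LogisticRegression().fit(features[labeled], true_labels[labeled])
        # Query the items the classifier is least certain about (probability nearest 0.5).
        probs = model.predict_proba(features[unlabeled])[:, 1]
        queried = [unlabeled[i] for i in np.argsort(np.abs(probs - 0.5))[:10]]
        # In practice the researcher inspects and labels these items; here we reuse true_labels.
        labeled += queried
        unlabeled = [i for i in unlabeled if i not in queried]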

Ecosystems of Surveillance in the Academy

Although digital humanists have pushed back against the use of racializing surveillance in facial recognition tools, the field is now seeing a broader threat emerge: the expanding application of surveillance in educational technologies used across college campuses. Commercial educational technology (EdTech) systems have been hailed as the solution to the twenty-first-century academy, providing students with individualized class experiences, easing faculty workloads, and granting administrators access to a vast wealth of student data. Many EdTech systems, such as D2L, TopHat, Blackboard, and Kaltura, can aggregate student data in bulk, with this data mined to create robust user profiles and to feed predictive learning analytics (Office of Educational Technology). Because these commercial systems are predominantly created and maintained outside the university, they raise significant concerns about the misuse of student data. Current government regulations, such as the Family Educational Rights and Privacy Act (FERPA), often still allow EdTech vendors to aggregate and use student data and metadata, with minimal transparency about how that data is being used and few protections for students looking to opt out and secure their data. Further government efforts to regulate this data, including the U.S. Student Privacy Policy Office, the U.S. Department of Education’s Privacy Technical Assistance Center, and the California Consumer Privacy Act, take steps toward addressing these issues but currently fall short of the overall need.

Covid-19 is elevating higher education’s reliance on educational technologies, turning online education into the predominant mode of educational delivery. With increased demands on instructor time and university infrastructure, digital proctoring tools including Respondus, ProctorU, Proctorio, Examity, and Honorlock have seen exponential growth. Complaints about these programs abound, with the University of Texas at Dallas, California State University, and other institutions receiving student pushback and petitions that compare the programs to spyware and cite concerns over student data privacy (Kelley). Digital humanists also have taken up this call through the #AgainstSurveillance hashtag on Twitter, adding to support from the Electronic Frontier Foundation and other privacy advocates who have noted issues of usability, access, and equity. As digital humanists, we possess both the capability for humanistic inquiry and the literacy to assess the strengths and limitations of technologies; when combined, these skills equip us to advocate for ethical alternatives to institutional, corporate, and EdTech surveillance.

If we fail to intervene in harmful surveillance practices, we risk replicating and iterating on discriminatory systems through new technological innovations. Shea Swauger connects reports of how remote proctoring systems fail for students of color to a longer history of racism: “While racist technology calibrated for white skin isn’t new (everything from photography to soap dispensers do this), we see it deployed through face detection and facial recognition used by algorithmic proctoring systems.” As a result, students of color are disproportionately identified as being noncompliant or cheating—furthering institutional inequality and advancing the aims of racializing surveillance (Kelley). Reliance on these systems is only expected to increase as the Covid-19 pandemic continues and as work-from-home options become more prevalent.

Amid the larger specter of the academy’s investment in surveillance software, many digital humanists have been at the forefront of developing open-source and other student-friendly alternatives to both commercial digital project tools and educational surveillance technologies. Unlike commercial EdTech systems, these alternatives have been largely recognized as being for the good of the university, providing compelling technological offerings for low or no cost. Ranging from scholarly communications platforms like Humanities Commons, CUNY’s Academic Commons, and Manifold Scholarship to digital humanities tools like Omeka and Mukurtu, these technologies have been developed by scholars and institutions that have been able to center openness, usability, and accessibility rather than profit. Mukurtu CMS is a particularly strong example, as it is designed to respect Indigenous knowledge systems and protocols, “empower[ing] communities to manage, share, and exchange their digital heritage in culturally relevant and ethically-minded ways” (“Our Mission”). In doing so, Mukurtu pushes back against racializing surveillance by centering Indigenous voices in the project’s design, development, and implementation, ensuring its platform remains beneficial to the communities by and for whom it was developed.

At the same time, there are many ways the digital humanities can reduce their reliance on surveillance infrastructure. Although secure and reliable in and of themselves, many digital humanities web tools and projects are supported by third-party offerings with questionable privacy standards. Mapping tools pulling from the Google Maps API, cloud computing through services like Microsoft Azure, web analytics support from Google Analytics and Tag Manager, facial recognition and machine-learning projects through Amazon Rekognition and SageMaker, and cloud storage offerings like Dropbox present a range of surveillance and privacy risks, including personal data collection, user web tracking, and the sharing or leaking of data. Although DH tool development teams may intend to avoid the explicit surveillance prevalent in their commercial counterparts, their projects can still be enmeshed in surveillance systems through this dependence on commercial tools.
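
One small, practical response is simply to know what a project page loads. The sketch below, which assumes only the Python standard library and a hypothetical project URL, lists the external hosts from which a page pulls scripts, stylesheets, and other linked assets, making dependencies such as analytics or CDN-hosted resources visible.

    from html.parser import HTMLParser
    from urllib.parse import urlparse
    from urllib.request import urlopen

    class RemoteAssetParser(HTMLParser):
        """Collect the external hosts referenced by a page's script and link tags."""
        def __init__(self):
            super().__init__()
            self.hosts = set()

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            src = attrs.get("src") if tag == "script" else attrs.get("href") if tag == "link" else None
            if src and urlparse(src).netloc:
                self.hosts.add(urlparse(src).netloc)

    page_url = "https://example.org/my-dh-project/"   # hypothetical project page
    parser = RemoteAssetParser()
    with urlopen(page_url) as response:
        parser.feed(response.read().decode("utf-8", errors="replace"))

    print("Remote hosts this page depends on:", sorted(parser.hosts))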

Alongside this reliance on third-party offerings, many leading DH labs and tool providers have failed to implement basic encryption technology: their sites lack secure HTTPS connections, the protocol that encrypts information sent between a user and a website, thereby both safeguarding user information and ensuring the integrity of the data being transferred. Within the DH community, popular resources such as Stanford’s Palladio, MIT HyperStudio (Chronos Timeline), ScholarsLab PRISM, Northwestern’s KnightLab TimelineJS/Soundcite/Storyline, University of Southern California’s Sophie, and the Text Encoding Initiative lack HTTPS connections.4 Although most of these tools were released many years ago, the web domains for each of these tools are active as of publication, many are exceptionally popular, and all were developed by large, well-funded Western institutions. Sustainability is often a challenge for digital projects, but it is imperative that we, as scholars at the intersections of computing and the humanities, ensure we are taking the small steps needed to safeguard our data and our communities. With the simple steps of properly configuring their host servers, acquiring valid security certificates, and avoiding predatory third-party platforms, tool providers and institutions can demonstrate a commitment to their users’ privacy and security.
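
Auditing for that baseline is straightforward. A minimal check, using only the Python standard library and drawing example domains from the tools cited in note 4, verifies whether a host completes a TLS handshake with a valid certificate; the result reflects whatever each site serves at the time the check is run.

    import socket
    import ssl

    def serves_valid_https(host, timeout=5.0):
        """Return True if the host completes a TLS handshake with a valid certificate."""
        context = ssl.create_default_context()  # verifies the certificate chain and hostname
        try:
            with socket.create_connection((host, 443), timeout=timeout) as sock:
                with context.wrap_socket(sock, server_hostname=host):
                    return True
        except (ssl.SSLError, OSError):
            return False

    # Example domains drawn from the tools listed in note 4.
    for host in ["hdlab.stanford.edu", "timeline.knightlab.com", "tei-c.org"]:
        print(host, "serves valid HTTPS" if serves_valid_https(host) else "lacks valid HTTPS")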

By implementing simple changes such as these, DH tool builders and educators can ensure significantly more safety for their users, students, and fellow DH practitioners. We can build better tools, placing privacy and data security on par with usability and design. We can be more aware of the risks associated with many resources, weighing the worth of EdTech and DH tools before using them in our research projects and classrooms. We can push beyond current student and consumer privacy regulation, recognizing such legislation as the General Data Privacy Regulation (GDPR), the California Consumer Privacy Act, and FERPA as privacy baselines instead of ceilings. We can act with a collective and conscious effort, creating an ecosystem of safety and security within DH. Most importantly, we can resist and reject racializing surveillance by adopting practices that protect the most marginalized members of our communities.

Envisioning Anti-Surveillance Futures

As digital humanists, we must prioritize issues pertaining to surveillance and privacy even more strongly. As digital technologies continue to pervade our workplaces, our homes, and our social spheres, it is critical that the field interrogate the ways in which these tools are being deployed and monetized by others. Surveillance studies scholarship reveals that there are a number of stakeholders interested in our data, including governing bodies, corporations, data brokers, communities, and individuals. Their interest is compounded when they can collect the data of marginalized groups in order to increase their own profits. To protect our communities, our collaborators, and the participants in our projects from surveillance risks, we must develop the infrastructure to advise, create, and sustain transformative and effective privacy practices. A handful of projects have already begun this important work, and they serve as powerful models for the future of digital humanities work.

One powerful example is the Sovereign Bodies Institute, which collects data about Missing and Murdered Indigenous Women and Girls (MMIWG). The project is explicitly Indigenous—information about Native communities is collected and maintained by Indigenous scholars. The project director, Annita Lucchesi, notes that “for a long time, research has been used as a colonial tool. They’ve repeatedly come to our communities and said let us tell you about you. For us to say no, we know how to tell our stories. We know how to study and understand what’s happening to our people and to our bodies. There’s some sovereignty in that. It’s a reclamation of power.”

As part of the project design, the Sovereign Bodies Institute does not share information with colonial powers, including settler governments and law enforcement agencies. Lucchesi states that “the FBI and the Canadian national government both asked us for the database. We spent several months traveling around Indian Country and asking folks what do you feel about this? Are you comfortable with this being shared? And the overwhelming majority of the answers were no.” By refusing to share this information with settler authorities, the Sovereign Bodies Institute is preventing its weaponization by colonial entities. By foregrounding the privacy of Indigenous peoples and their families, the Sovereign Bodies Institute prioritizes participant privacy, celebrates Indigenous knowledge systems, and rejects colonial notions of data ownership.

Like the Sovereign Bodies Institute, the Our Data Bodies project seeks to protect marginalized peoples from the misuse of their data. According to their website, the project coordinators are “telling the story of surveillance and data-based discrimination across the United States” (Lewis et al.). To do so, they are collecting stories from residents of Charlotte, Detroit, and Los Angeles to show “how different data systems impact re-entry, fair housing, public assistance, and community development.” By explicitly examining the effects of surveillance within urban centers, the Our Data Bodies project is bringing attention to the ways in which race, ethnicity, and socioeconomic status are tied to our notions of privacy.

Members of the digital humanities community already are giving voice to many of these concerns. One particularly powerful example, also found in this edition of Debates in the Digital Humanities, is the Feminist Data Manifest-NO, which “refuses harmful data regimes and commits to new data futures” (Cifor et al.). The Manifest-NO makes thirty-two explicit refusals that push back against Western knowledge systems and their relationship with data and people. Like the authors of the Manifest-NO, we as the authors of this chapter uphold Eve Tuck and K. Wayne Yang’s framework of refusal, which “provide[s] ways to negotiate how we as . . . researchers can learn from experiences of dispossessed peoples—often painful, but also wise, full of desire and dissent—without serving up pain stories on a silver platter for the settler colonial academy, which hungers so ravenously for them” (Tuck and Yang). Refusing can take many forms: focusing on structural inequalities rather than individual experiences, emphasizing failures to respond to injustices rather than the injustices themselves, and refusing to blindly reproduce the inequalities of the past.

The Manifest-NO—along with other engagements from the field of digital humanities—is in conversation with feminist, intersectional, and Indigenous theories and knowledge systems in ways that are powerful and necessary. The Feminist Data Manifest-NO mobilizes refusal as its organizing principle to critique the harmful and extractive data practices at work in the mediascape and to advocate for critically imagining data futurities. Central to the Manifest-NO is its critical and ethical sensibility of reading popular claims about data science and technology with a hermeneutic of suspicion. We see surveillance as inextricably linked to these arguments, and we build on the work of the Manifest-NO by operationalizing surveillance studies in the digital humanities. The field of digital humanities is uniquely poised to resist co-optation by surveillance technologies—we have the critical knowledge and technical know-how to push back against government and corporate encroachment, to assess the problems with our tools, and to produce new and transformative infrastructures. As such, we, the authors, offer the field our own refusals: We refuse the commercialization of our work. We refuse to sacrifice the privacy of our colleagues and communities for the sake of profit or promotion. We refuse to let our projects propagate systems of surveillance. Instead, we commit to envisioning new futures for the digital humanities—ones that imagine spaces for engagement outside the confines of the surveillance state.

Notes

  1. For more information, see Marx; see also Packer and Reeves.

  2. For more information, see Barnes; Bousquet; Dalton; Farish; and Wilson.

  3. For more information, see Calkin; see also Wray et al.

  4. These tools can be found at http://hdlab.stanford.edu/palladio/; http://hyperstudio.mit.edu/software/chronos-timeline/; http://prism.scholarslab.org/users/sign_in; http://timeline.knightlab.com/; http://sophie2.org/trac/; and https://tei-c.org/.

Bibliography

  1. Algorithmic Justice League. “What Is Facial Recognition Technology?” Accessed August 9, 2022, https://www.ajl.org/facial-recognition-technology.

  2. American Civil Liberties Union. “ICE and Border Patrol Abuses.” Accessed December 20, 2020, https://www.aclu.org/issues/immigrants-rights/ice-and-border-patrol-abuses.

  3. Bagnall, Kate, and Tim Sherratt. “The Real Face of White Australia: Living under the White Australia Policy.” 2010, https://www.realfaceofwhiteaustralia.net/.

  4. Barnes, Trevor J. “Geographical Intelligence: American Geographers and Research and Analysis in the Office of Strategic Services 1941–1945.” Journal of Historical Geography 32, no. 1 (January 2006): 149–68, https://doi.org/10.1016/j.jhg.2005.06.001.

  5. Barnes, Trevor J., and Matthew Farish. “Between Regions: Science, Militarism, and American Geography from World War to Cold War.” Annals of the Association of American Geographers 96, no. 4 (December 2006): 807–26, https://doi.org/10.1111/j.1467-8306.2006.00516.x.

  6. Benjamin, Ruha. Race after Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity, 2019.

  7. Biddle, Sam. “Police Surveilled George Floyd Protests with Help from Twitter-Affiliated Startup Dataminr.” The Intercept. July 9, 2020, https://theintercept.com/2020/07/09/twitter-dataminr-police-spy-surveillance-black-lives-matter-protests/.

  8. Boulay, Nadine, Ashley Caranto Morford, Arun Jacob, Kush Patel, and Kimberly O’Donnell. “Transforming DH Pedagogy.” Digital Studies/Le Champ Numérique 11, no. 1 (2020): 1–43, https://doi.org/10.16995/dscn.379.

  9. Bousquet, Antoine. “Cyberneticizing the American War Machine: Science and Computers in the Cold War.” Cold War History 8, no. 1 (February 2008): 77–102, https://doi.org/10.1080/14682740701791359.

  10. Browne, Simone. Dark Matters: On the Surveillance of Blackness. Durham, N.C.: Duke University Press, 2015.

  11. Buchroithner, Manfred F., and René Pfahlbusch. “Geodetic Grids in Authoritative Maps—New Findings about the Origin of the UTM Grid.” Cartography and Geographic Information Science 44, no. 3 (May 2017): 186–200, https://doi.org/10.1080/15230406.2015.1128851.

  12. Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” In Proceedings of Machine Learning Research 81, Fairness, Accountability, and Transparency, 77–91. Cambridge, Mass.: MLR Research Press, 2018, https://proceedings.mlr.press/v81/buolamwini18a.html?mod=article_inline.

  13. Calkin, Sydney. “Towards a Political Geography of Abortion.” Political Geography 69 (March 2019): 22–29, https://doi.org/10.1016/j.polgeo.2018.11.006.

  14. Campbell, Zack, Caitlin L. Chandler, and Chris Jones. “Sci-Fi Surveillance: Europe’s Secretive Push into Biometric Technology.” The Guardian. December 10, 2020, https://www.theguardian.com/world/2020/dec/10/sci-fi-surveillance-europes-secretive-push-into-biometric-technology.

  15. Cifor, M., P. Garcia, T. L. Cowan, J. Rault, T. Sutherland, A. Chan, J. Rode, A. L. Hoffmann, N. Salehi, and L. Nakamura. Feminist Data Manifest-No. Accessed August 9, 2022, https://www.manifestno.com/.

  16. Clarke, Keith C., and John G. Cloud. “On the Origins of Analytical Cartography.” Cartography and Geographic Information Science 27, no. 3 (January 2000): 195–204, https://doi.org/10.1559/152304000783547821.

  17. Cloud, John. “American Cartographic Transformations during the Cold War.” Cartography and Geographic Information Science 29, no. 3 (January 2002): 261–82, https://doi.org/10.1559/152304002782008422.

  18. Dalton, Craig M. “Sovereigns, Spooks, and Hackers: An Early History of Google Geo Services and Map Mashups.” Cartographica: The International Journal for Geographic Information and Geovisualization 48, no. 4 (December 2013): 261–74, https://doi.org/10.3138/carto.48.4.1621.

  19. Eckstein, Laura. “Envisaging the Holy Land: Facial Recognition and Early Photography.” Judaica DH at the Penn Libraries: Our Projects, https://judaicadh.github.io/work/envisaging-holy-land/.

  20. “Embrace, Extend, and Extinguish.” Wikipedia. Last modified April 27, 2022, https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguish.

  21. Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s, 2018.

  22. Farish, Matthew. “Canons and Wars: American Military Geography and the Limits of Disciplines.” Journal of Historical Geography 49 (July 2015): 39–48, https://doi.org/10.1016/j.jhg.2015.04.012.

  23. Ho, Karen. “Amazon Is the Invisible Backbone of ICE’s Immigration Crackdown.” Technology Review. October 22, 2018, https://www.technologyreview.com/2018/10/22/139639/amazon-is-the-invisible-backbone-behind-ices-immigration-crackdown/.

  24. Hough, Floyd W. “International Cooperation on a Geodetic Project.” Transactions, American Geophysical Union 32, no. 1 (1951): 106, https://doi.org/10.1029/TR032i001p00106.

  25. Jacob, Arun. “Follow the Ho Chi Minh Trail: Analyzing the Media History of the Electronic Battlefield.” IDEAH 2, no. 1 (July 2021), https://doi.org/10.21428/f1f23564.d9c905e8.

  26. Jacob, Arun. “Punching Holes in the International Busa Machine Narrative.” IDEAH 1, no. 1 (May 2020), https://doi.org/10.21428/f1f23564.d7d097c2.

  27. Kelley, Jason. “Students Are Pushing Back against Proctoring Surveillance Apps.” Electronic Frontier Foundation. September 25, 2020, https://www.eff.org/deeplinks/2020/09/students-are-pushing-back-against-proctoring-surveillance-apps.

  28. Lewis, Tamika, Tawana Petty, Mariella Saba, Seeta Peña Gangadharan, Kim M. Reynolds, and Virginia Eubanks. Our Data Bodies. Accessed August 9, 2022, https://www.odbproject.org/.

  29. Lucchesi, Annita. “The Sovereign Bodies Institute: Q&A with Executive Director Annita Lucchesi.” Native News. 2019, https://nativenews.jour.umt.edu/2019/sb-institute/.

  30. Magnet, Shoshana Amielle. When Biometrics Fail: Gender, Race, and the Technology of Identity. Durham, N.C.: Duke University Press, 2011.

  31. Marczak, Bill, John Scott-Railton, Siddharth Prakash Rao, Siena Anstis, and Ron Deibert. “Running in Circles: Uncovering the Clients of Cyberespionage Firm Circles.” Citizen Lab. December 1, 2020, https://citizenlab.ca/2020/12/running-in-circles-uncovering-the-clients-of-cyberespionage-firm-circles/.

  32. Marx, Gary. “Surveillance Studies.” In International Encyclopedia of the Social & Behavioral Sciences. 2nd ed., edited by Neil J. Smelser and Paul B. Baltes, 733–41. Amsterdam: Elsevier, 2015, https://doi.org/10.1016/B978-0-08-097086-8.64025-4.

  33. Mattern, Shannon. “Maintenance and Care.” Places Journal. November 2018, https://doi.org/10.22269/181120.

  34. “Microsoft Completes GitHub Acquisition.” Official Microsoft Blog. October 26, 2018, https://blogs.microsoft.com/blog/2018/10/26/microsoft-completes-github-acquisition/.

  35. Miller, Bettye. “Scholars to Apply Facial Recognition Software to Unidentified Portrait Subjects.” UCR Today. April 25, 2012, https://ucrtoday.ucr.edu/5453.

  36. Miller, Greg. “Behind Enemy Lines: The Untold Story of the Secret Mission to Seize Nazi Map Data.” Smithsonian 50, no. 7 (November 2019): 64–78, https://www.smithsonianmag.com/history/untold-story-secret-mission-seize-nazi-map-data-180973317/.

  37. “Minimal Computing.” Global Outlook::Digital Humanities. Accessed December 20, 2020, https://go-dh.github.io/mincomp/.

  38. Musik, Christoph, and Matthias Zeppelzauer. “Computer Vision and the Digital Humanities.” VIEW: Journal of European Television History and Culture 7, no. 14 (December 2018): 59–72, https://doi.org/10.18146/2213-0969.2018.jethc153.

  39. Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, 2018.

  40. Office of Educational Technology. “Learning Analytics.” Accessed August 9, 2022, https://tech.ed.gov/learning-analytics/.

  41. Ong, Kyler. “Ideological Convergence in the Extreme Right.” Counter Terrorist Trends and Analyses 12, no. 5 (2020): 1–7, https://www.jstor.org/stable/26954256.

  42. “Our Mission.” Mukurtu.org. Accessed December 20, 2020, https://mukurtu.org/about/.

  43. Packer, Jeremy, and Joshua Reeves. “Making Enemies with Media.” Communication and the Public 5, no. 1–2 (March 2020): 16–25, https://doi.org/10.1177/2057047320950635.

  44. Pugh, Emerson W. Building IBM: Shaping an Industry and Its Technology. Cambridge, Mass.: MIT Press, 1995.

  45. Rankin, William. After the Map: Cartography, Navigation, and the Transformation of Territory in the Twentieth Century. Chicago: University of Chicago Press, 2016.

  46. Risam, Roopika. New Digital Worlds: Postcolonial Digital Humanities in Theory, Praxis, and Pedagogy. Evanston, Ill.: Northwestern University Press, 2018.

  47. Saini, Angela. Superior: The Return of Race Science. Boston: Beacon Press, 2020.

  48. Sarwari, Khalida. “Could a Smart Device Catch Implicit Bias in the Workplace?” News@Northeastern. January 29, 2020, https://news.northeastern.edu/2020/01/29/how-about-a-smart-device-that-could-catch-implicit-bias-in-the-workplace/.

  49. Sayers, Jentery. “Computer Vision as a Public Act: On Digital Humanities and Algocracy.” Disrupting the Digital Humanities (blog). January 6, 2016, http://www.disruptingdh.com/computer-vision-as-a-public-act-on-digital-humanities-and-algocracy/.

  50. Shere, Anjuli R. K., and Jason Nurse. “Police Surveillance of Black Lives Matter Shows the Danger Technology Poses to Democracy.” The Conversation. November 14, 2020, https://theconversation.com/police-surveillance-of-black-lives-matter-shows-the-danger-technology-poses-to-democracy-142194.

  51. Sherratt, Tim. “The Real Face of White Australia.” Invisible Australians. September 21, 2011, http://discontents.com.au/the-real-face-of-white-australia/.

  52. Swauger, Shea. “Our Bodies Encoded: Algorithmic Test Proctoring in Higher Education.” Hybrid Pedagogy. April 2, 2020, https://hybridpedagogy.org/our-bodies-encoded-algorithmic-test-proctoring-in-higher-education/.

  53. Teräs, Marko, Juha Suoranta, Hanna Teräs, and Mark Curcher. “Post-Covid-19 Education and Education Technology ‘Solutionism’: A Seller’s Market.” Postdigital Science and Education. July 2020, https://doi.org/10.1007/s42438-020-00164-x.

  54. Tobler, Waldo R. “Analytical Cartography.” The American Cartographer 3, no. 1 (January 1976): 21–31, https://doi.org/10.1559/152304076784080230.

  55. Tobler, Waldo R. “Automation and Cartography.” Geographical Review 49, no. 4 (1959): 526–34, https://doi.org/10.2307/212211.

  56. Tuck, Eve, and K. Wayne Yang. “Unbecoming Claims: Pedagogies of Refusal in Qualitative Research,” Qualitative Inquiry 20 (2014): 811–18, https://doi.org/10.1177/1077800414530265.

  57. 23andMe. Accessed December 20, 2020, https://www.23andme.com/.

  58. Warner, Deborah Jean. “Political Geodesy: The Army, the Air Force, and the World Geodetic System of 1960.” Annals of Science 59, no. 4 (January 2002): 363–89, https://doi.org/10.1080/0003790110044756.

  59. Wilson, Matthew W. New Lines: Critical GIS and the Trouble of the Map. Minneapolis: University of Minnesota Press, 2017.

  60. Winthrop-Young, G. “Drill and Distraction in the Yellow Submarine: On the Dominance of War in Friedrich Kittler’s Media Theory.” Critical Inquiry 28, no. 4 (Summer 2002), https://doi.org/10.1086/341236.

  61. Wray, Tyler B., Ashley E. Pérez, Mark A. Celio, Daniel J. Carr, Alexander C. Adia, and Peter M. Monti. “Exploring the Use of Smartphone Geofencing to Study Characteristics of Alcohol Drinking Locations in High-Risk Gay and Bisexual Men.” Alcoholism: Clinical and Experimental Research. February 25, 2019, https://onlinelibrary.wiley.com/doi/10.1111/acer.13991.

Royalties from the sale of this book will be donated by the editors to the Ricky Dawkins Jr Memorial Scholarship.

Copyright 2023 by the Regents of the University of Minnesota