People, Practice, Power: 2. Reprogramming the Invisible Discipline: An Emancipatory Approach to Digital Technology through Higher Education


Table of Contents
  1. Cover
  2. Half Title Page
  3. Series Title Page
  4. Title Page
  5. Copyright Page
  6. Contents
  7. Introduction | Anne McGrail, Angel David Nieves, and Siobhan Senier
  8. Part I. Beyond the Digital Humanities Center: Historical Perspectives and New Models
    1. Epistemic Infrastructure, the Instrumental Turn, and the Digital Humanities | James Malazita
    2. Reprogramming the Invisible Discipline: An Emancipatory Approach to Digital Technology through Higher Education | Erin Rose Glass
    3. What’s in a Name? | Taylor Arnold and Lauren Tilton
    4. Laboratory: A New Space in Digital Humanities | Urszula Pawlicka-Deger
    5. Zombies in the Library Stacks | Laura R. Braunstein and Michelle R. Warren
    6. The Directory Paradox | Quinn Dombrowski
    7. Custom-Built DH and Institutional Culture: The Case of Experimental Humanities | Maria Sachiko Cecire and Susan Merriam
    8. Intersectionality and Infrastructure: Toward a Critical Digital Humanities | Christina Boyles
  9. Part II. Human Infrastructures: Labor Considerations and Communities of Practice
    9. In Service of Pedagogy: A Colony in Crisis and the Digital Humanities Center | Kelsey Corlett-Rivera, Nathan H. Dize, Abby R. Broughton, and Brittany de Gail
    10. A “No Tent” / No Center Model for Digital Work in the Humanities | Brennan Collins and Dylan Ruediger
    11. After Autonomy: Digital Humanities Practices in Small Liberal Arts Colleges and Higher Education as Collaboration | Elizabeth Rodrigues and Rachel Schnepper
    12. Epistemological Inclusion in the Digital Humanities: Expanded Infrastructure in Service-Oriented Universities and Community Organizations | Eduard Arriaga
    13. Digital Infrastructures: People, Place, and Passion—a Case Study of San Diego State University | Pamella R. Lach and Jessica Pressman
    14. Building a DIY Community of Practice | Ashley Sanders Garcia, Lydia Bello, Madelynn Dickerson, and Margaret Hogarth
    15. More Than Respecting Medium Specificity: An Argument for Web-Based Portfolios for Promotion and Tenure | Jana Remy
    16. Is Digital Humanities Adjuncting Infrastructurally Significant? | Kathi Inman Berens
  10. Part III. Pedagogy: Vulnerability, Collaboration, and Resilience
    17. Access, Touch, and Human Infrastructures in Digital Pedagogy | Margaret Simon
    18. Manifesto for Student-Driven Research and Learning | Chelsea Miya, Laura Gerlitz, Kaitlyn Grant, Maryse Ndilu Kiese, Mengchi Sun, and Christina Boyles
    19. Centering First-Generation Students in the Digital Humanities | Jamila Moore Pewu and Anelise Hanson Shrout
    20. Stewarding Place: Digital Humanities at the Regional Comprehensive University | Roopika Risam
    21. Digital Humanities as Critical University Studies: Three Provocations | Matthew Applegate
  11. Figure Descriptions
  12. Contributors

PART I | Chapter 2

Reprogramming the Invisible Discipline*

An Emancipatory Approach to Digital Technology through Higher Education

Erin Rose Glass

Sleepwalking into surveillance capitalism, which is evolving into data and computation driven authoritarianism, one cool service at a time.

—Zeynep Tufekci (@zeynep) April 26, 2017

One of the things that built Apple II’s was schools buying Apple II.

—Steve Jobs

It is no secret that digital technologies are posing profound questions regarding the protection and advancement of human freedom in a digitally mediated world. In the last several years, a string of highly publicized scandals—such as Edward Snowden’s revelations of the global surveillance program in 2013, the fake news scandal of the 2016 U.S. presidential election, and the Facebook–Cambridge Analytica data scandal reported in 2018—have tempered early enthusiasm for networked digital technology that was predicted to powerfully democratize knowledge, politics, and culture. Today, the democratizing rhetoric often employed by digital companies and boosters (such as Facebook’s mission to “give people the power to build community and bring the world closer together” and Google’s mission to “organize the world’s information and make it universally accessible and useful”) sounds naive if not duplicitous given the way that digital tools (especially those provided by these companies) are used to exploit, surveil, manipulate, and deceive users on a global scale. However, as the treacherous aspects of these tools become increasingly apparent, it is less certain whether and how the academic, whose forms of knowledge production and dissemination are deeply dependent on digital practices, is obligated to respond to them.

In this chapter, I argue that the academic does in fact have the unique responsibility to fight the forms of surveillance and control that are increasingly imposed on individuals by corporate digital technologies. As part of the infrastructure that supports research and teaching, digital technologies are often adopted and evaluated by academics and institutions according to their practical value and professional or community norms. What I want to offer here is an analysis of digital technologies that instead focuses on the broader social and political realities that they reinforce, support, or create. This type of analysis may seem foreign to many academics because of the way higher education has long encouraged (or even taught, if you will) its members to passively accept digital technology within research and learning environments as predominantly natural, neutral, and inevitable. Although the university’s tendency to reinforce technological complacency may be accidental, its effects have been politically disastrous; in teaching students to passively accept black-boxed, company-controlled technologies for research and writing—a technological attitude that they carry into their professional and personal lives—we miss the opportunity to turn them into critical digital citizens capable of demanding and building user-governed digital technologies that might resist market-incentivized surveillance and control. Elsewhere, I have called this unfortunate tendency in higher education the university’s invisible discipline and have linked it to the mass helplessness we see in response to widely reported ethical infringements carried out by large digital technology companies. Here, I discuss what I see as current myths that continue to reinforce the presence of the invisible discipline within higher education and point to promising directions that academics can take to help “reprogram” this discipline toward a more critical and democratic culture of computing.

By academic, I mean anyone whose professional vocation is dedicated to preserving, producing, critiquing, and disseminating knowledge for the good of society regardless of that person’s particular discipline or title. My comments here are specifically directed to those—whether as faculty members, teaching assistants, librarians, or IT workers—who have influence on what technologies are used in support of education, but I hope the arguments are useful to all those who consider themselves invested in the stewardship of knowledge. The academic, of course, like many other users of digital technology, is a social and professional being subject to the same pressures that make social media, search engines, smartphones, collaborative editors, and other forms of digital technology appealing or necessary in their professional and personal activities. Inarguably, these tools have been invaluable for enabling more individuals to participate in and benefit from knowledge production and knowledge-making communities while expanding the variety of forms in which types of knowledge and academic discourse are produced (such as the blog post or the social media discussion). These tools have also provided critical support for progressive movements within the academy today such as collaborative pedagogy, public engagement, and coalition building for groups marginalized within or by the academy.

At the same time, however, the continued use of these tools in the face of these companies’ disregard for democratic principles (I subsequently describe this more fully) represents passivity on the part of the academic community in the shaping of our digital world. If we consider knowledge production and dissemination as a social good, then it seems we should consider the social effects of our academic technology practices as seriously as the intellectual products that they help us produce.

As it stands, the academy has been largely complacent in the formal and informal adoption of digital technologies from companies whose unethical practices, although perhaps less apparent a decade ago, are now regularly front-page international news. Although there is no study on academic perceptions of the politics of academic technology (which I would welcome from academic technology organizations such as the Coalition of Networked Information or EDUCAUSE), this complacency is readily observable in the continued and pervasive use of Google services (from Google Search to Google Docs), Twitter, and Facebook by academics (myself included) for research, teaching, and professional communications.

Such passivity may seem natural and inevitable given that a majority of academics are not in the business of technology making and do not have the skills and resources to create or use alternative tools. However, what I will argue here is that in fact academics (even the most self-proclaimed “technophobes”) have an exciting, unique, and above all necessary role in steering our digital age toward a more democratic future. Furthermore, the digital humanities, as a field that possesses the technical experience, expertise, and community as well as diverse interdisciplinary and cross-professional connections, can play a powerful part in leading the way.

One of the first steps that we can take toward overcoming academic passivity toward digital technology is understanding the perceptions that are often used to justify it. I have observed three general perceptions of digital technology among members of the academy, which I call myths, that contribute to a passive acceptance of unethical forms of digital surveillance and control in academic infrastructure. In the following sections I discuss each myth as a means of advancing three general arguments regarding the academic’s relationship to digital technology. I first argue that political issues related to digital technology are serious and urgent. I then argue that these political issues are relevant to the academic’s vocation. Finally I argue (and hopefully show) that the academic has the power to meaningfully address these issues in his or her own academic practices.

Myth #1: There Is Nothing to Worry About

The first myth I would like to discuss sounds so contrary to the general tenor of the news these days that it may seem hardly worth discussing. However, when I asked the audience during a talk at the Design@Large series at UC San Diego in February 2018 if any of them thought concerns over the political issues of digital companies like Facebook and Google were overblown, a number of hands shot up. Granted, this was only one room of people, and many folks in the room also raised their hands when I asked if anyone did feel concerned about these issues. Nonetheless, I think it is important to directly address the apathy expressed by the first group, especially as it reflects a broader apathy toward these issues embodied in many of our institutional partnerships with these companies. In this apathetic category, I’d like to include not only academics who are genuinely unconcerned about digital surveillance and control but also academics who may in fact be alarmed by these issues but not enough to believe that changes in their personal or institutional technological practices are warranted. Regardless of the origin of this lack of concern, it is often accompanied by the assumption that law, economics, experts, or some assumed form of inevitable progress will sort out any concerning features of these companies in due time without the need for individuals to change their technical practices or make political demands. It is also often mistakenly assumed that the main criticism of these companies is their disregard for conventional privacy norms, which can be dismissed as an obsolescent social value (as claimed by Facebook CEO Mark Zuckerberg in the interest of his company1) or as a personal good that individuals have the right to exchange for the use of digital services on the basis of their own judgment of the risks and benefits.

Privacy infringement, however, is only the tip of the iceberg of the social and political harms caused by the unchecked power of these digital companies. Framing ethical issues in digital technology solely in the context of privacy concerns distracts us from the deeper issue at hand, which is that the ways in which we think, learn, debate, share news and information, shop, socialize, date, and elect political leaders are being shaped by and for private interest largely outside of democratic processes and often at the expense of the public good. The dystopian byproducts of this arrangement are widely documented. Rebecca MacKinnon, José van Dijck, Christian Fuchs, Zeynep Tufekci, and David Lyon demonstrate how digital technologies and platforms enable corporate and state actors to surveil or censor users or shut down services to control political unrest.2 Tiziana Terranova, Trebor Scholz, and Fuchs show how these digital companies exploit the labor of their users, enabling them to grow ever more powerful while crowding out alternatives that might give users more democratic oversight of their digital tools.3 John Cheney-Lippold, Lawrence Lessig, and Alexander Galloway demonstrate how many of these platforms manipulate or control user behavior while Tarleton Gillespie has pointed to the way they can be used to algorithmically control the circulation of information.4 Jean Burgess and Ariadna Matamoros-Fernández argue that digital platforms such as Twitter, YouTube, and Tumblr inadequately suppress abusive activity, and Safiya Noble highlights the racial biases of Google Search and its contribution to reproducing racial inequality.5 There are many more examples of ways these companies violate democratic and ethical principles, but what is perhaps most important to note is that privacy violation is not the sole critical political danger of these digital companies. Privacy violation, or rather the surveillance of personal data that it entails, is merely one of the exploitative mechanisms by which digital companies increase their corporate power while diminishing the agency of their users.

An earnest consideration of these issues should convince even the most enthusiastic supporter of digital media that there is at least room to improve the policies and practices of the most popular and powerful digital media companies. However, even when we acknowledge the need for improvement, it might still be difficult to understand that the issues raised by these critics are reinforced or intensified by digital technology. After all, many of the ills listed here, such as racism, misinformation, behavioral manipulation, and surveillance, existed long before the invention of computing technologies. Nonetheless, as these critics demonstrate, many of these issues are uniquely exacerbated by digital technology. This is not to say that digital technology in and of itself is responsible for intensifying these issues but rather the dominant current forms of digital technology that are shaped by an underlying economic logic. Shoshana Zuboff has called this logic “surveillance capitalism,” which she describes as a form of capital accumulation in which personal data is collected to “predict and modify human behavior as a means to produce revenue and market control.”6 Under this logic, technology is often developed with an eye toward extracting the personal data of users. Maximum extraction, however, requires users to interact with tools as much as possible and not to abandon them for tools produced by competing companies. Thus, this economic logic incentivizes the production of digital technologies that covertly work to control users in ways that maximize user engagement, user data production, and ultimately, company profit.

Some might call this set of techniques good business strategy. I would like to suggest that these strategies can also be usefully understood as enacting forms of digital oppression. Although we might normally associate oppression with political realities that are far more visibly violent, degrading, and controlling of groups of people, Paulo Freire offered a definition of oppression that allows us to see how it can be carried out even in seemingly peaceful settings. For Freire, oppression was the systematic suppression of the right of individuals to collectively understand and transform their world.7 As an educator working with Brazil’s illiterate poor in the mid-twentieth century, Freire developed his concept of oppression to describe the way the ruling class prevented lower classes from comprehending and overcoming their domination. What is perhaps most surprising about Freire’s work on oppression, however, is his argument that education, typically conceived of as an instrument of self and social betterment, is often a critical site for carrying out oppression. In his 1968 book Pedagogy of the Oppressed, he described how the many seemingly civil activities of education are in fact techniques of domination. Although there is not space here to thoroughly discuss those activities, we should note how well his definition of oppression describes the political reality of digital technology operating under the logic of surveillance capitalism. Like the type of education he condemns, dominant forms of digital technology production are designed to keep individuals uninformed and passive. If we accept Freire’s definition of oppression, then we begin to see that the problem with digital technology is not simply the sum of the many political and social issues that have become more visible or more pressing with the rise of digital technology. 
Rather, the problem is that many of the decisions that shape powerful digital technologies are made without the oversight of the humans who are affected most. If the academic believes in democracy, then the academic should recognize that the digital status quo represents a serious threat to its development and survival.

Myth #2: The Politics of Digital Technology Are Irrelevant to the Vocation of the Academic

Even if we acknowledge that there are urgent problems related to the digital status quo, these problems may still seem ultimately beyond the concerns and capabilities of an academic and too abstract to have much consequence in one’s scholarly activities. From this perspective, academic knowledge is academic knowledge regardless of the tools used to produce and transmit it and thus tools should be chosen solely according to their practical value in supporting academic activities. Other concerns, such as digital surveillance and control, are better left to the technologists and politicians.

In many ways, this is a reasonable point of view. We should not blame academics for being more interested in their subjects of expertise than in the tools that they use to investigate and teach those subjects. Knowledge would never get very far if it were consistently required to examine the technological conditions of its production before making its claims. However, too little attention to the technological conditions of knowledge production inhibits us from appreciating the extent to which digital technology already shapes the form, reach, and argumentative structures of our academic practice. As Johanna Drucker and Patrik Svensson observe, everyday academic technologies such as word processing and presentation software “imprint their format features on our thinking and predispose us to organize our thoughts and arguments in conformance with their structuring principles—often from the very beginning of a project’s conception.”8 Without critically assessing the influence of technology on academic practice, we leave both research and education vulnerable to technological logics that may be counter to our interests as both academics and citizens.

Some academics will protest that they are “not technical” and therefore these issues are outside the scope of their interest or capacity. What the self-proclaimed “Luddite” or “technophobe” academic fails to recognize, however, is that research and teaching—even in their more traditional forms—have been deeply shaped by digital technology for more than half a century. Those dusty, analogue books we so adore? The bountiful access to books across university libraries on which scholarly production depends is in no small part due to the fact that university libraries were among the very first sites of higher education to embrace the computer in the 1960s. The circulation, discoverability, and accessibility of massive numbers of books that may seem predigital in fact rely on innovative uses of the computer to automate aspects of library cataloguing work and to share books and catalogue information across library systems.9 Word processing is another example of a digital technology whose ubiquitous presence has become nearly invisible but was reported to have greatly changed the pace and practice of scholarly production by academics who made the switch from typewriters in the early and middle 1980s.10 Thus, even the most traditional forms of humanities research and teaching, such as the monograph or the research paper, have long been shaped by digital technology but often in ways that are no longer visible or have been largely forgotten.

How digital technology shapes the intellectual character of academic practice is an interesting research question that we have only begun to explore. One area in which the influence of digital infrastructure on academic practice is becoming increasingly clear is the practice of student writing. A growing number of scholars have demonstrated how student writing can be positively enhanced when carried out on collaborative or public-facing digital platforms rather than learning management systems.11 However, what I am more interested in here is the way that the invisibility of this relationship within the academy helps foster a broader disregard for the politics of digital technology within the university. Use of digital technology for educational purposes doesn’t simply assist with a learning goal but also profoundly shapes students’ consumer habits, expectations, imagination, and capabilities in regard to digital technology in general. Given that many academics take digital infrastructure for granted, students are rarely taught to question the ways that digital technology shapes their intellectual work or the complex set of interests and ideologies that these technologies serve.

At the beginning of this chapter I described this phenomenon of taught technological passivity as the university’s invisible discipline. This style of adopting and promoting technology use in the university teaches students to become passive users with little expectation of collectively understanding or modifying the various digital technologies that mediate our world. Although it may be hard to imagine a world in which students are given the right to critically understand the technical processes facilitated by their software (such as surveillance in learning management systems and search engines) and the right to transform those technical processes according to their needs and values, many advocates and activists have argued for user rights and protections along these lines. For example, since its formation in the 1980s the Free Software Foundation has advocated for a form of software freedom that guarantees users the right to study, modify, and share software code. Many exciting and often voluntarily built free and open source software projects have been developed along these principles, such as the GNU/Linux operating system, the text editor Emacs, the anonymous browser Tor, and social networks such as GNU Social and Mastodon. The history of computing and educational technology includes examples of experts who advocated for the educational, intellectual, and cultural value of giving ordinary users more control over the design and function of their computing environments. For example, Ted Nelson’s 1974 manifesto Computer Lib presents a book-length argument on why “You Can and Must Understand Computers Now,” and Alan Kay’s 1972 proposal for the Dynabook highlighted the cognitive and practical benefits to be gained if students created and governed their computing environments. 
Today, activist organizations such as the Electronic Frontier Foundation, the Internet Defense League, Fight for the Future, Platform Co-op, and Unlike Us advocate for greater user control and oversight of computing and networked environments.

Although these ideas and organizations have all shone light on the possibility and value of more democratic forms of software production and oversight, they have not yet secured a genuine, popular, and sustainable alternative to surveillance capitalism technologies. Two challenges stand out as obstructing their progress: (1) insufficient financial support, and (2) the lack of mass adoption. Without capitalist business models designed to extract value from user activity, these initiatives have limited resources to develop competitive alternatives to capitalist digital media. And without mass adoption (which is in part a result of these limited resources), they are unable to offer interactive access to a networked population, one of the most valuable features of social platforms with massive user bases.

The challenges of cultivating these alternatives are significant. However, by investing its IT budgets and educational practices in ethical alternatives to surveillant and controlling forms of digital technology, the university could potentially play a powerful role in supporting more ethical digital practices and services. The university, after all, has been a long-term partner in developing a market for technology companies and acculturating student populations to certain products and brands. In the early days of computer science, companies such as IBM offered universities massive discounts on computers for use in computing courses, with the hope of training the next generation of computer scientists on their systems.12 With a broader focus, Apple pursued initiatives, partnerships, and political lobbying to bring its computers into every educational institution in the country with a vision of exposing all students to its products, of which the formation of the Apple Education Foundation in 1979 is one early example.13 As Steve Jobs remarked in an interview, “One of the things that built Apple II’s was schools buying Apple II.”14 In 1998, David Noble observed these commercial activities as part of a transformation of education into a market estimated at several hundred billion dollars, replacing health care as the focal industry in which companies would aim to sell their wares.15

Today, the importance of the educational market for digital technology companies remains as strong as ever, including more recent companies such as Google and Amazon and more recent products like e-textbooks, email services, and cloud storage. For example, Apple CEO Tim Cook credited students for a 21 percent increase in Mac sales in 2014.16 In a 2007 article for Inside Higher Ed, Andy Guess observed that Microsoft and Google were providing free email services to universities with the hope that “they’ll have won users for life.”17 Jeff Keltner, Google’s enterprise specialist for collaboration products, is quoted in the article as saying, “We think students are going to take these tools out to their personal lives, their professional lives.” In a similar vein, tech writer Brian Heater has observed that digital technology companies’ intense interest in education is not as altruistic as they might want it to seem. “Fostering an entire generation of first-time computer users with your software and device ecosystem,” he writes, “could mean developing lifelong loyalties, which is precisely why all this knock-down, drag-out fight won’t be drawing to a close any time soon.”18

The public-private partnerships between schools and technology companies reflect a broader trend of commercial activities carried out in and through schools (such as direct advertising, provision of corporate-sponsored educational materials and teacher training, and market research) that generate cash, equipment, or other types of assistance to private companies. Prior to 1983, only 17 percent of elementary and secondary schools in the United States were reported to have partnerships with private companies. By 2000, however, there were several hundred thousand partnerships between schools and businesses, contributing an estimated $2.4 billion in aid to schools.19 Public-private partnerships with information technology companies, however, are qualitatively different from partnerships with other companies that have typically participated in these relationships, such as those in the publishing or food and beverage industry. For one thing, as Juneau and Jaron Lanier note, information technologies are capable of locking users (in this case, the institutions and the faculty and students they serve) into certain digital services and infrastructure because of the complexity, inconvenience, and sometimes impossibility of transferring individual or institutional data and network relations from one digital service to another or changing an entire institution’s technological practice and habits.20 These technologies also generate valuable user data, often without full consideration by the user, for the purposes of capitalist accumulation. On one hand, these collection practices potentially leave users, including students, vulnerable to future discrimination as personal data is increasingly used to make predictions about individuals’ professional, criminal, consumer, or other types of behavior.21 However, even barring such potential discrimination, this collection nonetheless represents a violation of student privacy and autonomy in and of itself.

Although academics may consider data surveillance and other forms of digital oppression as irrelevant to the subject matter of their research or teaching, these forms of oppression are nonetheless often facilitated and normalized through the digital practices that they use for research and teaching. Just as thinkers within the tradition of critical pedagogy argue that standard practices of education work in subtle or surprising ways to reproduce racial, social, and class-based inequalities, we should recognize that academic practices have unwittingly come to play an important role in reproducing an oppressive digital status quo.22 Normalizing a passive acceptance toward software within the university reinforces a passive mentality toward software at large, reproduces the class division between technology users and technology makers, reifies an understanding of software as a neutral utility that need not and cannot involve the general user’s participation in design and governance, and inhibits the cultivation of skills and organization that would enable users to collectively understand and modify software according to their diverse needs and interests. Altogether, this culture of software use in the university reinforces a broader culture of software use driven solely by private interest rather than governed and shaped by the needs and interests of a community of users. Although the academic may not be aware of it, that person is often a key facilitator in this process.

Myth #3: The Typical Academic Cannot Meaningfully Respond to These Issues

But what can the academic do, constrained by professional demands and limited technical capabilities, to actually resist digital oppression? In an academic culture submerged in the corporate ethos of speed, a rejection of popular digital tools and the important time-saving, collaborative, and networking affordances they offer may feel equivalent to professional suicide.23 Additionally, some digital technologies are extraordinarily helpful for implementing progressive, collaborative, student-centered, and publicly engaged learning in the classroom as critical alternatives to institutional technologies that are often driven by management imperatives rather than educational principles.24 Outside of the classroom, social networking tools are also essential for growing activist or research networks across institutional boundaries or simply connecting with a supportive community during the many solitary and anxiety-ridden stages of an academic career. Even though certainly not all academics consult the “hivemind” on platforms like Twitter or Facebook, use Google Docs to cowrite articles and conference proposals, or turn to Gmail as a personal or overflow email service (if it does not already power their university email service), it is unlikely that even self-described academic Luddites manage to avoid the tools of surveillance capitalism—be it a smartphone, an operating system, or a web search engine—for even a single working day. Thoroughly rejecting digitally oppressive tools is not only counterproductive to progressive aims, but it may well be impossible.

It is also unclear which tools represent more ethical alternatives. Although Facebook and Google are currently getting the most critical press, the logic of surveillance capitalism touches nearly the entire ecosystem of digital technology. University-provided digital services are not as free from forms of surveillance as we may like to think, especially as they become increasingly supported through partnerships with Google, Microsoft, or Amazon Web Services. Furthermore, as Estee Beck et al., Audrey Watters, and Chris Gilliard have noted, forms of student surveillance have been thoroughly normalized by commercial learning management systems as a feature for instructors to keep track of students.25 Faculty, along with the rest of the academic community, have also come to discover that their universities may be secretly spying on them. In 2016, it came to light that University of California president Janet Napolitano ordered the installation of computer hardware to allow surveillance of all online activity across the UC system.26

Third-party tools that their providers describe as open are not necessarily any better. In a critique of what she calls “openwashing,” Audrey Watters cautions against trusting software that a vendor describes as open but that is nonetheless guided too strongly by profit-seeking motives at the cost of educational values.27 Even tools that have gained credibility in ethically oriented user communities are not immune from sudden changes in practice and vision. Microsoft’s acquisition of the version control platform GitHub and Elsevier’s acquisition of the open-access publishing company bepress are just two examples of how corporate behemoths can easily co-opt tools that many had considered ethical alternatives. The fact of the matter is that rejecting tools that enforce digital oppression would take an extensive amount of expertise, commitment, and sacrifice, if it were possible at all. The academic, many would agree, is simply not the person needed or able to solve the problems of digital oppression.

Certainly, it will take more than academics alone to work toward a more ethical digital future. However, as members of an institution designed to educate society, academics have a unique opportunity to help foster a critical and engaged technological consciousness in students and society at large. Rather than teaching students to unquestioningly accept academic technologies, we might train them in the practices and values of community-governed software and encourage them to consider how they might continue to shape digital tools according to their needs and interests. If we wish our digital world to reflect democratic values, then it is essential that we educate all our citizens to critically participate in its making.

Developing the type of critical and participatory consciousness appropriate for our age is no small task. However, it is precisely the sort of task for which the social institution of higher education is designed. Although the road may not be easy or straightforward, we need to think hard about how we as academic individuals and communities can do a better job in showing students that the politics of digital technology matter. We also need to give students experience in working together to overcome digital oppression and negotiating the technical, social, and institutional challenges that come with that process. As Chris Gilliard has written, “if higher education is to ‘save the web,’ we need to let students envision that something else is possible, and we need to enact those practices in classrooms.”28 The cost of our complacency is not simply our individual privacy but is also the training of a generation to helplessly accept digital oppression as if there is no other choice.

Toward Student-Governed Technology

How do we develop this new form of participatory technological consciousness in our academic work? There are many different steps we can take as individuals and as institutions, but I would like to first recount a personal project that I hope further articulates what I mean by participatory technological consciousness. In 2014, as a graduate student in the English department at the CUNY Graduate Center, I began codeveloping the networked writing platform Social Paper with Matthew Gold, Jennifer Stoops (then a graduate student in the Urban Education department), and the Commons in a Box development team. As budding scholars in the digital humanities, we were not only inspired to apply our humanities scholarly perspectives to the digital infrastructure that supported our educational practices but were also connected to a community of digital humanities practitioners who could help us develop new infrastructure on the basis of our insights. The objective of our project was to create a digital hub for networking student writing and feedback across classes, disciplines, institutions, and academic terms that would help cultivate genuine and sustained publics around student writing. Jennifer and I were inspired to build Social Paper because we had experienced firsthand the positive effects of networked student writing environments, such as the course blog, but recognized a number of ways in which networked environments might be improved in order to more fruitfully support cross-institutional and cross-disciplinary student writing communities. Networked student writing environments were often hosted in one-off virtual spaces (such as a blog) and rarely had the opportunity to grow into a reflexive community of writers that transcended the class or the academic term. 
These virtual spaces also often lacked the granular permission settings, robust social functionalities, and large networks of users that had played important roles in user growth and engagement on digital networks like Facebook and Twitter. Our hope was that an easy-to-use networked writing hub would encourage a greater number of students to experiment with networked writing practices while allowing a community of student writers to grow over time. I, personally, was excited to see whether the growth of such a community might transform the way that students thought about their assigned writing and whether they might begin to dream up new social and political purposes for their writing that transcended its use as a demonstration of a mandated learning goal.

I was also excited about the political potential of Social Paper. Although its first round of development involved a small team of developers, I was hopeful that Social Paper might one day turn into a writing platform on which students played a key role in developing its policies (such as those related to user data collection and code of conduct), its design, and its functionality. A community-controlled platform would give students the ability to analyze data for self or community study in ways typically available only to corporate administrative entities. In both these respects, it would provide an opportunity for students to critically understand and transform the digital medium of intellectual production in the spirit of Freire’s liberatory practice. Although these practices might seem unrelated to the objectives of student writing, I hoped they would provide the opportunity for students to better explore the extent to which our digital tools influence the way that we produce, circulate, read, and respond to writing. Students might begin to ask how educational technologies (such as Turnitin, Blackboard, Canvas, and Google Classroom) reflect certain ideologies about the purpose of student writing and in turn influence the character, economic realities, and social effects of that practice. A community-controlled platform would also offer the opportunity for students to experiment with its design and functionalities to see how different decisions in these areas might influence the intellectual and social character of their writing and intellectual exchange.

The idea for a student-run writing platform was partly inspired by the idea of a student newspaper. Just as the student newspaper has trained countless students in the role, methods, and complexities of one of the most important media institutions of the twentieth century, a student-governed digital platform would help train students in the challenges and opportunities of communication in a digital age. Not every student would need to be heavily invested in building the tool (that could be reserved for special student organizations, classes, and internships), but every student might have access to it and understand its mission. What if the university redirected the funds it currently spends licensing commercial educational technology to build a community-governed platform for its students that might also be open to the public?

Such a utopian vision for educational technology may never come to pass. But even if a large-scale student-governed platform is unrealistic, there are still many opportunities for us to encourage and enable students to critically evaluate the technologies used to support their education and to push for opportunities to allow them to help shape and govern those technologies and their policies. At the very least, we should make sure to no longer allow forms of digital oppression to pass through the classroom invisibly. If we must use digital tools that are known to surveil or exploit user data for corporate or administrative use, we should make sure students are aware of these practices through the syllabus or class discussion. In this spirit, Autumm Caines and I published in EDUCAUSE Review a reusable syllabus statement to help instructors alert students to these issues.29

There are also a number of exciting open source and community-driven tools that instructors can use in their teaching. While it is unlikely that a course would be able to entirely avoid forms of digital surveillance and control, incorporating alternatives in the classroom is a powerful way of showing students that different technical choices come with different political, intellectual, and technical challenges and opportunities. One might simply experiment with a course blog as an alternative to the learning management system, as many educators have described in journals and on community sites like the Journal of Interactive Technology and Pedagogy, Hybrid Pedagogy, Kairos, and HASTAC, to name just a few. Although blogs are not neutral technologies (there is no such thing) and cannot be considered entirely free from digital surveillance and control, they offer much greater forms of user privacy and user autonomy than many other forms of educational technology. Blogs often allow for more collaborative and public-facing forms of learning and also give the instructor and students more autonomy around the privacy settings and design of the site. For example, in his dissertation project “My Digital Footprint,”30 Gregory Donovan created a youth research group in which youth participants determined the privacy settings and design features of the blog space through which they communicated. Donovan’s research suggests that giving the youth participants greater control over their privacy settings helped them develop a greater critical awareness of digital privacy issues more broadly, and it should encourage us to incorporate similar practices in education at large.

Blogs, of course, come with their own sets of considerations. When setting up a blog, one has to consider questions such as how it will be hosted, who will fund the hosting, and who will have administrative control. Though there is not space enough here to discuss the many possibilities and the various benefits and drawbacks of different types of blog setups, I would like to make a few recommendations for individuals whose institutions do not provide blogging services. One exciting option is the Modern Language Association’s Humanities Commons, which provides free WordPress blogging tools and hosting for research and instructional use. On Humanities Commons, course blogs are not only free to host, but they are also embedded in a growing scholarly network that includes scholarly websites, group forums for discussing scholarly topics and building community, and an open access repository for hosting scholarly work. Use of Humanities Commons also represents a vote for community-driven academic software as opposed to for-profit academic platforms such as Academia.edu that have no user oversight.31 More ambitious instructors may consider setting up a digital commons using the free and open source software Commons in a Box (CBOX) that powers Humanities Commons. Although setting up a CBOX for one’s home institution involves a fair amount of work and often requires some form of institutional commitment, CBOX gives its administrators much more freedom to shape the space, functionality, and network as they like.32 Domain of One’s Own and Reclaim Hosting are also great options for providing students and faculty with full control over their online environments, although they require institutional funding.

I have focused on blogging opportunities, but there are many other free digital tools and practices that can be experimented with in the spirit of digital liberation. The nonlinear publishing platform Scalar, the annotation platform Hypothes.is, and Wikipedia are just a few examples of these possibilities, and ideas for how to use these tools in teaching can be found in some of the journals and sites mentioned previously. What is important to note, however, is that digital liberation is not secured through the adoption of any one particular type of tool but rather through the fostering of critical technological consciousness in students. Any tool, as we have come to learn, can be co-opted for private interest. And even community-driven tools deserve our ongoing critical attention as a means to improve their capacity for serving our diverse educational goals, social and political values, and scholarly communities. Sometimes the most important questions to ask about technologies are social rather than technical. A critical understanding of our technological environments should also inspect the ways our organizations and cultural practices establish or impose technical norms on large communities. We should ask who gets to make these decisions, in whose interests, and how we might make these decision-making processes more democratic.

It is my hope that the humanities, especially with the help of the community of digital humanities scholars, will recognize the extraordinary contributions that their educational and technological practices could make toward forging a more critical and participatory technological consciousness in students. History, philosophy, literature, the arts, and other humanities disciplines all have much to offer in consideration of the many issues arising from digital technology, and each discipline could in turn benefit from this technological engagement. Indeed, this sort of extra attention to the technologies used for educational purposes may feel difficult at times or irrelevant to the subject of study. But if the point of higher education is to prepare students to critically understand and act in our digitally mediated world, then it is nonetheless our duty.

Notes

I would like to thank panel organizer Jeff Alred and fellow panelists Lawrence Hanley and Jeremy Dean for the opportunity to present my then in-progress dissertation work in the form of this paper at the 2017 American Studies Association conference.

  Due to the COVID-19 pandemic, the use of educational technology has significantly expanded and intensified since the writing of this paper in 2017. Nonetheless, I believe that the emancipatory approach of student-governed technology sketched out here remains just as relevant, if not more urgent.

  1. Johnson, “Privacy No Longer a Social Norm.”

  2. MacKinnon, Consent of the Networked; Van Dijck, Culture of Connectivity; Fuchs et al., Internet and Surveillance; Tufekci, Twitter and Tear Gas; and Lyon, “Surveillance, Snowden, and Big Data.”

  3. Terranova, “Free Labor”; Scholz, “Platform Cooperativism”; and Fuchs, Digital Labour and Karl Marx.

  4. Cheney-Lippold, “A New Algorithmic Identity”; Lessig, “Code Is Law”; Galloway, “Protocol”; and Gillespie, “Relevance of Algorithms,” 167.

  5. Burgess and Matamoros-Fernández, “Mapping Sociocultural Controversies”; and Noble, Algorithms of Oppression.

  6. Zuboff, “Big Other,” 75.

  7. Freire, Pedagogy of the Oppressed, 44–45.

  8. Drucker and Svensson, “Why and How of Middleware.”

  9. Salmon, “LITA’s First Twenty-Five Years,” 15.

  10. Moran, “Electronic Media,” 113–15; and Case, “Processing Professorial Words.”

  11. Savonick and Tagliaferri, “Building a Student-Centered (Digital) Learning Community”; and Davidson and Goldberg, Future of Learning Institutions.

  12. Chopra and Dexter, Decoding Liberation.

  13. Juneau, “Reflection on the History”; and Lundall, “On-Line Data-Base Revenues to Pass $1 Billion.”

  14. Morrow, “Excerpts from an Oral History Interview.”

  15. Noble, “Digital Diploma Mills.”

  16. McCracken, “Apple Story.”

  17. Guess, “When E-Mail Is Outsourced.”

  18. Heater, “As Chromebook Sales Soar in Schools.”

  19. Kowalski, “Public-Private Partnerships.”

  20. Juneau, “Reflection on the History”; and Lanier, You Are Not a Gadget.

  21. O’Neil, Weapons of Math Destruction.

  22. hooks, Teaching to Transgress; and Shor, Critical Teaching.

  23. Berg and Seeber, The Slow Professor.

  24. Chu and Kennedy, “Using Online Collaborative Tools”; Stommel, “If bell hooks Made an LMS”; and Watters, “Beyond the LMS.”

  25. Beck et al., “Writing in an Age of Surveillance”; Watters, “Ed-Tech in a Time of Trump”; and Gilliard, “Pedagogy and the Logic of Platforms,” 64–65.

  26. Matier and Ross, “Cal Professors Fear UC Bosses.”

  27. Watters, “From ‘Open’ to Justice #OpenCon2014.”

  28. Gilliard, “Pedagogy and the Logic of Platforms,” 64–65.

  29. Caines and Glass, “Education before Regulation.”

  30. Donovan, “MyDigitalFootprint.ORG.”

  31. For a brief overview of the MLA Humanities Commons and its values, see an interview with its former director Kathleen Fitzpatrick in “Humanities Commons: Networking the Humanities through Open Access, Open Source and Not-for-Profit,” in Scholarly Kitchen.

  32. For instance, KNIT, the digital commons I direct at UC San Diego, has been opened to students and faculty at neighboring universities and community colleges to encourage more forms of cross-institutional collaboration and public engagement.

Bibliography

  1. Beck, Estee, A. Crow, H. McKee, C. Reilly, J. deWinter, Stephanie Vie, Laura Gonzales, and D. DeVoss. “Writing in an Age of Surveillance, Privacy, and Net Neutrality.” 2016. https://www.semanticscholar.org/paper/Writing-in-an-Age-of-Surveillance%2C-Privacy%2C-and-Net-Beck-Crow/607af395b5d1cfa615e72bf533e631a02a6acebd.

  2. Berg, Maggie, and Barbara K. Seeber. The Slow Professor. Toronto: University of Toronto Press, 2018.

  3. Burgess, Jean, and Ariadna Matamoros-Fernández. “Mapping Sociocultural Controversies across Digital Media Platforms: One Week of #gamergate on Twitter, YouTube, and Tumblr.” Communication Research and Practice 2, no. 1 (2016): 79–96.

  4. Caines, Autumm, and Erin Glass. “Education before Regulation: Empowering Students to Question Their Data Privacy.” Educause Review, October 13, 2019. https://er.educause.edu/articles/2019/10/education-before-regulation-empowering-students-to-question-their-data-privacy.

  5. Case, Donald. “Processing Professorial Words: Personal Computers and the Writing Habits of University Professors.” College Composition and Communication 36, no. 3 (1985): 317–22.

  6. Cheney-Lippold, John. “A New Algorithmic Identity: Soft Biopolitics and the Modulation of Control.” Theory, Culture & Society 28, no. 6 (2011): 164–81.

  7. Chopra, Samir, and Scott D. Dexter. Decoding Liberation: The Promise of Free and Open Source Software. New York: Routledge, 2008.

  8. Chu, Samuel Kai-Wai, and David M. Kennedy. “Using Online Collaborative Tools for Groups to Co-construct Knowledge.” Online Information Review 35, no. 4 (2011): 581–97.

  9. Davidson, Cathy N., and David Theo Goldberg. The Future of Learning Institutions in a Digital Age. Cambridge, Mass.: MIT Press, 2009.

  10. Donovan, Gregory Thomas. “MyDigitalFootprint.ORG: Young People and the Proprietary Ecology of Everyday Data.” Ph.D. dissertation, Graduate Center, City University of New York, 2013.

  11. Drucker, Johanna, and Patrik Svensson. “The Why and How of Middleware.” Digital Humanities Quarterly 10, no. 2 (2016). http://www.digitalhumanities.org/dhq/vol/10/2/000248/000248.html.

  12. Freire, Paulo. Pedagogy of the Oppressed. New York: Continuum, 1996.

  13. Fuchs, Christian. Digital Labour and Karl Marx. London: Routledge, 2014.

  14. Fuchs, Christian, Kees Boersma, Anders Albrechtslund, and Marisol Sandoval. Internet and Surveillance: The Challenges of Web 2.0 and Social Media. New York: Routledge, 2013.

  15. Galloway, Alexander R. “Protocol.” Theory, Culture & Society 23, no. 2–3 (2006): 317–20.

  16. Gillespie, Tarleton. “The Relevance of Algorithms.” In Media Technologies: Essays on Communication, Materiality, and Society, edited by Tarleton Gillespie, Pablo J. Boczkowski, and Kirsten A. Foot. Cambridge, Mass.: MIT Press, 2014.

  17. Gilliard, Chris. “Pedagogy and the Logic of Platforms.” Educause Review 52, no. 4 (2017): 64–65.

  18. Guess, Andy. “When E-Mail Is Outsourced.” Inside Higher Ed, November 27, 2007. https://www.insidehighered.com/news/2007/11/27/when-e-mail-outsourced.

  19. Heater, Brian. “As Chromebook Sales Soar in Schools, Apple and Microsoft Fight Back.” TechCrunch, April 27, 2017. https://techcrunch.com/2017/04/27/as-chromebook-sales-soar-in-schools-apple-and-microsoft-fight-back/.

  20. hooks, bell. Teaching to Transgress: Education as the Practice of Freedom. New York: Routledge, 2014.

  21. Johnson, Bobbie. “Privacy No Longer a Social Norm, Says Facebook Founder.” The Guardian, January 11, 2010. https://www.theguardian.com/technology/2010/jan/11/facebook-privacy.

  22. Juneau, Karen R. “A Reflection on the History of Educational Technology and Evolving Pedagogies.” In Technology Integration and Foundations for Effective Leadership, edited by Shuyan Wang and Taralynn Hartsell. Hershey, Pa.: IGI Global, 2013.

  23. Kowalski, Theodore J. “Public-Private Partnerships, Civic Engagement, and School Reform.” Journal of Thought 45, no. 3–4 (2010): 71–93.

  24. Lanier, Jaron. You Are Not a Gadget: A Manifesto. New York: Vintage, 2010.

  25. Lessig, Lawrence. “Code Is Law.” The Industry Standard 18 (1999).

  26. Lundall, Allan. “On-Line Data-Base Revenues to Pass $1 Billion.” InfoWorld 3, no. 15 (July 27, 1981).

  27. Lyon, David. “Surveillance, Snowden, and Big Data: Capacities, Consequences, Critique.” Big Data & Society 1, no. 2 (2014).

  28. MacKinnon, Rebecca. Consent of the Networked: The Worldwide Struggle for Internet Freedom. New York: Basic Books, 2013.

  29. Matier, Phil, and Andy Ross. “Cal Professors Fear UC Bosses Will Snoop on Them.” San Francisco Chronicle, January 29, 2016. https://www.sfchronicle.com/bayarea/matier-ross/article/Cal-professors-fear-UC-bosses-will-snoop-on-them-6794646.php.

  30. McCracken, Harry. “The Apple Story Is an Education Story: A Steve Jobs Triumph Missing from the Movie.” The 74, October 26, 2015. https://www.the74million.org/article/the-apple-story-is-an-education-story-a-steve-jobs-triumph-missing-from-the-movie/.

  31. Moran, Charles. “Electronic Media: Word Processing and the Teaching of Writing.” The English Journal 72, no. 3 (1983): 113–15.

  32. Morrow, Daniel. “Excerpts from an Oral History Interview with Steve Jobs.” Smithsonian Institution Oral and Video History, 1995. https://americanhistory.si.edu/comphist/sj1.html.

  33. Noble, David. Digital Diploma Mills: The Automation of Higher Education. New York: Monthly Review Press, 2003.

  34. Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, 2018.

  35. O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown, 2016.

  36. Salmon, Stephen R. “LITA’s First Twenty-Five Years: A Brief History.” Information Technology and Libraries 12, no. 1 (1993): 15.

  37. Savonick, Danica, and Lisa Tagliaferri. “Building a Student-Centered (Digital) Learning Community with Undergraduates.” DHQ: Digital Humanities Quarterly 11, no. 3 (2017): 67–79.

  38. Scholz, Trebor. “Platform Cooperativism.” In Challenging the Corporate Sharing Economy. New York: Rosa Luxemburg Foundation, 2016.

  39. Shor, Ira. Critical Teaching and Everyday Life. Chicago: South End Press, 1980.

  40. Stommel, Jesse. “If bell hooks Made an LMS: Grades, Radical Openness and Domain of One’s Own.” Jesse Stommel, June 5. https://www.jessestommel.com/if-bell-hooks-made-an-lms-grades-radical-openness-and-domain-of-ones-own/.

  41. Terranova, Tiziana. “Free Labor: Producing Culture for the Digital Economy.” Social Text 18, no. 2 (2000): 33–58.

  42. Tufekci, Zeynep. Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven, Conn.: Yale University Press, 2017.

  43. Van Dijck, José. The Culture of Connectivity: A Critical History of Social Media. New York: Oxford University Press, 2013.

  44. Watters, Audrey. “Beyond the LMS.” Hack Education, September 5, 2014. http://hackeducation.com/2014/09/05/beyond-the-lms-newcastle-university.

  45. Watters, Audrey. “Ed-Tech in a Time of Trump.” Hack Education, February 2, 2017. http://hackeducation.com/2017/02/02/ed-tech-and-trump.

  46. Watters, Audrey. “From ‘Open’ to Justice #OpenCon2014.” Hack Education, November 16, 2014. http://hackeducation.com/2014/11/16/from-open-to-justice.

  47. Zuboff, Shoshana. “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.” Journal of Information Technology 30, no. 1 (2015): 75–89.

Copyright 2021 by the Regents of the University of Minnesota