Chapter 4
Relation-Oriented AI: Why Indigenous Protocols Matter for the Digital Humanities
Michelle Lee Brown, Hēmi Whaanga, and Jason Edward Lewis
Recent discussions around the ethical design and use of artificial intelligence (AI) treat AI systems and their materials and energy sources as discrete units. That is, it is as if the materials out of which they are made, the processes by which those materials are collected, refined, and shaped, and the communities involved in their creation are all immaterial. Yet these aspects matter: they shape the systems they hold and run. These materialities, such as silicon, plastic, aluminum, and yttrium, shape our cues and protocols for how and when to engage with, disengage from, or abstain from AI systems. By AI systems, we mean the constellation of computational technologies aimed at replicating key components of human intelligence, such as language use, reasoning, and agency. Examining our relationships with AI from Indigenous perspectives, while centering Indigenous epistemologies and ontologies in AI discussions and designs, is crucial for guiding our decisions about these systems (Lewis et al.). Digital humanities (DH) is a space where issues of history, culture, and context converge with technical concerns, and so it is a natural place to develop and promote Indigenous guidelines like these.
Artificial Intelligence
AI is developing rapidly, and the maturation of large-scale machine learning, deep learning, big data analysis, and computational neural networks has opened up research on the potential of building intelligent systems “that can collaborate effectively with people, including creative ways to develop interactive and scalable ways for people to teach robots” (Stone et al., 9). Coupled with simultaneous growth in robotics, the internet of things, three-dimensional (3D) printing, nanotechnology, genome editing, quantum computing, advanced biology, and other technologies, these developments blur the lines between the physical, biological, and digital realms. Although at different stages of development and deployment, these advances will fundamentally change the way we socialize, display, access, manage, create, and exchange information and data. As our relationships with technology become more nuanced, fluid, and personalized, understanding the behavior of AI systems becomes ever more critical “to our ability to control their actions, reap their benefits and minimize their harms” (Rahwan et al., 477). AI systems such as those used in sentencing, facial recognition, mortgage assessment, and health diagnosis have increasing influence over our social, cultural, economic, and political interactions even while the scale, complexity, and future impact of their power are still unknown.
Nation-states, corporations, and public and private organizations in Montreal, Toronto, the European Union, Oceania, and elsewhere have recently published, or are about to publish, a range of declarations and manifestos on machine ethics and its implications for the design of AI systems.1 A quick scan through these documents highlights a broad approach to implementing AI policy in areas such as “scientific research, talent development, skills and education, public and private sector adoption, ethics and inclusion, standards and regulations, and data and digital infrastructure” (Dutton). Premised on establishing global principles and standards, corporate governance and compliance, industrial competitiveness, and sustainable development (Renda), these standards and regulations outline the common good and benefit for humanity, establish principles of fairness and intelligibility, address data and privacy rights, and propose benefit sharing and restrictions or outright bans on vesting AI with the autonomous power to hurt, destroy, or deceive humans (IEEE).
Indigenous communities are concerned about the absence of Indigenous voices and perspectives in the development of these declarations and manifestos. While a small number of reports (e.g., Gavaghan et al.; Walsh et al.) discuss well-being, equity, self-determination, algorithm uses, and Indigenous data sovereignty in Australia and New Zealand, the global dialogue around AI rarely takes up the issues and aspirations of Indigenous rights. Given the long history of technological advances being used against Indigenous people (Arnold; Guiliano and Heitman; Walter and Suina), it is increasingly imperative that Indigenous peoples engage with this latest paradigm shift. If, however, these conversations continue to be dominated by relatively culturally homogeneous research labs and a Silicon Valley start-up culture defined through a Western techno-utilitarian lens, we will fail to grasp AI’s true benefits now and into the future. As noted by Peter Stone and colleagues, “Though AI algorithms may be capable of making less biased decisions than a typical person, it remains a deep technical challenge to ensure that the data that inform AI-based decisions can be kept free from biases that could lead to discrimination based on race, sexual orientation, or other factors” (10). Indigenous knowledge protocols offer one potential avenue for meeting these challenges.
Protocols
Protocols differ greatly across Indigenous communities. Informed by the specific epistemologies of the communities using them, protocols establish customs, lore, and codes and standards of behavior; they address ethics, rules, regulations, processes, procedures, guidelines, and relationships.
At the core of many Indigenous epistemologies is the belief that humans do not sit at the center of all creation. The acknowledgment of kinship networks with animals and plants, wind and rock, mountain and ocean underpins the protocols that enable us to engage in dialogue with our nonhuman kin (Lewis et al.). These relationships connect the land to the sea and to skyscapes, from the human and nonhuman to animate and inanimate entities.
In Indigenous contexts, protocols are understood in a number of ways. For example, Angelina Hurley describes protocols “as a set of rules, regulations, processes, procedures, strategies, or guidelines. Protocols are simply the ways in which you work with people, and communicate and collaborate with them appropriately. . . . Protocols are the standards of behaviour, respect and knowledge that need to be adopted. You might even think of them as a code of manners to observe, rather than a set of rules to obey” (3). Protocol also refers to the guiding principles and methodology for conducting oneself in any activity:
Protocols exist as standards of behaviour used by people to show respect to one another. Cultural protocol refers to the customs, lore and codes of behaviour of a particular cultural group and a way of conducting business. It also refers to the protocols and procedures used to guide the observance of traditional knowledge and practices, including how traditional knowledge is used, recorded and disseminated. (Secretariat of National Aboriginal and Islander Child Care)
Protocols are passed down and learned from one generation to the next. They are built on, modified, improved, and adjusted from one context to another, from formal settings to more informal ones. Learning, understanding, teaching, and following proper protocol are at the core of most Indigenous interactions. Thus, when any new development, idea, concept, or entity, such as AI, is introduced into the epistemological and ontological domain, new parameters and protocols for its inclusion need to be established. What, then, would an Indigenous conversation with AI look like, and how might we initiate a type of discussion based on Indigenous protocols?
The Indigenous Protocol and AI Workshops
These types of questions about AI and protocol prompted the Indigenous Protocol and AI (IP-AI) Workshops in which the authors of this chapter participated.2 The two IP-AI Workshops, held in 2019 on Kanaka Maoli territory on the Hawaiian island of O‘ahu, brought together thirty-five Indigenous and non-Indigenous participants, including members of Kanaka Maoli, Māori, Trawlwoolway, Euskaldunak, Baradha, Kapalbara, Samoan, Cree, Lakota, Cherokee, Coquille, Cheyenne, and Crow communities from across North America and Oceania.3 The participants work as technologists, artists, scientists, cultural knowledge keepers, language keepers, and public policy experts across a variety of disciplinary backgrounds, including machine learning, design, symbolic systems, cognition and computation, visual and performing arts, philosophy, linguistics, anthropology, and sociology. A central proposition of the gathering was to critically examine the relationship between AI and Indigenous communities and, in particular, the question of whether “AI should be given a place in our existing circle of relationships, and, if so, how we might go about bringing it into the circle?” Other questions were interwoven into the discussion: How can Indigenous epistemologies and ontologies contribute to the global conversation regarding society and AI? How do we broaden discussions regarding the role of technology in society beyond relatively culturally homogeneous research labs and Silicon Valley start-up culture? How do we imagine a future with AI that contributes to the flourishing of all humans and nonhumans?
Keeping in mind that no single “Indigenous perspective” on AI exists, the aim of the workshops was to open the dialogue to the multiplicity of Indigenous knowledge systems and technological practices that currently exist. At the forefront of our minds was honoring the voices of our communities through a reciprocal dialogue of respect. First and foremost, we are accountable to our communities, and participants all recognized that any work that emerged from the discussion was but one moment in a much longer dialogue. That work coalesced around five broad themes:
- 1. Hardware and Software Sovereignty—Asserting control over the AI systems that we use so that we can trust them to support us in carrying out our responsibilities to our communities.
- 2. How to Build Anything Ethically—Designing and building AI systems for and by Indigenous peoples that reflect and incorporate our ideas about kinship with nonhuman entities and our concomitant respectful relationship with them.
- 3. Language, Landscape, and Culture—Ensuring that the understanding of and respect for territory (and the languages and cultures that grow from specific territories) is built into the foundation of AI systems such that they help us care for territory rather than exploit it.
- 4. Art Practice as Value Practice—Affirming the role of art in the production and sharing of knowledge in Indigenous communities. Art enables us to envision how we want AI systems to evolve, so that developers can understand and implement Indigenous values.
- 5. AI as Skabe (Helper)—Finding the middle ground between Blade Runner (AI as slave) and Terminator (AI as tyrant), where AI and humans are in a reciprocal relationship of care and support.
These themes informed our discussion on protocols. The resulting discussions were brought together in a mixed collection of texts reflecting the diverse nature of the group, ranging from design guidelines and scholarly essays to artworks, descriptions of technology prototypes, and poetry (Lewis, Indigenous Protocol and Artificial Intelligence Position Paper). For our Indigenous communities, the IP-AI Guidelines outlined below have been developed as a starting point to assist them in defining their own community-specific guidelines. For non-Indigenous technologists and policy makers, we envision these guidelines helping to initiate a productive conversation with Indigenous communities about how to enter into collaborative technology development efforts.
IP-AI Guidelines
The purpose of the guidelines developed at the workshops is to assist and guide the development of AI toward morally and socially desirable ends. We refrained from describing them as a declaration or manifesto because we see them as the beginning of a larger conversation. We understand that they will be modified, adapted, and updated as they circulate to reflect the needs of specific Indigenous nations and communities. The goal of the guidelines is to promote the intergenerational transmission of knowledge, ceremony, and practice; to connect and enhance Indigenous communities; and to frame our relationships to the land, sea, and skyscapes. These guidelines are offered to any person, group, organization, institute, company, or political or government representative that wishes to undertake responsible and fair development of AI with Indigenous communities. This responsibility includes, among other things, contributing to scientific or technological progress, project development, rules and regulations, codes of conduct and algorithm development, methodological approaches, and public opinion.
Seven principles were developed from the broader discussions with the IP-AI participants. Even though these guidelines are presented as a list, there is no hierarchy in their ordering; the first principle is no more or less important than the final one:
- 1. Locality
Indigenous knowledge is often rooted in specific territories. It is also useful in considering issues of global importance.
- • AI systems should be designed in partnership with specific Indigenous communities to ensure the systems are capable of responding to and helping care for that community (i.e., grounded in the local) as well as connecting to global contexts (i.e., connected to the universal).
- 2. Relationality and Reciprocity
Indigenous knowledge is often relational knowledge.
- • AI systems should be designed to understand how humans and nonhumans are related to and codependent on each other. Understanding, supporting, and encoding these relationships is a primary design goal.
- • AI systems are also part of the circle of relationships. Their place and status in that circle will depend on specific communities and their protocols for understanding, acknowledging, and incorporating new entities into that circle.
- 3. Responsibility, Relevance, and Accountability
Indigenous people are often concerned primarily with their responsibilities to their communities.
- • AI systems developed by, with, or for Indigenous communities should be responsible to those communities, provide relevant support, and be accountable to those communities first and foremost.
- 4. Develop Governance Guidelines from Indigenous Protocols
Protocol is a customary set of rules that govern behavior.
- • Protocol is developed out of ontological, epistemological, and customary configurations of knowledge grounded in locality, relationality, and responsibility.
- • Indigenous protocol should provide the foundation for developing governance frameworks that guide the use, role, and rights of AI entities in society.
- • There is a need to adapt existing protocols and develop new protocols for designing, building, and deploying AI systems. These protocols may be particular to specific communities, or they may be developed with a broader focus that may function across many Indigenous and non-Indigenous communities.
- 5. Recognize the Cultural Nature of All Computational Technology
All technical systems are cultural and social systems. Every piece of technology is an expression of cultural and social frameworks for understanding and engaging with the world. AI system designers need to be aware of their own cultural frameworks, socially dominant concepts, and normative ideals; be wary of the biases that come with them; and develop strategies for accommodating other cultural and social frameworks.
- • Computation is a cultural material. Computation is at the heart of our digital technologies, and as more of our communication is mediated by such technologies, it has become a core tool for expressing cultural values. It is therefore essential to the cultural resilience and continuity of Indigenous communities that we develop computational methods that reflect and enact our cultural practices and values.
- 6. Apply Ethical Design to the Extended Stack
Culture forms the foundation of the technology development ecosystem or “stack” (Lewis, “Preparations for a Haunting,” 239). Every component of the AI system hardware and software stack should be considered in the ethical evaluation of the system. This evaluation starts with how the materials for building the hardware and energizing the software are extracted from the earth and ends with how they return there. The core ethic should be to do no harm.
- 7. Respect and Support Data Sovereignty
Indigenous communities must control how their data is solicited, collected, analyzed, and operationalized. They decide when to protect it and when to share it, where the cultural and intellectual property rights reside and to whom those rights adhere, and how these rights are governed. All AI systems should be designed to respect and support data sovereignty; one way such a requirement might be made concrete in software is sketched after this list.
- • Open data principles need to be further developed to respect the rights of Indigenous peoples in all the areas mentioned above and to strengthen equity of access and clarity of benefits. This should include a fundamental review of the concepts of “ownership” and “property,” which are the product of non-Indigenous legal orders and do not necessarily reflect the ways in which Indigenous communities wish to govern the use of their cultural knowledge.
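To give technologists a concrete handle on guideline 7, here is a minimal sketch, in Python, of a consent gate in which community-set terms travel with the data they govern. It is a hypothetical illustration only, not a description of any existing system or standard: the names (`DataItem`, `ConsentStatus`, `request_access`), the consent states, and the default-protected policy are all our own assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum


class ConsentStatus(Enum):
    """Consent states defined and assigned by the custodian community."""
    PROTECTED = "protected"        # must remain under community custodianship
    SHARED = "shared"              # usable, but only for permitted purposes
    UNDER_REVIEW = "under_review"  # decision pending; treated as protected


@dataclass
class DataItem:
    """A record whose governance terms travel with it."""
    content: object
    custodian_community: str   # the community holding authority over this data
    consent: ConsentStatus
    permitted_purposes: set = field(default_factory=set)


def request_access(item: DataItem, purpose: str) -> bool:
    """Grant access only on terms the custodian community has set.

    Anything not explicitly shared defaults to protected: the burden is
    on the system to demonstrate permission, not on the community to
    demonstrate harm.
    """
    if item.consent is not ConsentStatus.SHARED:
        return False
    return purpose in item.permitted_purposes


# Example: a story recording shared for language revitalization only.
recording = DataItem(
    content="<audio>",
    custodian_community="(a specific nation or community)",
    consent=ConsentStatus.SHARED,
    permitted_purposes={"language-revitalization"},
)
assert request_access(recording, "language-revitalization")
assert not request_access(recording, "model-training")
```

The design choice doing the work here is the default: absent an explicit, community-set grant for a specific purpose, the answer is no.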
The IP-AI Workshops produced a number of conceptual prototypes exemplifying these guidelines in action. Ashley Cordes outlined a vision of how blockchain combined with AI could help her Coquille community assert sovereignty and self-determination over its economy by creating contracts customized to express traditional notions of “trust and care.” Suzanne Kite drew on Lakota protocol for building sweat lodges to map the steps necessary to build computer hardware in “A Good Way” (Kite, 75). One of this chapter’s coauthors, Michelle Lee Brown, looked to relations between Euskaldunak (Basque people) and eels to design an immersive environment through which one can learn community protocols from a virtual eel elder. And a team collaborated to create the Hua Kiʻi app, which recognizes objects in images and names them in the Hawaiian language, drawing on Hawaiian community protocols for verifying the appropriateness of different translations.
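We do not have access to Hua Kiʻi’s internals, so the following is only a sketch of the general pattern its description suggests: a recognizer produces an object label, and the system surfaces a Hawaiian term only if that term has passed the community’s own verification process. The `Translation` type, the hard-coded lexicon, and the `name_object` function are illustrative stand-ins, not the app’s actual design.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Translation:
    """A candidate Hawaiian term and its community-review status."""
    term: str
    verified_by_community: bool  # set only through the community's review process


# Stand-in lexicon; a real system would query a community-maintained
# resource rather than a hard-coded table.
LEXICON: Dict[str, List[Translation]] = {
    "dog": [Translation("ʻīlio", verified_by_community=True)],
    "water": [
        Translation("wai", verified_by_community=True),
        Translation("kai", verified_by_community=False),  # still under review
    ],
}


def name_object(label: str) -> Optional[str]:
    """Return a community-verified Hawaiian name for a recognized object
    label, or None: when no verified term exists, the system defers to
    the community rather than guessing."""
    for candidate in LEXICON.get(label, []):
        if candidate.verified_by_community:
            return candidate.term
    return None


print(name_object("dog"))    # ʻīlio
print(name_object("canoe"))  # None: defer, don't guess
```

The point of the verification flag is that authority over the lexicon sits with the community’s reviewers, not with whichever model produced the label.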
Technology Futures Built with Traditional Practices
There are approximately 370 million Indigenous people living in over ninety countries worldwide, according to the World Bank and other international organizations; even more assert their sovereignty outside of nation-state recognition and remain connected to their lands, waters, and each other through recognition protocols and alliances. Indigenous voices are powerful and interconnected across the globe. For too long, however, our voices have been silenced, and their absence “has resulted in an overwhelming statistical narrative of deficit for dispossessed Indigenous peoples around the globe” (Walter and Suina, 233). Although Indigenous peoples and nations differ vastly in terms of their languages, cultures, autonomy, and wealth, we suffer from the shared challenges of representation, alienation, and health disparities as a result of colonization.
Given this legacy of oppression and ongoing concerns with the digital divide, cultural and intellectual property rights, and the misappropriation and use of data about Indigenous peoples, lands, and cultures, continuing the conversation about the ethical design and use of AI for Indigenous peoples is necessary. If we continue to think about AI systems only through techno-utilitarian lenses, at best we risk burdening them with the prejudices and biases that we ourselves still retain. At worst, we risk creating relationships with them akin to that of enslaver and enslaved (Lewis et al.). And while we formulated the guidelines to address our Indigenous communities first and foremost, we believe that they articulate good practices for the ethical design of AI generally.
If we are to envision futures for the digital humanities that are truly interdisciplinary across the humanities, arts, social and natural sciences, and engineering and technology, these conversations and gatherings present opportunities for decolonizing processes to occur with Indigenous scholars and communities. In the digital sphere, there can be no conversation about our communities without our communities, and no conversation that does not center Indigenous protocols, reorienting designers and developers toward futures-thinking rooted in traditional knowledge. Technology futures built with traditional practices orient people, as Bryan Kamaoli Kuwada notes, “back to the right timescale, so that they can understand how they are connected to what is to come.”
Notes
1. In the past few years, Australia, Canada, China, Denmark, the EU Commission, Finland, France, Germany, India, Italy, Japan, Kenya, Malaysia, Mexico, New Zealand, the Nordic-Baltic Region, Poland, Russia, Singapore, South Korea, Sweden, Taiwan, Tunisia, the United Arab Emirates, and the United Kingdom have released strategies to promote the use and development of AI (see Dutton). Examples are the Montreal Declaration, the Toronto Declaration, and the EU Declaration of Cooperation on Artificial Intelligence.
2. The IP-AI Workshops Organizing Committee consisted of Jason Edward Lewis, Angie Abdilla, ʻŌiwi Parker Jones, Noelani Arista, Suzanne Kite, and Michelle Brown.
3. Participants were Angie Abdilla, Noelani Arista, Kaipulaumakaniolono Baker, Brent Barron, Scott Benesiinaabandan, Michelle Lee Brown, Melanie Cheung, Meredith Coleman, Ashley Cordes, Joel Davison, Kūpono Duncan, Rebecca Finlay, Sergio Garzon, Fox Harrell, Peter-Lucas Jones, Kekuhi Kealiikanakaoleohaililani, Megan Kelleher, Suzanne Kite, Olin Lagon, Jason Leigh, Maroussia Levesque, Jason Edward Lewis, Keoni Mahelona, Caleb Moses, Isaac ʻIkaʻaka Nāhuewai, Kari Noe, Danielle Olson, ʻŌiwi Parker Jones, Caroline Running Wolf, Michael Running Wolf, Marlee Silva, Skawennati, Hēmi Whaanga, and Tyson Yunkaporta.
Bibliography
Arnold, David. “Europe, Technology, and Colonialism in the 20th Century.” History and Technology 21, no. 1 (2005): 85–106, https://doi.org/10.1080/07341510500037537.
Davidson, Cathy N., and Danica Savonick. “Digital Humanities: The Role of Interdisciplinary Humanities in the Information Age.” In The Oxford Handbook of Interdisciplinarity, 2nd ed., edited by Robert Frodeman, Julie Thompson Klein, and Roberto C. S. Pacheco, 159–72. Oxford: Oxford University Press, 2017.
Dutton, Tim. “An Overview of National AI Strategies.” Medium. June 28, 2018, https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd.
Gavaghan, Colin, Alistair Knott, James Maclaurin, John Zerilli, and Joy Liddicoat. Government Use of Artificial Intelligence in New Zealand. Wellington: New Zealand Law Foundation, 2019.
Guiliano, Jennifer, and Carolyn Heitman. “Difficult Heritage and the Complexities of Indigenous Data.” Journal of Cultural Analytics 4, no. 1 (2019): 1–25, https://doi.org/10.22148/16.044.
Hurley, Angelina. Respect, Acknowledge, Listen: Practical Protocols for Working with the Indigenous Community of Western Sydney. Liverpool, New South Wales: Community Cultural Development NSW, 2003.
IEEE. “The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems, Version 2.” 2017, https://standards.ieee.org/industry-connections/ec/ead-v1/.
Kite, Suzanne. “How to Build Anything Ethically.” In Indigenous Protocol and Artificial Intelligence Position Paper, edited by Jason Edward Lewis, 75–84. Honolulu: The Initiative for Indigenous Futures and the Canadian Institute for Advanced Research (CIFAR), 2020.
Kukutai, Tahu, and John Taylor, eds. Indigenous Data Sovereignty: Toward an Agenda. Center for Aboriginal Economic Policy Research (CAEPR) Monograph Series. Canberra: ANU Press, 2016.
Kuwada, Bryan Kamaoli. “We Live in the Future. Come Join Us.” Ke Kaupu Hehi Ale. April 3, 2015, https://hehiale.wordpress.com/2015/04/03/we-live-in-the-future-come-join-us/.
Lewis, Jason Edward, ed. Indigenous Protocol and Artificial Intelligence Position Paper. Honolulu: The Initiative for Indigenous Futures and the Canadian Institute for Advanced Research (CIFAR), 2020.
Lewis, Jason Edward. “Preparations for a Haunting: Note Towards an Indigenous Future Imaginary.” In The Participatory Condition in the Digital Age, edited by Darin Barney, Gabriella Coleman, Christine Ross, Jonathan Sterne, and Tamar Tembeck, 229–49. Minneapolis: University of Minnesota Press, 2016.
Lewis, Jason Edward, Noelani Arista, Archer Pechawis, and Suzanne Kite. “Making Kin with the Machines.” Journal of Design and Science (July 2018), https://doi.org/10.21428/bfafd97b.
Rahwan, Iyad, Manuel Cebrian, Nick Obradovich, Josh Bongard, Jean-François Bonnefon, Cynthia Breazeal, Jacob W. Crandall, Nicholas A. Christakis, Iain D. Couzin, Matthew O. Jackson, et al. “Machine Behaviour.” Nature 568, no. 7753 (2019): 477–86, https://doi.org/10.1038/s41586-019-1138-y.
Renda, Andrea. Artificial Intelligence—Ethics, Governance and Policy Challenges (Report of a CEPS Task Force). Brussels: Centre for European Policy Studies, 2019, https://www.ceps.eu/download/publication/?id=10869&pdf=AI_TFR.pdf.
Schwab, Klaus. “The Fourth Industrial Revolution: What It Means, How to Respond.” World Economic Forum. January 14, 2016, https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/.
Secretariat of National Aboriginal and Islander Child Care. “Cultural Protocols—Supporting Carers.” 2019, https://supportingcarers.snaicc.org.au/connecting-to-culture/cultural-protocols.
Stone, Peter, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, et al. “Artificial Intelligence and Life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel.” Stanford University, September 2016.
United Nations Department of Economic and Social Affairs. State of the World’s Indigenous Peoples: Indigenous People’s Access to Health Services. New York: United Nations, 2015, https://www.un.org/esa/socdev/unpfii/documents/2016/Docs-updates/The-State-of-The-Worlds-Indigenous-Peoples-2-WEB.pdf.
Walsh, Toby, Neil Levy, Genevieve Bell, Anthony Elliott, James Maclaurin, Iven Mareels, and Fiona Woods. The Effective and Ethical Development of Artificial Intelligence: An Opportunity to Improve Our Wellbeing. Melbourne: Australian Council of Learned Academies, 2019.
Walter, Maggie, and Michele Suina. “Indigenous Data, Indigenous Methodologies and Indigenous Data Sovereignty.” International Journal of Social Research Methodology 22, no. 3 (2019): 233–43, https://doi.org/10.1080/13645579.2018.1531228.
World Bank. “Indigenous Peoples: Overview.” Updated April 14, 2022, https://www.worldbank.org/en/topic/indigenouspeoples.