
Debates in the Digital Humanities 2016


35

Toward a Cultural Critique of Digital Humanities

Domenico Fiormonte

The English texts of non-native writers cannot be assumed to reflect their vernacular discourses.

—A. Suresh Canagarajah, Geopolitics of Academic Writing

[Nisargadatta] Maharaj said quite often that books get written; they are never written by authors. Only a little thought is necessary to see the truth of what he meant. He was NOT referring only to books on spiritual matters; he was referring to all books. In the overall functioning of the manifested universe, whatever was necessary as written or spoken words appeared spontaneously. . . . No credit or blame could attach to any individual writer for the simple reason that the individual is a mere illusion and has no autonomous existence.

—Ramesh S. Balsekar, Experience of Immortality

The Last Dinosaurs

Is there a non-Anglo-American digital humanities (DH), and if so, what are its characteristics? A number of eminent European scholars, especially Dino Buzzetti, Tito Orlandi, and Manfred Thaller, identify differences in methodology as the key criterion distinguishing several DH approaches (i.e., continental versus Anglo-American DH). I agree that methodological issues are very important, but I do not see our two positions (i.e., methodological versus cultural monopolization) as strongly in conflict. Of course, much depends on what we mean by “culture” and “methodology.” There has always been an attempt in Anglo-American DH/Humanities Computing to maintain a methodological dominion (and dominance) in terms of applications, standards, and protocols. This is natural in any situation of competition. Moreover, to claim that the root of this dominion is cultural is not to deny that a methodological monopolization exists. But where do the “monopolies” come from? Methodologies are sets of interconnected discourses about procedures and rules that arise from dominant cultural visions. Any attempt to distinguish between the two, or to trace their genealogies, reminds me of the obstinate persistence of the Aristotelian form/content dichotomy, and this simple dualistic approach does not reflect the multilayered nature of cultural objects. One of the core assumptions of my own approach to DH is that any human-born knowledge (including computer science) is subject to the cultural law of the artifact (Vygotsky, Mind in Society; Thought and Language). This law affirms that both the material and the cognitive artifacts produced by humans are subject to the influence of the environment, culture, and social habits of the individuals and groups that devise and make use of them. The artifact influences and is at the same time influenced by its context; in other words, technology is always a part of culture, not a cause or an effect of it (Slack and Wise, 4, 112).

Given this perspective, it is clear that answering our opening question (is there a non-Anglo-American DH?) is far less simple than one might expect. From the historical point of view, it would be easy to answer “yes” since, for example, Italian “Informatica Umanistica” has a strong tradition and a long history.1 But from the point of view of scientific results, research projects, and institutional presence, Informatica Umanistica, like most of the “other” DH practiced in the world, practically does not exist. The reason for such a partial or total invisibility (depending, of course, on the country and the observer) is no mystery: the indisputable Anglo-American hegemony in the academic research field. This phenomenon, certainly complex and much debated, is perfectly summarized by Suresh Canagarajah (Geopolitics of Academic Writing) in the famous story of the Chinese dinosaurs. In April 1997, the New York Times published an article titled “In China, a Spectacular Trove of Dinosaur Fossils Is Found.” Although the discovery had been made around one year earlier, the American newspaper reported the news at that moment (April 25, 1997) because it had been publicly announced by Western scientists the day before, during a conference at the Academy of Natural Sciences in Philadelphia:

The discovery had been made by a Chinese farmer. The date he discovered the site is not given anywhere in the report. His name is also not given. The name of the international team and their university affiliations are, on the other hand, cited very prominently. . . . When the newspaper claims that “the spectacular trove was not announced until today” there are many questions that arise in our minds. Announced by whom? To whom? . . . The whole world, it is claimed, knows about the fossils after the announcement at the Philadelphia conference. It is as if the finding is real only when the West gets to know about it. It is at that point that the discovery is recognized as a “fact” and constitutes legitimate knowledge. Whatever preceded that point is pushed into oblivion. (Canagarajah, 1–2)

Having said that, the aim of this chapter is not to question the prestige of Anglo-American colleagues, reverse the current hierarchies, or propose new and more objective rankings of DH work and geopolitical presence. Peripheral cultures do not need any revenge or, worse, any seat at the winner’s table; that is why my aim here is simply to acknowledge a situation, evaluate it on its own terms, and perhaps suggest that a different model is possible.

Forms of the Crisis

In the last ten years, the extended colonization, both material and symbolic, of digital technologies has completely overwhelmed the research and educational world. Digitization has become not only a vogue or an imperative but a normality. In this sort of digital “gold rush,” the digital humanities have perhaps been losing their original openness and revolutionary potential. If we want to win these back and, at the same time, move forward, it is important to start from an analysis of the most significant DH bottlenecks.

The first identifiable gap has to do with the weak tendency of DH to develop what French sociologist Pierre Bourdieu has called “a theoretical model for reflecting critically on the instruments through which we think of reality” (Bourdieu and Chartier, 47). Or rather, when new tools are created, one reflects on their use or their impact, but what is most important, namely their cultural foundation, is only rarely considered. In other words, it is as if DH has always started from the “results” without considering the entire process that led to them. Alan Liu, in the previous volume of Debates, expressed concerns about the “lack of cultural criticism” (Liu, 492) in DH and, with a very apt image (in complete contrast to mainstream tendencies), called for the foundation of an “intellectual infrastructure” for the digital humanities. By underlining, among other things, the “political” limits of the instrumentalist approach, Liu’s article was a breath of fresh air in the Anglo-American context, even though it did not address the geopolitical imbalance and the economic interests that operate at the heart of the DH system.

The reluctance of DH to reflect on the origins of its objectives has various causes, but there is no doubt that the historical character of the humanities disciplines has contributed to an excessive concentration on conservation, management, and data analysis, while neglecting the more revolutionary contribution (in both a positive and a negative sense) of computing and its capacity to affect research processes even before they produce anything.2

Another, more concrete, limitation concerns the geopolitical and the cultural-linguistic composition of the discipline, and hence the tools used (Fiormonte, “Il dibattito”). The problems here, although deeply entangled, are of two types:

a) The composition of the governing bodies, institutions, etc., that inspire and manage the processes, strategies, and ultimately the research methodologies (thus also affecting the visibility of results).

b) The cultural-linguistic nuances and features of the tools (see Fiormonte, “Il testo digitale”; “Chi l’ha visto?”).

Within this second category one can also identify:

b1) The cultural and political problem of software and platforms (e.g., social networks) almost exclusively produced in the Anglo-American environment.

b2) The cultural-semiotic problem of the different tools of representation, from the icons of the graphical interfaces to the Unicode standards, from the proxemics of Second Life to the universal concept of usability, etc.

The following section will focus primarily on DH governance and on the cultural-semiotic problems, setting aside for reasons of space the other important issues. Proceeding in order, we start with the institutional and organizational structures.

Geopolitics of DH (and Beyond)

The influences of the coding system are in general pervasive, because they are usually accepted as the unquestioned standard. Each medium and its corresponding technical realization, as Harold Innis explained, implies a bias of communication and is subject to the “cultural law” mentioned previously. A banal example is the long dominion of the 7-bit ASCII code (American Standard Code for Information Interchange), which has been the character set used by most computing platforms—the Web included—for more than forty years. The same technocultural bias also affects most of the services and instruments of the network (e.g., the domain name system). For the last forty years, it has not been possible to use accented vowels in a URL, and in spite of IETF and ICANN efforts3 the new Internationalized Domain Names in Applications (IDNA) system can be implemented only in applications that are specifically designed for it and is hardly used in Latin-alphabet-based URLs. Some of the original top-level domains can be used only by U.S. institutions. For example, a European university cannot use the top-level domain .edu, which was and still is reserved for U.S. academic institutions. The domain .eu could not be used as a top-level domain until 2006, and applications for top-level domains using characters outside of ISO Latin were invited only more recently (requests were open from January 12 to April 12, 2012). The Internet Corporation for Assigned Names and Numbers (ICANN) finally allowed the opening up of top-level domains to Arabic or Chinese characters, as included in Unicode, but every decision seems to be in the hands of an organization under the clear control of Western (and especially U.S.) industries and governments. We see evidence of this control in many ways. For example, the request procedure is very complicated, many of the rules are described only in English, the cost of an application is $185,000, and payment does not guarantee that the request will be accepted. The applying institution must demonstrate a clear technical and financial capability, certified at ICANN’s own discretion. The problem is that ICANN, although it has always taken decisions of global relevance, still lacks clear institutional and multistakeholder accountability.4 ICANN was founded in Marina del Rey, California, in September 1998, and up to 2009 was controlled by the U.S. Department of Commerce.5 Until 2012 the CEO of ICANN was Rod Beckstrom, former director of the National Cybersecurity Center (NCSC) at the Department of Homeland Security—an impeccable pedigree for a cybercop but a less-than-desirable qualification for a manager of a shared resource such as the Internet.6
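To make this bottleneck concrete, consider how IDNA actually works: it does not extend the DNS to new characters but maps non-ASCII labels onto ASCII-only “punycode” strings, so that the legacy infrastructure never handles anything beyond the original 7-bit repertoire. A minimal sketch in Python, using the standard library’s built-in “idna” codec (the accented label is an arbitrary example):

```python
# IDNA maps non-ASCII labels onto ASCII-only "punycode"; the underlying
# DNS keeps speaking the 7-bit repertoire described above.
label = "münchen"                  # a label with an accented vowel
ascii_form = label.encode("idna")  # ASCII-compatible encoding
print(ascii_form)                  # b'xn--mnchen-3ya'

# Only IDNA-aware applications reverse the mapping; any other software
# sees, stores, and displays the opaque ASCII string.
print(ascii_form.decode("idna"))   # münchen
```

The accented form, in other words, exists only at the presentation layer of applications designed for it: precisely the asymmetry described above.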

And in the areas closest to the hearts of humanists, the power structures do not appear any less discouraging. Unicode is a case in point. By its own definition, the Unicode Consortium is, at least in theory, a nonprofit organization “devoted to developing, maintaining, and promoting software internationalization standards and data, particularly the Unicode Standard, which specifies the representation of text in all modern software products and standards.”7 Its board of directors is currently composed of representatives from Google (one member), Microsoft (two members), Apple (one member), Intel (one member), IBM (one member), and IMS/Appature (one member). The president of the Executive Office is Mark Davis, cofounder of Unicode and a Google engineer since 2006. UC Berkeley is the only educational organization represented (by one of several technical officers),8 and no public institution appears to be represented at either the technical or the managing level. Seen realistically, Unicode is an industrial standard controlled by the industry, and claims about the neutrality or impartiality of this organization appear at the very least questionable.

If this can be taken as a credible example of the global situation, it is clear that digital humanities must also be affected, as most of us are paid by public institutions that have educational aims but are forced to work with protocols, standards, and tools originally designed for the commercial interests of (mostly Anglophone) private companies.

Compared to a survey carried out in 2001 (Fiormonte, “Il dibattito”), and even though much effort has been expended on making existing DH associations and organizations more international, the impression remains the same: a solid Anglo-American stem onto which individuals from mostly European countries are grafted. Figure 35.1 shows how the boards and committees of the eight top international DH organizations (four associations, one network, one consortium, and two journals) are composed.9 The data are organized by country of institutional affiliation—that is, what is shown is not the country of origin of each member but the place where that individual appears to work. Figure 35.2 aggregates data from the same organizations and shows the effect of multiple appointments; namely, how committees and boards tend to replicate themselves, sometimes appointing the same people to up to five different organizations. These roughly collected data may be insufficient to demonstrate that the current top DH organizations suffer from ethnocentrism, but they certainly point to a geopolitical and linguistic imbalance.

Figure 35.1. DH organizations: presence of individuals by country of institutional affiliation.

Of course, initiatives such as CenterNet, ADHO (Alliance of Digital Humanities Organizations10), and CHAIN (Coalition of Humanities and Arts Infrastructures and Networks) have the merit of gathering and registering the major realities of the Anglophone axis (USA–Canada–U.K.–Australia), but this is a self-strengthening operation of existing identities rather than actual knowledge sharing or an exploration of other cultures, methodologies, or practices.

Consider also the monolingualism of the aforementioned sites and organizations. Their rhetorical structure leaves no space for anything but the “inner” Anglo-American rhetoric and academic narrative (Canagarajah, 109–27). Furthermore, the self-representations of some initiatives, such as Melissa Terras’s “flatland” infographic, have contributed to presenting the digital humanities as an empire made of two macro-kingdoms, the United States and the United Kingdom, around which orbit a few satellites.

This sort of universalistic representation (or self-representation) seems only to reveal the actual state of subordination from which non-English-speaking digital humanists suffer: a situation that is triggered the very moment we use the label “digital humanities.”

One exception to this scenario is the THATCamp11 un-conference series, which is becoming a good opportunity for peripheral communities to share alternative views of what the digital humanities are or could be.12 This seemed to be the case with the Humanistica.eu initiative, a project launched at THATCamp Florence in 201113 to create a European association of DH: a “new common space for nurturing and practicing this discipline from a genuine multi-cultural and multi-lingual perspective,” as can be read on its website.

Figure 35.2. Top list of multiple or cross-appointments. People who appear in five organizations are shown in cross-hatching; those in four organizations, with dots; and those in three organizations, with stars.

From this perspective, I could go on with further considerations regarding the cultural and epistemic bias implied in markup languages as well as in the solutions proposed by the TEI.14 However, I would rather focus on relaunching the fundamental question of the importance, especially in the humanities and social sciences, of residual categories.

Any attempt to create an obligatory system of classification, rigid and universal, will result in residual categories. . . . It is necessary to root the awareness of what happens every time one tries to standardize. In other words, that in this creation there is someone who wins and someone who loses. This is not a simple question, nor a matter easy to analyze. (Bowker and Star, “Intervista su società,” 13)

The problem of the crisis (which is also one of self-esteem) in the humanities15 could be summarized as the constitutive necessity of continuing to exist, of being always on the margins, a hybrid, a variant of the system. And it is here that the first obstacle arises: the potential friction between the role of DH and that of the humanities. It is clear that a revival, or at least a revitalization, that is not a simple defense of what already exists cannot be realized without a critique of the economic and geopolitical interests that lie behind the universe of the Internet and its applications. The digital humanities thus appear to be the victim of a continuous paradox: having to demonstrate an ability to keep up with technologies (and with their owners and gatekeepers) while, at the same time, not becoming subject to them.

Figure 35.3. M. Terras’s DH Graph. The complete graph is available online: http://melissaterras.blogspot.in/2012/01/infographic-quanitifying-digital.html.

Standards and Cultural Hegemonies

According to Geoffrey Bowker and Susan Leigh Star, “classifications and standards are material, as well as symbolic,” and their control “is a central, often underanalyzed feature of economic life” (Bowker and Star, Sorting Things Out, 15, 39). In their studies the two sociologists show how classification techniques (and the standards generated from them) have always played a fundamental economic and sociocultural role. Current digital technology standards appear to be the result of a double bias, one technical and one cultural (geopolitical). The two are so entangled that it is almost impossible to discern where the technological choice begins and where the cultural prejudice ends.

As the lexicographer and blogger José Antonio Millán noticed more than fifteen years ago, “while networks are the highways of digital goods and service flows, technologies linked to the user’s language are their compulsory tolls” (Millán, 140). Thus, at the roots of economic, social, and political primacy we do not find “just” technology, but rather the mix of copyrighted algorithms and protocols that manipulate and control languages. In this perspective, standards are the result of a balance of powers.16 Presiding over linguistic technologies has thus become both a profitable business and a geopolitical matter. As Millán states, for many countries, not investing in this sector presently means being forced to pay to be able to use one’s own language.

“Localization still matters”: the researchers of the Language Observatory Project (http://gii2.nagaokaut.ac.jp/gii/blog/lopdiary.php) have noted that although Unicode is recognized as a step forward for multilingualism, “many problems in language processing remain”:

The Mongolian language, for example, is written either in Cyrillic script or in its own historical and traditional script, for which at least eight different codes and fonts have been identified. No standardisation of typed fonts exists, causing inconsistency, even textual mistranslation, from one computer to another. As a result, some Mongolian web pages are made up of image files, which take much longer to load. Indian web pages face the same challenge. On Indian newspaper sites proprietary fonts for Hindi scripts are often used and some sites provide their news with image files. These technological limitations prevent information from being interchangeable, and lead to a digital language divide. (Yoshiki and Kodama, 122–23)

The Italian semiologist Antonio Perri has offered convincing examples of the cultural bias of the Unicode character-representation system, showing the concrete risks of oversimplifying and drying up the “phenomenological richness of human writing practices” (Perri, 747). Perri analyzed a number of encoding solutions proposed by the Unicode Consortium for different problems related to Indian subcontinental scripts, Chinese, Arabic, and Hangul (Korean writing). In all these cases, besides being excessively dependent on visualization software, which raises problems of portability, the Unicode solutions were shown to rest on a “hypertypographic” concept of writing—that is, Western writing embodied in its print form and logical sequencing. By neglecting the visual features of many writing systems, this view overlooks their important functional aspects. Perri gives a striking example of this bias when discussing Unicode’s treatment of ligatures and of the position of vowel characters in the Devanagari Indic script. In Indian systems, aspects of a graphic nature often prevail over the reading order of the graphemes. As shown in Figure 35.4, in the second glyph the pronunciation order and the graphic sequence are reversed. Unicode experts, however, argue that Indic scripts are represented in their system according to a “logical scheme” that ignores “typographic” details. Perri concludes:

But why on earth should the order of characters corresponding to the phonetic segment be considered logical by an Indian literate? Who says that the linearity of Saussure’s alphabetic signifier should play a role in his writing practices? . . . It is therefore all too evident that the alphabetic filter, the rendering software and the automatic process of normalization of Indic scripts are the result of a choice that reflects the need for structural uniformity as opposed to the emic cultural practices of the real user. (Perri, 736; our translation)

Figure 35.4. Two graphemes of Devanagari Indic script as shown in Perri, 735.
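Perri’s point about the “logical scheme” can be verified directly in the encoding itself. A minimal sketch in Python (the code points are standard Unicode Devanagari; the syllable is chosen only for illustration):

```python
# Unicode stores Devanagari in "logical" (phonetic) order: consonant first,
# then the vowel sign, even when the vowel sign is rendered to the LEFT of
# the consonant, as with vowel sign I (U+093F).
ka = "\u0915"      # DEVANAGARI LETTER KA
sign_i = "\u093F"  # DEVANAGARI VOWEL SIGN I

syllable = ka + sign_i                  # storage order: consonant, then vowel
print(syllable)                         # renders with the vowel mark on the left
print([hex(ord(c)) for c in syllable])  # ['0x915', '0x93f']: the stored order

# The visual reordering is delegated entirely to the rendering engine,
# the very dependence on visualization software that Perri criticizes.
```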

Unfortunately, the problem of cultural primacy overflows linguistic boundaries. The pervasiveness of cultural representations and metaphors belonging to the Anglo-American context in all technological appliances and computing tools has been a well-known tendency since at least the 1960s. Many familiar elements borrowed from everyday U.S. life were exported to the computer world. We are not speaking here of programming languages or algorithms, where the deep semiotic bias is intrinsically evident (Andersen), but of the “superficial” (and no less subtle) world of icons and graphical user interfaces (GUIs). One example is the manila folder, a ubiquitous object in American offices that owes its name to a fiber (manila hemp) commonly used in the Philippines for making ropes, paper products, and coarse fabrics. An object from a removed colonial past became, thanks to the Xerox Star desktop,17 the metaphor for any computing content: a symbol that conceals the bureaucratic origins of the desktop computer and its unique ties to the cultural imagination of the average U.S. customer. Other examples of symbolic cyber-colonization are Second Life’s facial expressions and user-playable animations, where we find body-language gestures that can be deciphered only by expert American native speakers.18 Take, for example, the famous “kiss my butt” animation (see Figure 35.5), where both the verbal expression and the body posture are linked to North American culture and would suggest (at best) deceptive meanings to other cultures.

Figure 35.5. The “kiss my butt” gesture in Second Life.

Language Differences and Global Inequalities

Our last example is not a real example, but a comparative experiment based on two graphic representations. The first image (Figure 35.6) is a map of world income inequalities from the University of California Atlas of Global Inequality database. The second world map (Figure 35.7) is a Wikipedia image based on Ethnologue.com sources, representing linguistic diversity in the world: in dark gray (red in the original map) are shown the eight megadiverse countries that together represent more than 50 percent of the world’s languages, and in lighter gray (blue in the original), areas of great diversity.

Figure 35.6. World Gross Domestic Product in 2004. Source: https://ccrec.ucsc.edu/news_item/uc-atlas-global-inequality-0.

Figure 35.7. Linguistic diversity in the world. Source: http://en.wikipedia.org/wiki/Linguistic_diversity#Linguistic_diversity.

If we overlap these two maps, we notice that—excluding Australia, where linguistic diversity is due to the enormous number of immigrants from all continents19—the lower-income countries of the first map in many cases coincide with the areas of greater linguistic diversity. In other words, cultural richness does not necessarily match material wealth.20

The comparison between the two maps we have proposed does not seek to suggest superficial and easy conclusions; however, it is legitimate to believe that in some of the poorest areas of the world, in deserts, jungles, and mountains at the margins of our globalized society, a handful of communities continue to cultivate the last resource still entirely in their own hands: language.

Finally, it is not surprising that the world income map also overlaps with the “Quantifying Digital Humanities” infographic produced by the University College London Centre for Digital Humanities: two visual representations of a high-income Western sphere (mostly Anglophone) leading the planet. This seems to confirm Millán’s hypothesis on the strict relation between economic hegemony, technological concentration, and linguistic impoverishment, raising the still-unaddressed question of the internal and external digital humanities divide in Western countries.

Beyond the Alphabetic Machine?

What are the role and the position of DH in the geopolitical scenario presented so far? Notwithstanding the unquestionable expansion of the discipline (Gold), I have the impression that DH has been particularly slow in absorbing the social and political impact of its choices and discourses (the “dark sides” of DH; see Chun and Rhody). Maybe this also has to do with the inevitable repression of a too-bitter truth: so far, the digital humanities have succeeded neither in strengthening the humanities nor in bringing some balance to the power relationship between the humanities and computer science.

If, on the one hand, the perspective of “formal methods”21 did not manage to establish an equal dialogue between the humanities and computer science, on the other hand it made computer science too shortsighted and even hostile toward so-called digital cultures, relegating the latter to a merely “sociological” question. As research by ACO*HUM22 (De Smedt et al.) pointed out in the nineties, the computer is a “universal machine,” and the application of formal methods is the lowest common denominator of DH. However, oral and written discourse in all its forms is not reducible or ascribable to a logical structure or “model” (Buzzetti); it reflects and implies a dynamic interaction between producers, codes, material supports, and audiences. Meaning does not emerge simply from the twin processes of analysis and modeling but from cooperation (Halliday). In other words, a discourse is a cultural artifact made of syntax, semantics, and above all pragmatics, and that is why the data of human culture are so hard to formalize.

On the other hand, the dependence of the machine on the alphabet goes beyond the mere difficulty, or impossibility, of becoming independent of the print model. Indeed, according to Giuseppe Longo, this dependence seems to be inscribed in the very DNA of the machine:

So I would like to readdress the fact that the roots of this machine are very old and can be found in the alphabet. First of all, 5,000–6,000 years ago, the alphabet was, for different reasons, an invention comparable to the computer-mediated discretization of knowledge we have now performed. Think of the originality of these first social groups from Mesopotamia who fractioned the linguistic flux, a continuous spoken song, marking certain pitches as the first consonants (C. Herrenschmidt et al. 1996). It was the onset of development and a culture which were quite different to those inherent in the hieroglyphic writing of ideograms which proposed concepts or evoked whole images, situations, or feelings, by means of drawings. Conversely, the alphabet discretizes, subdivides continuous language into insignificant atoms, into the bits which are letters. (Longo, 58)23

As Longo reminds us, the present computational dimension is not the manifest destiny of humankind. Humanists can join other pioneering scientists around the globe who are starting to think “of the next machine: history is not over, with digital computability” (Longo, 60). The implicit flattening produced by technological, commercial, and industrial policies, as well as the essentially monocultural origins of our logical and symbolic representations, are obstacles to the expansion of DH beyond a merely instrumental function. I agree with Alan Liu (“Where Is Cultural Criticism”) that in order to extend its range of action and be legitimated as an actual discipline, DH needs to hybridize with other disciplines,24 such as the social sciences (from science and technology studies to mediology25) and cultural anthropology (especially the strand dealing with cultural artifacts, from André Leroi-Gourhan to Jack Goody, from the ethnography of James Clifford and George E. Marcus to the digital ethnography of Michael Wesch26).

But perhaps the most urgent issue is to stop regarding the methodological and the sociocultural questions as separate; in other words, to stop thinking, paraphrasing Harold Innis, that the digital humanities were born in a vacuum:

Innis happily accepted as a starting point the inevitably ethnocentric bias of social science. . . . He recognized that scholarship was not produced in a historical and cultural vacuum but reflected the hopes, aspirations, and heresies of national cultures. American and British scholarship was based, he thought, on a conceit: it pretended to discover Universal Truth, to proclaim Universal Laws, and to describe a Universal Man. Upon inspection it appeared, however, that its Universal Man resembled a type found around Cambridge, Massachusetts, or Cambridge, England: its Universal Laws resembled those felt to be useful by Congress and Parliament; and its Universal Truth bore English and American accents. Imperial powers, so it seems, seek to create not only economic and political clients but intellectual clients as well. (Carey, 149)

So, are we digital humanists intellectual clients, dinosaurs, or “the next big thing”?27 Personally, I would prefer not to choose among these options. Instead, I would like to think of DH as a cultural and political project. We could start with three basic steps: (a) stop being obsessed with large-scale digitization projects and “archiving fever” (Derrida), which will only increase our dependency on private industry standards, products, and, of course, funding; (b) improve and cultivate the margins—that is, give more attention to our variegated local cultural and linguistic diversity; and (c) help to elaborate a new concept of knowledge as commons. As for (c), Hess and Ostrom provide a set of design principles for common-pool resource institutions:

Clearly defined boundaries should be in place.

Rules in use are well matched to local needs and conditions.

Individuals affected by these rules can participate in modifying the rules.

The right of community members to devise their own rules is respected by external authorities.

A system for self-monitoring members’ behavior has been established.

A graduated system of sanctions is available.

Community members have access to low-cost conflict resolution mechanisms.

Nested enterprises—that is, appropriation, provision, monitoring, sanctioning, conflict resolution, and other governance activities—are organized in a nested structure with multiple layers of activities. (Hess and Ostrom, 7)

Some of these rules could translate, for example, into a completely different governance system, negotiated and monitored dynamically by linguistically and culturally diverse communities rather than by a particularly powerful or successful academic group (today the Anglophone countries dominate ADHO, but tomorrow it could be China or other emerging global players). It is not acceptable for one organization to function as a private club that decides who can become a member and who cannot. This is an old-fashioned notion of academic community that does not reflect the present complexity of DH (Fiormonte, “Dreaming of Multiculturalism”). But it is also a matter of democracy, as the global scenario does not justify the existence of groups working as legitimacy dispensers or “brand” guardians. In fact, knowledge commons does not only mean that knowledge (from both an abstract and a practical point of view) should be treated as a common resource; it means that communities need to sit down each time and collectively, dynamically redefine what knowledge(s) is (are) and how each piece of knowledge could be digitally represented. This does not mean, for example, refusing standards, but rather avoiding standards that take over the objects, practices, and processes they are supposed to serve.

Digital humanists, then, are not only responsible for taking care of digital representations of cultural objects; they should also engage in reducing the political, economic, and social imbalances produced by those (always partial) representations. If the DH community were to start discussing the possibility of applying some of these principles to its own organizations, projects, and products, a completely new way of thinking and researching would emerge—more respectful of our mutual cultures, more democratic, and more powerful.

Postscriptum

“Did we tell our stories faithfully?” This is a question raised (and tentatively answered) by Lisa Marie Rhody when she wrote on “digital humanities as chiaroscuro” (Chun and Rhody, 16), and it was perhaps with a similar question in mind that I started to work on this material in 2012, following the invitation of Manfred Thaller to the “Cologne Controversies around the Digital Humanities” that same year. I am grateful to Matt Gold for asking me, three years ago, if I was interested in republishing it, and I am proud to see it now anthologized in this new edition of Debates in the Digital Humanities. For a number of reasons I decided to make only minor revisions to the original text.28 I am aware that some of the key figures provided in this chapter have changed and that the present scenario is moving (hopefully) toward organizations that are more culturally inclusive. However, the reason this article became “infamous” in several DH “inner circles” is precisely that three years ago those figures were real. I could not update or delete them without changing the whole sense of the piece. Perhaps in my defense I can quote Daniel O’Donnell, current GO::DH chair, who wrote on Humanist: “Domenico’s work has in fact greatly influenced our thinking, his challenges are extremely useful in helping as part of the ongoing process of defining the [GO::DH] project.”29

But if “Toward a Cultural Critique” has somehow influenced the choices made by ADHO, much still needs to change. All efforts should be recognized and encouraged, but I think we should remember that ADHO is not a democratic organization with elected members, as are most academic organizations in the world (including ADHO’s own member associations). It is still a strange hybrid between an invitation-based private club and a corporate consortium. As I have said on various occasions, I would much prefer to see a “federation of diverse associations” rather than the application of a “unity in diversity” model. That is why I am suspicious of any “global leadership.”

But above all, I think it would be strategic to support and promote South–South dialogue. I would like to recall here the observations of Octavio Kulesz, author of an important survey of digital publishing in developing countries:

Likewise, the electronic solutions that certain countries of the South have implemented to overcome their problems of content distribution can also serve as a model for others, thus facilitating South–South knowledge and technology transfer. . . . Sooner or later, these countries will have to ask themselves what kind of digital publishing highways they must build and they will be faced with two very different options: a) financing the installation of platforms designed in the North; b) investing according to the concrete needs, expectations and potentialities of local authors, readers and entrepreneurs. Whatever the decision of each country may be, the long term impact will be immense. (Kulesz, 16–17)

I think that similar questions can be applied to the DH world.

So, what kind of DH do we want to build?

Notes

Translated from the Italian by Federica Perazzini and Desmond Schmidt.

1. There is not much information available in English on the history of Informatica Umanistica, but Geoffrey Rockwell has effectively outlined the Italian scenario in a recent post on the Tito Orlandi festschrift: http://www.theoreti.ca/?p=4333.

2. Initiatives like the ADHO Special Interest Group GO::DH (http://www.globaloutlookdh.org) and the DH Awards (http://dhawards.org) show that a sensitivity to cultural and linguistic diversity is growing within the DH international community.

3. “ICANN Bringing the Languages of the World to the Global Internet; Fast Track Process for Internationalized Domain Names Launches Nov 16,” news release, October 30, 2009. http://www.icann.org/en/news/announcements/announcement-30oct09-en.htm.

4. This situation has started to change since the 2013 ICANN reform. The organization’s image was visibly restyled, and a new mission statement was provided: “To broaden the range of stakeholders involved in Internet governance. . . .” The present CEO of ICANN is Fadi Chehadé, a Stanford-educated engineer who has been employed by a number of major IT companies throughout his career. According to Richard Hill, president of the Association for Proper Internet Governance and a former ITU senior officer, “for the most part the narratives used to defend the current governance arrangements are about maintaining the geo-political and geo-economic dominance of the present incumbents, that is, of the [United States] and its powerful private companies” (Hill, 35).

5. Frederic Lardinois, “U.S. Department of Commerce Loosens Grip on ICANN,” ReadWrite, September 30, 2009, http://www.readwriteweb.com/archives/commerce_department_loosens_grip_on_icann.php.

6. While Western governments and companies try to preserve their primacy on the Internet, the data show a different scenario. In terms of access to the Internet, as of early 2015, Western countries (Europe and the USA) represent only 35.7 percent of total users, whereas Asia accounts for 44 percent. (Source: http://www.internetworldstats.com/stats.htm.)

7. Unicode Consortium, http://www.unicode.org/consortium/consort.html.

8. Unicode Consortium, “Directors, Officers, and Staff,” http://www.unicode.org/consortium/directors.html.

9. Association for Computers and the Humanities (ACH), Alliance of Digital Humanities Organizations (ADHO), Association for Literary and Linguistic Computing (ALLC), CenterNet (International Networks of Digital Humanities Centers), Digital Humanities Quarterly, Literary and Linguistic Computing, Society for Digital Humanities/Société pour l’étude des médias interactifs (SDH-SEMI), Text Encoding Initiative (TEI).

10. Since July 2014 ADHO’s Steering Committee has seen a substantial change in its composition. Although Anglo-Americans are still dominant, there has been a clear effort toward the inclusion of non-Anglophone voices: http://digitalhumanities.org/administration/steering.

11. http://thatcamp.org.

12. At this writing, a quick look at upcoming THATCamps shows that of the thirty-five THATCamps listed on the website in September 2015, twenty-four will take place in the United States.

13. See the related Manifesto proposed in Paris: http://tcp.hypotheses.org/411. A group of scholars who signed the Manifesto tried in 2013 to map out the geographical composition and linguistic diversity of the field. The results were summarized and commented on by Pierre Mounier, one of the main organizers of the survey: http://bn.hypotheses.org/11179.

14. To deepen this issue, see Schmidt (“Inadequacy of Embedded Markup”); Fiormonte and Schmidt (“La rappresentazione digitale”); Fiormonte, Martiradonna, and Schmidt (“Digital Encoding”).

15. The link between the crisis of the humanities and the role of DH is the central theme of 4Humanities, an advocacy initiative carried out by a group of universities and associations, as well as British, American, Canadian, and Australian research centers: http://4humanities.org/.

16. “On the other hand, our new global information structure is based on classification schemes elaborated within developed countries in order to solve problems particularly connected with the educated élite” (Bowker and Star, “Intervista su società,” 15).

17. “By far its most striking feature was its graphical user interface. . . . The arrangement of folders and icons built around what the Star engineers called the ‘desktop metaphor’ is so familiar today that it seems to have been part of computing forever” (Hiltzik, 364).

18. A complete list of such animations can be found at http://wiki.secondlife.com/wiki/Internal_Animations#User-playable_animations.

19. According to the Australian Bureau of Statistics, the European invasion during the nineteenth and twentieth centuries eradicated both languages and cultures among the aboriginal populations: “Today, there are approximately 22 million Australians, speaking almost 400 languages, including Indigenous languages”; see http://www.abs.gov.au/ausstats/abs@.nsf/Latestproducts/1301.0Feature%20Article32009%E2%80%9310?opendocument&tabname=Summary&prodno=1301.0&issue=2009%9610&num=&view=.

20. This observation seems now confirmed by a study on language extinction drivers: “By contrast, the dominating effect of a single socioeconomic factor, GDP per capita, on speaker growth rate suggests that economic growth and globalization . . . are primary drivers of recent language speaker declines (mainly since the 1970s onwards), for instance, via associated political and educational developments and globalized socioeconomic dynamics” (Amano et al., 8).

21. For a discussion on the formalization of humanities disciplines, see Van Zundert et al.

22. The website is still active and available online: http://www.hd.uib.no/AcoHum/.

23. It is important to note that the author of this J’accuse is a computer scientist and mathematician currently engaged in biological research. Parallel historical-technical support for Longo’s thesis can be found in the studies on the numerical origins of cuneiform writing (see also Schmandt-Besserat, 1996).

24. I suggested a list of possible intersections in Numerico, Fiormonte, and Tomasi, 102–3.

25. A loan from the French médiologie (http://www.mediologie.org/), this term has also spread in Italy; see Pireddu and Serra. According to Régis Debray, mediology “deals with the analysis of the ‘superior social function’ (religion, ideology, art, politics) in their relationships with the transmission means and environments” (Debray).

26. Michael Wesch, Digital Ethnography at Kansas State University, http://mediatedcultures.net.

27. This expression, as applied to DH, became famous immediately after W. Pannapacker used it to describe the 2009 Modern Language Association (MLA) Convention.

28. However, in the following years I continued to investigate many of the issues raised in this article. See, for example, my work from 2013–15: Fiormonte, “Seven Points on DH and Multiculturalism,” “Dreaming of Multiculturalism,” and “Towards Monoculturalism.”

29. Humanist Discussion Group, http://www.dhhumanist.org/cgi-bin/archive/archive_msg.cgi?file=/Humanist.vol26.txt&msgnum=696&start=98646&end=98998.

Bibliography

Amano, Tatsuya, Brody Sandel, Heidi Eager, Edouard Bulteau, Jens-Christian Svenning, Bo Dalsgaard, Carsten Rahbek, Richard G. Davies, and William J. Sutherland. “Global Distribution and Drivers of Language Extinction Risk.” Proceedings of the Royal Society B 281 (September 3, 2014): 20141574: 1–10. http://dx.doi.org/10.1098/rspb.2014.1574.

Andersen, Peter Bøgh. A Theory of Computer Semiotics: Semiotic Approaches to Construction and Assessment of Computer Systems. Cambridge: Cambridge University Press, 1997.

Bottéro, Jean, Clarisse Herrenschmidt, and Jean-Pierre Vernant, eds. L’Orient et nous. L’écriture, la raison, les dieux. Paris: Albin-Michel, 1996.

Bourdieu, Pierre, and Roger Chartier. Le sociologue et l’historien. Paris: Editions Agone et Raison D’Agir, 2010. Italian trans. Il sociologo e lo storico. Dialogo sull’uomo e la società. Bari: Dedalo, 2011.

Bowker, Geoffrey, and Susan Leigh Star. “Intervista su società dell’informazione e disuguaglianze.” Daedalus 19 (2006): 13–20. http://www.ics.uci.edu/~gbowker/interview.pdf.

—. Sorting Things Out: Classification and Its Consequences. Cambridge, Mass.: MIT Press, 1999.

Buzzetti, Dino. “Digital Representation and the Text Model.” New Literary History 33 (2002): 61–88.

Canagarajah, Suresh A. A Geopolitics of Academic Writing. Pittsburgh: University of Pittsburgh Press, 2002.

Carey, James W. Communication as Culture: Essays on Media and Society. New York-London: Routledge, 1992.

Chun, Wendy H. K., and Lisa Marie Rhody. “Working the Digital Humanities: Uncovering Shadows between the Dark and the Light.” Differences: A Journal of Feminist Cultural Studies 25, no. 1 (2014): 1–26.

Debray, Régis. “Qu’est-ce que la médiologie?” Le Monde Diplomatique, August 1999. http://www.monde-diplomatique.fr/1999/08/DEBRAY/12314.

Derrida, Jacques. Archive Fever: A Freudian Impression. Chicago: University of Chicago Press, 1996.

De Smedt, Koenraad, Hazel Gardiner, Espen Ore, Tito Orlandi, Harold Short, Jacques Souillot, and William Vaughan, eds. Computing in Humanities Education: A European Perspective. Bergen: University of Bergen, HIT Centre, 1999. http://www.hd.uib.no/AcoHum/book/.

Fiormonte, Domenico. “Chi l’ha visto? Testo digitale, semiotica, rappresentazione. In margine a un trittico di Dino Buzzetti.” Informatica Umanistica 2 (2009): 21–46.

—. “Dreaming of Multiculturalism at DH2014.” InfoLet (blog), July 7, 2014. http://infolet.it/2014/07/07/dreaming-of-multiculturalism-at-dh2014/.

—. “Il dibattito internazionale sull’informatica umanistica: formazione, tecnologia e primato delle lingua.” Testo e Senso 4–5 (2002): 145–56. http://testoesenso.it/article/view/214.

—. “Il testo digitale: traduzione, codifica, modelli culturali.” In Italianisti in Spagna, ispanisti in Italia: la traduzione. Atti del Convegno Internazionale, ed. P. R. Piras, A. Alessandro, and D. Fiormonte, 271–84. Roma: Edizioni Q, 2008.

—. “Seven Points on DH and Multiculturalism.” InfoLet (blog), May 5, 2015. http://infolet.it/2013/05/05/seven-points-on-dh-and-multiculturalism/.

—. “Towards Monoculturalism in (digital) Humanities?” InfoLet (blog), July 12, 2015. http://infolet.it/2015/07/12/monocultural-humanities/.

Fiormonte, Domenico, Valentina Martiradonna, and Desmond Schmidt. “Digital Encoding as a Hermeneutic and Semiotic Act: The Case of Valerio Magrelli.” Digital Humanities Quarterly 4, no. 1 (2010). http://digitalhumanities.org/dhq/vol/4/1/000082/000082.html.

Fiormonte, Domenico, and Desmond Schmidt. “La rappresentazione digitale della varianza testuale.” In Canoni liquidi, ed. D. Fiormonte, 161–80. Napoli: ScriptaWeb, 2011.

Gold, Matthew K. “The Digital Humanities Moment.” In Debates in the Digital Humanities, ed. M. K. Gold, 9–16. Minneapolis: University of Minnesota Press, 2012. http://dhdebates.gc.cuny.edu/debates/text/2.

Halliday, Michael A. K. “Text as Semantic Choice in Social Contexts.” In Grammars and Descriptions, eds. Teun A. Van Dijk and János S. Petöfi, 176–226. Berlin: Walter De Gruyter, 1977. Reprinted in M. A. K. Halliday, Linguistic Studies of Text and Discourse: The Collected Works of M. A. K. Halliday, vol. 2, 23–81. London; New York: Continuum, 2001.

Hess, Charlotte, and Elinor Ostrom. Understanding Knowledge as a Commons: From Theory to Practice. Cambridge, Mass.: MIT Press, 2011.

Hill, Richard. “The True Stakes of Internet Governance.” In State of Power 2015: An Annual Anthology on Global Power and Resistance, ed. N. Buxton and M. Bélanger Dumontier, 28–37. Amsterdam: Transnational Institute, 2015. http://www.tni.org/stateofpower2015.

Hiltzik, Michael A. Dealers of Lightning. New York: HarperCollins, 1999.

Innis, Harold A. The Bias of Communication. Toronto: University of Toronto Press, 1951.

Kulesz, Octavio. Digital Publishing in Developing Countries. Paris: International Alliance of Independent Publishers/Prince Claus Fund for Culture and Development, 2011. http://alliance-lab.org/etude/?lang=en.

Liu, Alan. “Where Is Cultural Criticism in the Digital Humanities?” In Debates in the Digital Humanities, ed. Matthew K. Gold, 490–509. Minneapolis: University of Minnesota Press, 2012. http://dhdebates.gc.cuny.edu/debates/text/20.

Longo, Giuseppe. “Critique of Computational Reason in the Natural Sciences.” In Fundamental Concepts in Computer Science, eds. E. Gelenbe and J. P. Kahane, 43–69. London: Imperial College Press/World Scientific, 2009. ftp://ftp.di.ens.fr/pub/users/longo/PhilosophyAndCognition/CritiqCompReason-engl.pdf.

Millán, José Antonio. Internet y el español. Madrid: Retevision, 2001.

Numerico, Teresa, Domenico Fiormonte, and Francesca Tomasi. L’umanista digitale. Bologna: Il Mulino, 2010.

Perri, Antonio. “Al di là della tecnologia, la scrittura. Il caso Unicode.” Annali dell’Università degli Studi Suor Orsola Benincasa II (2009): 725–48.

Pireddu, Mario, and Marcello Serra, eds. Mediologia. Una disciplina attraverso i suoi classici. Napoli: Liguori, 2012.

Schmandt-Besserat, Denise. How Writing Came About. Austin: University of Texas Press, 1996.

Schmidt, Desmond. “The Inadequacy of Embedded Markup for Cultural Heritage Texts.” Literary and Linguistic Computing 25, no. 3 (2010): 337–56.

Slack, Jennifer Daryl, and John Macgregor Wise. Culture and Technology: A Primer. New York: Peter Lang, 2005.

Vygotsky, Lev Semënovič. Mind in Society: Development of Higher Psychological Processes. Cambridge, Mass.: Harvard University Press, 1978.

—. Thought and Language. Cambridge, Mass.: MIT Press, 1986.

Yoshiki, Mikami, and Shigeaki Kodama. “Measuring Linguistic Diversity on the Web.” In Net.Lang. Towards the Multilingual Cyberspace, ed. L. Vannini and H. Le Crosnier, 121–39. Caen: C&F Éditions, 2012.

Zundert, Joris Van, Smiljana Antonijevic, Anne Beaulieu, Karina Van Dalen-Oskam, Douwe Zeldenrust, and Tara Andrews. “Cultures of Formalisation: Towards an Encounter between Humanities and Computing.” In Understanding Digital Humanities, ed. D. M. Berry, 279–94. Basingstoke, Hampshire; New York: Palgrave Macmillan, 2012.
