A Telescope for the Mind?
WILLARD MCCARTY
As for those for whom to work hard, to begin and begin again, to attempt and be mistaken, to go back and rework everything from top to bottom, and still find reason to hesitate from one step to the next—as for those, in short, for whom to work in the midst of uncertainty and apprehension is tantamount to failure, all I can say is that clearly we are not from the same planet.
—Michel Foucault, The Use of Pleasure (The History of Sexuality, vol. 2)
The phrase in my title is Margaret Masterman’s; the question mark is mine. Writing in 1962 for Freeing the Mind, a series in the Times Literary Supplement,1 she used the phrase to suggest computing’s potential to transform our conception of the human world just as in the seventeenth century the optical telescope set in motion a fundamental rethink of our relation to the physical one. The question mark denotes my own and others’ anxious interrogation of research in the digital humanities for signs that her vision, or something like it, is being realized or that demonstrable progress has been made. This interrogation is actually nothing new; it began in the professional literature during the 1960s and then became a sporadic feature of our discourse that persists to this day. I will return to present worries shortly. First allow me to rehearse a few of its early expressions. Then, following the clues these yield, I will turn to the debate that I am not at all sure we are having but which, if we did, could translate the neurotic search for justification into questions worth asking. The debate I think we should be having is, to provoke it with a question, What is this machine of ours for? Or, to make it personal, What are we for?
“Analogy is an identity of relationships” (Weil, 85), not of things. Thus the computer could now be to the mind, Masterman was saying, as the telescope was to seventeenth-century observers, enlarging “the whole range of what its possessors could see and do [so] that, in the end, it was a factor in changing their whole picture of the world” (“The Intellect’s New Eye,” 38). She suggests that by thus extending our perceptual scope and reach, computing does not simply bring formerly unknown things into view but also forces a crisis of understanding from which a new, more adequate cosmology arises. (I will return to this crisis later.) She was not alone in thinking that the computer would make a great difference to all fields of study, but she seems to have been one of the very few who argued for qualitative rather than quantitative change—different ideas rather than simply more evidence, obtained faster and more easily in greater abundance, to support ideas we already have in ways we already understand. Masterman was a linguist and philosopher; pioneer in computational linguistics; one-time student of Ludwig Wittgenstein; playwright and novelist; founder and director of the Cambridge Language Research Unit; adventurous and imaginative experimenter with computing, for example in composing haikus and arguing for the significance of such work against sometimes ferocious opposition; and part of a community of people genuinely, intelligently excited about the possibilities, however implausible, that the computer was then opening up before hype muddied the waters.2
Masterman begins her contribution to Freeing the Mind by distancing herself from her predecessors’ evident notion that the digital computer is “a purely menial tool”: “in fact … a kind of intellectual spade. This, it has been shown, can indeed assist a human scholar … by performing for him a series of irksome repetitive tasks … that the scholar, unaided, just cannot get through…. They take too long, they are backbreaking, they are eye-wearing, they strain too far human capacity for maintaining accuracy: in fact, they are both physically and intellectually crushing” (38). She had (can we have?) quite other ideas. Nevertheless the complaint pointed to a very real problem—that is, very real drudgery that at various times the demands of maritime navigation, the bureaucratic state, warfare, and scientific research inflicted on those who were professionally adept at calculation. Thus Gottfried Wilhelm Leibniz complained about enslavement to “dull but simple tasks” in the seventeenth century, Charles Babbage in the nineteenth, and Herman Goldstine in the twentieth (Goldstine, 8–12; Pratt, 20–44). All three responded by devising computational machinery. We certainly cannot and should not deny the crippling effects of the mathematical drudgery about which they all complained. But, Masterman insisted, these spadework uses, however welcome the time and effort they liberate, “provoke no new theoretic vision” (“The Intellect’s New Eye,” 38). Relief of others’ drudgery is a noble undertaking, but to slip from laudable service of that practical need to the notion that the computer is for drudgery is a profound error. It is an error that became an occupational hazard among early practitioners of humanities computing.
In 1978, literary scholar Susan Wittig paused to take stock of accomplishments in computing for her field. Quoting Masterman via an article promoting content analysis for literary study (Ellis and Favat), Wittig argued that Masterman’s call for more than spadework had come to naught. Although the computer “has added immeasurably to the ability of literary analysts to perform better and more efficiently the same tasks that they have performed for many years” (her emphasis), Wittig wrote, it has not “enlarged our range of vision or radically changed for us the shape of the universe of esthetic discourse” (211). The problem she identified was not the machinery; as Thomas Rommel has pointed out, the basic technical requirements for making a real difference had been met at least a decade before Wittig wrote (93). The problem, rather, was the “limited conceptual framework” of the then dominant but ageing literary-critical theory, New Criticism, which along with structuralist-formalist grammar held, “first, the notion that the text is a linear entity; second, the idea that the text is a one-time, completed work, firmly confined to its graphic representation, the printed page; and third, the belief that the text is autonomously independent of any other entity, that it is meaningful in and of itself” (Wittig, 211–12). The force of these theoretical assumptions was to foreshorten the horizon of possibilities to what computers could then most easily do.
A dozen years earlier, literary scholar Louis Milic, also noting the great assistance provided to the old ways, had bemoaned the failing that Masterman indicated and that, we might say, lies behind the problem Wittig complained of: “Satisfaction with such limited objectives denotes a real shortage of imagination among us. We are still not thinking of the computer as anything but a myriad of clerks or assistants in one convenient console. Most of the results … could have been accomplished with the available means of half a century ago. We do not yet understand the true nature of the computer. And we have not yet begun to think in ways appropriate to the nature of this machine” (4). Fourteen years later, the situation had still not changed much. Summing up his experience and observations in research that had begun three decades earlier, Father Roberto Busa wrote with evident impatience (evincing the prevalence of the error) that the computer was not primarily a labor-saving device to be used to free scholars from drudgery but a means to illumine ignorance by provoking us to reconsider what we think we know (Busa, “The Annals of Humanities Computing”). Four years before that, in “Why Can a Computer Do So Little?,” he had surveyed the “explosion” of activities in “processing non-numerical, literary information” during the previous quarter century but noted the “rather poor performance” of computing as then conceived (1). Like Wittig and much like Jerome McGann at the beginning of the twenty-first century, Busa argued that this disappointing performance pointed to our ignorance of the focal subject—in this case, language: “what is in our mouth at every moment, the mysterious world of our words” (“Why Can a Computer Do So Little?,” 3). Back to the theoretical drawing board (which was by then already filling up with very different ideas).
Masterman’s vision of computing—her “telescope of the mind”—was neither the only one nor the most ambitious. Best known is Herbert Simon’s and Allen Newell’s in 1958, phrased as a mixture of exuberant claims and startling predictions of what computers would, they said, be capable of doing within the following decade (Simon and Newell, “Heuristic Problem Solving”; cf. Simon and Newell, “Reply: Heuristic Problem Solving”). Simon gave the gist of these in a lecture in November of the previous year, preceding and following them with these confident statements, as they appear in his lecture notes:
IV. As of A.D. 1957 (even 1956) the essential steps have been taken to understand and simulate human judgmental heuristic activity. […] Put it bluntly (hard now to shock)—Machines think! Learn! Create!
V. What are the implications of this3
In Alchemy and Artificial Intelligence (1965), the philosopher Hubert Dreyfus famously took Simon and Newell to task for their pronouncements. But whatever our view of either, it is clear that by the mid-1960s signs of trouble for early visions were beginning to surface. The next year the Automatic Language Processing Advisory Committee of the U.S. National Research Council published Language and Machines: Computers in Translation and Linguistics (1966), a.k.a. the “black book” on machine translation, which effectively ended lavish funding for machine translation research (Wilks, Grammar, Meaning and the Machine Analysis of Language, 3–4). At the same time, however, the committee (much like Busa) recommended that efforts be redirected to research in the new field of computational linguistics, which “should not be judged by any immediate or foreseeable contribution to practical translation” (ALPAC, 34). Machine translation was, they said, a research question, not a practical goal.
The like did not happen in the humanities, despite efforts such as John B. Smith’s, for example, in “Computer Criticism” (1978, the year Wittig measured current achievements against Masterman’s vision). More than ten years later Rosanne Potter, in her preface to a collection of papers that included a reprint of Smith’s “Computer Criticism,” wrote laconically that literary computing had “not been rejected, but rather neglected” by the profession (Literary Computing and Literary Criticism, xvi). Two years later, in her bibliographic survey of the first twenty-four years of Computers and the Humanities, she identified nine essays that, she wrote, “have attempted to reflect on what we are doing and why, where we are going and whether we want to go there” (“Statistical Analysis of Literature,” 402). All of them, she noted, “warn against the same danger, seduction away from what we want to do by what the computer can do, call for the same remedy, more theory to guide empirical studies, and end with perorations about moving from the easy (data gathering) to the more creative (building new, more complex conceptual models)” (“Statistical Analysis of Literature,” 402–3). She concluded that this was “as much self-reflection as the field was capable [of]” (“Statistical Analysis of Literature,” 403). And now?
In August of that year the World Wide Web was released to the public; and, as many have noted, everything changed for computing in the humanities, though slowly at first. Also that year, Mark Olsen, presiding over the development of tools for one of the early large corpora, the Trésor de la Langue Française, at the American and French Research on the Treasury of the French Language (ARTFL) project, shocked and even outraged many of those most closely involved with the field by arguing in a Modern Language Association (MLA) paper for what Franco Moretti has more recently called “distant reading.” A special issue of Computers and the Humanities, centered on a revised version of that paper, was published two years later (27.5–6). In it, Olsen sounded the familiar sentence: “Computer-aided literature studies have failed to have a significant impact on the field as a whole” (“Signs, Symbols and Discourses,” 309). Again, as Yaacov Choueka had put it in somewhat different terms in 1988, “The tools are here, what about results?”4
So given the catalog of failings and disappointments that emerges from the complaints of practitioners, I ask the same question that architectural designer John Hamilton Frazer recently asked of once adventurous British computer art: “What went wrong?” (Brown et al., 50). This is not an idle question for the digital humanities, especially given its strong tendency to define itself as serving client disciplines, which tend to initiate collaborations, set the agenda for the research, and take academic credit for the results. As the popular metaphor of “text-mining,” the focus on large infrastructural projects, and the preoccupation with standards suggest, anticipation of service to be rendered moves the field toward an industrial model, in which curiosity-motivated research is subordinated to large-scale production, the better to facilitate research that happens elsewhere by other means. Big Science is cited as a precedent without anyone asking about the historically documented and prominently attested consequences for the affected sciences. But to answer this historical question properly for the disciplines most affected—those for which interpretation of cultural artefacts is the central activity—would require more than any of the surveys of the last three or more decades. I am convinced, but cannot yet demonstrate, that an adequate historical account could be written and that a genuine history of the digital humanities in its first half century would greatly help us turn pitiful laments and dull facts into the stimulating questions we should be asking now. To write such an account, however, an historian would have to locate practitioners’ minority concerns within the broad cultural landscape of the time and then describe the complex pattern of confluence and divergence among numerous interrelated developments.5 These practitioners were not working in a vacuum; it is trivial to demonstrate that they were well aware of what was going on elsewhere. Why, then, did they react (or not) as they did?
My intention here is much more modest. I want to talk about what we can do meanwhile, reflectively, to address our own predicaments beyond simply recognizing them.
A start may be made with the manner in which we now express our worries. No doubt in response to the demands for accountability from funding agencies, we have in recent years picked up the trendy phrase “evidence of value,” thus asking how we might prove that money has been well spent.6 We have, that is, shifted from the older argument for justification based on acceptance by our mainstream peers to a new one. What can we learn from it?
Roughly speaking, the phrase “evidence of value” has migrated from legal disputes over property and the like to modern debates, for example, over the worth of public health care schemes (where it has become a buzzword and branded label). The question of value the phrase raises is a very old and persistent one that begins formally with ethics in the ancient world and continues today in philosophical arguments about whether affective states, such as feeling good or being excited about something, have anything to do with the value of that thing or whether a focus on evidence proves a dangerous trap. The eminently practical question of whether effort should continue to be spent in a particular way is sensible enough. There is nothing whatever wrong with it in the context of the purest, most wicked or curiosity-motivated research, for which you might say its constant presence is a necessary (though not sufficient) condition. But what do we accept as evidence for the worth or worthlessness of the effort, and who decides?
If funding agencies ask whether research is worthwhile and judge the answer, then the effort is measured in funds spent, and evidence is defined as the “impact” of the research, in turn measured by citations to published work. For example, the rapporteur’s report for a recent event at Cambridge, “Evidence of Value: ICT in the Arts and Humanities,” begins thus: “With large sums of public money being channelled into this area, how is the ‘value’ of this investment assessed, what exactly are we assessing and for whom?”7 The case was made for qualitative as much as quantitative evidence, but what qualitative evidence might be, other than claims supported by anecdote, is not clear. We can imagine a proper social scientific study of claimants’ claims—how, for example, computing has changed their whole way of thinking—but would the results, however numerically expressed, be persuasive? Is any measure of “impact” critically persuasive for the humanities? To push the matter deeper, or further, are we not being naive to think that measurement simply establishes how things are in the world? Thomas Kuhn put paid to that notion for physics quite a long time ago (1961, the year before Masterman’s visionary analogy).
In other words, it begins to look as if the old philosophical argument, made by the consequentialists, carries the day: a preoccupation with evidence is mistaken; what matters, they say, are the consequences. We should ask, then, not “Where is the evidence of value?” but rather: Is computing fruitful for the humanities? What kinds of computing have been especially fruitful? In areas where it has not been, what’s the problem? How can we fix it?
There is, of course, the practical concern with how to continue the research that we do (I don’t ask whether) in the face of demands for evidence of value that often simply cannot be supplied without perverting it. If funding is contingent on providing this evidence, then the question becomes, what can we do without funding? If funding is cut anyhow, as it has been for the humanities in the UK, then only the possibility of compromise is removed. What kinds of work can be done under the circumstances in which we find ourselves? Here is a debate we should be having, but it is not the debate I regard as most insistent, since what we can do on our own (which is really what we’re left with primarily) is a matter for individual scholars to decide and find the cleverness to implement.
What lies beyond the let’s-get-on-with-it scenario (where “it” has become one’s own research made procedurally modest but as intellectually adventurous as can be) is the longer-term question of how to improve the social circumstances of humanistic research. The question was debated briefly on the Humanist listserv from late October to early December 2010.8 Here I return to a remark I reported there from the current UK science minister, David Willetts. Justifying the protected funding for the sciences, he noted that “the scientific community has assembled very powerful evidence such as in that Royal Society report, The Scientific Century, about what the benefits are for scientific research. Now you can argue that it’s all worthwhile in its own rights, but the fact that it clearly contributes to the performance of the economy and the well-being of citizens—that’s really strong evidence, and we deployed it.”9 Arguing for economic benefits is a long reach for the humanities, but “the well-being of citizens” is not. What can the digital humanities do for the humanities as a whole that helps these disciplines improve the well-being of us all?
And so I come to the debate I think we should be having.
We who have been working in the field know that the digital humanities can provide better resources for scholarship and better access to them. We know that in the process of designing and constructing these resources our collaborators often undergo significant growth in their understanding of digital tools and methods and that this sometimes, perhaps even in a significant majority of cases, fosters insight into the originating scholarly questions. Sometimes secular metanoia is not too strong a term to describe the experience. All this has for decades been the experience of those who guided collaborating scholars or were guided as scholars themselves through a gradual questioning of the original provocation to research, seeing it change as the struggle to render it computationally tractable progressed. In a sense, there is nothing new here to anyone who has ever attempted to get to the bottom of anything complex and ended up with, as Busa said, a mystery, something tacit, something that escapes the net. So not only is evidence of value to our collaborating colleagues thick on the ground, but it is also to be expected as a normal part of scholarship. But what about the argument? By definition evidence is information that backs up an argument. In other words, no argument, no evidence, only raw, uncommitted information.
The problem we have and must debate, then, is the argument or set of arguments that will convert decades of experience into (I believe, from a quarter century of it) incontrovertible evidence of intellectual value. We’ve seen and, I hope, are by now convinced that computing in the humanities is not all for drudgery, even as it becomes more and more difficult, through ever-multiplying layers of software powered by ever-better hardware, to see what goes on behind the friendly service our devices provide. Some computing is designed to relieve us of drudgery. But to go back to Turing’s scheme for indefinitely many forms of computing, whose number is limited only by the human imagination: what is computing in and of the humanities for? Are we for drudgery? If not, with regard to the humanities, what are we for?
NOTES
1. Freeing the Mind was first published as a series of essays in the Times Literary Supplement from March 23 to May 4, 1962, then republished as a slim volume together with selected letters to the editor later that year. It provides an excellent snapshot of nontechnical reflection on computing, as was characteristic of the Times Literary Supplement during the 1960s and 1970s.
2. As Yorick Wilks says in his biographical tribute to her, Masterman was “ahead of her time by some twenty years … never able to lay adequate claim to [ideas now in the common stock of artificial intelligence and machine translation] because they were unacceptable when she published them,” making efforts “to tackle fundamental problems with computers … that had the capacity of a modern digital wristwatch,” producing and inspiring numerous publications that today seem “curiously modern” (Wilks, Language, Cohesion and Form, 1, 4). For her work with haiku, see Masterman and McKinnon Wood, and Masterman, “Computerized Haiku”; for vitriolic opposition to it, see Leavis. For an idea of the diverse company with which her work associated her, see the table of contents in Reichardt’s Cybernetics, Art and Ideas. Art critic Jasia Reichardt was responsible for the landmark Cybernetic Serendipity exhibition in London, August to October 1968 (Reichardt, Cybernetic Serendipity). Among the exhibitors was “mechanic philosopher” and inventor of visionary “maverick machines” Gordon Pask, who was a long-time friend and research partner of Robin McKinnon Wood, Masterman’s colleague at Cambridge; for more on Pask, see Bird and Di Paolo.
3. An image of the original manuscript upon which this transcription was based may be found at http://www.mccarty.org.uk/essays/McCarty,%20Telescope.pdf.
4. At the 1988 Association for Literary and Linguistic Computing Conference in Jerusalem, Choueka assigned me to the panel “Literary and Linguistic Computing: The Tools Are Here, What about Results?” The title was his. See http://sigir.org/sigirlist/issues/1988/88-4-28.
5. My historiography owes a great deal to the late Michael S. Mahoney; see the collection of his papers and the editor Thomas Haigh’s discussion in Mahoney; cf. McCarty.
6. For “evidence of value” in the digital humanities, see subsequent sections in this chapter and www.crassh.cam.ac.uk/events/196/. The AHRC ICT Methods Network, under which “evidence of value” was the subject of an expert seminar, has concluded its work. Otherwise, a search of the web will turn up thousands of examples of its use in other contexts.
7. Wilson; see also www.crassh.cam.ac.uk/events/196/, and Hughes.
8. See Humanist 24.427–8, 431, 436 (http://www.digitalhumanities.org/humanist/, with reference to a British Academy lecture by Martha Nussbaum), 440, 445, 448, 453, 455, 464, 469, 479, 481, 483, 485, 504, 511, 515, 527, 541. As is typical with online discussions, a particular thread remains distinct for a time then begins to unravel into related matters. This one remained coherent for quite some time.
9. “The Material World,” BBC Radio 4, October 21, 2010, my transcription. For the Royal Society report, see royalsociety.org/the-scientific-century/.
BIBLIOGRAPHY
ALPAC. Language and Machines: Computers in Translation and Linguistics. Report by the Automatic Language Processing Advisory Committee, National Academy of Sciences. Publication 1416. Washington, D.C.: National Academy of Sciences, 1966.
Bird, Jon, and Ezequiel Di Paolo. “Gordon Pask and His Maverick Machines.” In The Mechanical Mind in History, edited by Philip Husbands, Owen Holland, and Michael Wheeler, 185–211. Cambridge, Mass.: Bradford Books, 2008.
Brown, Paul, Charlie Gere, Nicholas Lambert, and Catherine Mason, eds. White Heat Cold Logic: British Computer Art 1960–1980. Cambridge, Mass.: MIT Press, 2008.
Busa, Roberto. “The Annals of Humanities Computing: The Index Thomisticus.” Computers and the Humanities 14 (1980): 83–90.
———. “Guest Editorial: Why Can a Computer Do So Little?” Bulletin of the Association for Literary and Linguistic Computing 4, no. 1 (1976): 1–3.
Dreyfus, Hubert L. Alchemy and Artificial Intelligence. Rand Corporation Papers, P-3244. Santa Monica, Calif.: RAND Corporation, 1965.
Ellis, Allan B., and F. André Favat. “From Computer to Criticism: An Application of Automatic Content Analysis to the Study of Literature.” In The General Inquirer: A Computer Approach to Content Analysis, edited by Philip J. Stone, Dexter C. Dunphy, Marshall S. Smith, and Daniel M. Ogilvie. Cambridge, Mass.: MIT Press, 1966. Reprint, in Science in Literature: New Lenses for Criticism, edited by Edward M. Jennings, 125–37. Garden City, N.Y.: Doubleday, 1970.
Foucault, Michel. The Use of Pleasure. The History of Sexuality 2. Translated by Robert Hurley. London: Penguin, 1992/1984.
Goldstine, Herman H. The Computer from Pascal to von Neumann. Princeton, N.J.: Princeton University Press, 1972.
Hughes, Lorna, ed. The AHRC ICT Methods Network. London: Centre for Computing in the Humanities, King’s College London, 2008.
Kuhn, Thomas S. “The Function of Measurement in Modern Physical Science.” Isis 52, no. 2 (1961): 161–93.
Leavis, F. R. “‘Literarism’ versus ‘Scientism’: The Misconception and the Menace.” Times Literary Supplement (April 23, 1970): 441–45. Reprint, in Nor Shall My Sword: Discourses on Pluralism, Compassion and Social Hope, 137–60. London: Chatto & Windus, 1972.
Mahoney, Michael S. Histories of Computing, edited by Thomas Haigh. Cambridge, Mass.: Harvard University Press, 2011.
Masterman, Margaret. “Computerized Haiku.” In Cybernetics, Art and Ideas, edited by Jasia Reichardt, 175–83. London: Studio Vista, 1971.
———. “The Intellect’s New Eye.” Times Literary Supplement 284 (April 17, 1962). Reprint, in Freeing the Mind: Articles and Letters from The Times Literary Supplement during March–June 1962, 38–44. London: Times, 1962.
———. Language, Cohesion and Form. Edited by Yorick Wilks. Cambridge, UK: Cambridge University Press, 2005.
———. “The Use of Computers to Make Semantic Toy Models of Language.” Times Literary Supplement (August 6, 1964): 690–91.
Masterman, Margaret, and Robin McKinnon Wood. “The Poet and the Computer.” Times Literary Supplement (June 18, 1970): 667–68.
McCarty, Willard. “Foreword.” In Language Technology for Cultural Heritage: Selected Papers from the LaTeCH Workshop Series, edited by Caroline Sporleder, Antal van den Bosch, and Kalliopi A. Zervanou, vi–xiv. Lecture Notes in Artificial Intelligence. Berlin: Springer Verlag, 2011.
Milic, Louis. “The Next Step.” Computers and the Humanities 1, no. 1 (1966): 3–6.
Moretti, Franco. “Conjectures on World Literature.” New Left Review 1 (2000): 54–68.
Olsen, Mark. “Signs, Symbols and Discourses: A New Direction for Computer-Aided Literature Studies.” Computers and the Humanities 27 (1993): 309–14.
———. “What Can and Cannot Be Done with Electronic Text in Historical and Literary Research.” Paper for the Modern Language Association of America Annual Meeting, San Francisco, December 1991.
Potter, Rosanne, ed. Literary Computing and Literary Criticism: Theoretical and Practical Essays on Theme and Rhetoric. Philadelphia, Pa.: University of Pennsylvania Press, 1989.
———. “Statistical Analysis of Literature: A Retrospective on Computers and the Humanities, 1966–1990.” Computers and the Humanities 25 (1991): 401–29.
Pratt, Vernon. Thinking Machines: The Evolution of Artificial Intelligence. Oxford: Basil Blackwell, 1987.
Reichardt, Jasia, ed. Cybernetic Serendipity. New York: Frederick A. Praeger, 1969.
———. Cybernetics, Art and Ideas. London: Studio Vista, 1971.
Rommel, Thomas. “Literary Studies.” In A Companion to Digital Humanities, edited by Susan Schreibman, Ray Siemens, and John Unsworth, 88–96. Oxford: Blackwell, 2004.
Simon, Herbert A., and Allen Newell. “Heuristic Problem Solving: The Next Advance in Operations Research.” Operations Research 6, no. 1 (1958): 1–10.
———. “Reply: Heuristic Problem Solving.” Operations Research 6, no. 3 (1958): 449–50.
Smith, John B. “Computer Criticism.” Style 12 (1978): 326–56. Reprint, in Potter (1989): 13–44.
Weil, Simone. Lectures on Philosophy. Translated by Hugh Price. Cambridge, UK: Cambridge University Press, 1978/1959.
Wilks, Yorick Alexander. “Editor’s Introduction.” In Margaret Masterman, Language, Cohesion and Form, edited by Yorick Wilks, 1–17. Cambridge, UK: Cambridge University Press, 2005.
———. Grammar, Meaning, and the Machine Analysis of Language. London: Routledge & Kegan Paul, 1972.
Wilson, Lee. “Evidence of Value: ICT in the Arts and Humanities. Rapporteur’s Report.” http://www.ahrcict.rdg.ac.uk/news/evidence%20of%20value%20v2.pdf.
Wittig, Susan. “The Computer and the Concept of Text.” Computers and the Humanities 11 (1978): 211–15.