Computational Humanities: Two Volumes


Chapter 9

Two Volumes

The Lessons of Time on the Cross

Benjamin M. Schmidt

The Backwaters of American History

This chapter describes two parallel approaches to doing something that could be called computational history that have evolved in the last fifty years in the United States. These approaches deploy different visions of bringing computation to the human past and suggest different paths forward for computational humanities more broadly. The first, which usually calls itself “digital history” proper (a term often distinct from “digital humanities”), I argue, has developed a deeply nonscientific set of uses for computation—practices centered in public history, in exploratory analysis, and in new media. At the same time, there is a social-scientific project of history in the United States that makes use of scientific computation but happens with almost no institutional connection to the “digital humanities” or even to history departments themselves.

Many debates about digital humanities proceed from the premise that to make the humanities more computational means to make them more scientific. In Western European humanities departments, in the old tradition of "humanities computing" and in the recent push to rebrand a subset of the field as "cultural analytics," questions of scientism, calculation, and certainty play a major role.1 In English literature, the defining feature of the contemporary digital humanities is how quantitative techniques are "blurring the traditionally sharp boundary that separated us from the quantitative social sciences" (English and Underwood, "Shifting Scales"). Advocates of "consilience" (Gottschall, "Afterword," 217) eagerly cite literary digital humanists as a step toward the (re)unification of the sciences and humanities; opponents of computational humanities criticize it for imperfectly mimicking the conventions of scientific statistical inference without generating any new meanings of its own (Da, "Computational Case").

The split in American digital history shows a way to make computation focus not on making the humanities more scientific but on making them more creative. This is a sorely needed example.

The reasons for this lie in the particular course of interactions between history and the social sciences from about 1965 to 1985. At its heart—as with so much else distinct about the United States—is slavery. An outsize place in the whole field has been occupied by a single book, Robert Fogel and Stanley Engerman's 1974 Time on the Cross, which became the object of both enormous acclaim and criticism in the mid-1970s. Digital historians and historical social scientists have each evolved their own, strikingly different stories of that book's successes and failures. It is important to properly understand that controversy—and the closely related, if broader, question of how American history would grapple with its country's central trauma—to think about what it means to apply computational methods to the most central areas of social life.

Some readers may be inclined to take this story simply as a curio: an apologia for how the energy of American historians was diverted into an odd channel. I hope they won't. Although I intend to tell a myopically American story here (up to and including consistently using "American" as an adjective to describe only the United States of America), the work done by Americans in digital history gives a different charge to the question of computational humanities as a whole. While I give a descriptive account of how American digital history came, for the most part, not to be social scientific, I think that the channels into which it was rerouted are well worth our time. My argument is, in short, that computational humanities need not be quantitative humanities and certainly need not be scientific humanities.

Accounting for Slavery in History

Digital history of some form has been practiced for nearly a hundred years. While digital humanists still often treat Roberto Busa’s work on Thomas Aquinas as the earliest digital research in the humanities, the use of digital storage and calculating machines for historical work goes back even further—at least to 1933, when William Murray published a comparative study of farm mortgages in Story County, Iowa, from 1854 to 1931 using 25,993 punched cards (Figure 9.1) (Murray, “Economic Analysis”). Writing in 1970, Robert P. Swierenga (“Clio and Computers”) pointed out that leading historians including Bernard Bailyn and Merle Curti performed computational studies with punch cards in the 1950s (in both cases, under the guidance of wives who “had trained in disciplines more methodologically rigorous than history”).2

Punch card with handwritten, invented data from Unknown Insurance Co. for John Doe, with black circles indicating holes punched in the card.

Figure 9.1. An abstracted punch card from William Murray’s 1933 study of farm mortgages.

Through the 1950s and 1960s, history was often classified as a social science in American universities. While this left scholars like Curti and Bailyn free to explore emerging technologies, the idea that history might become a science of measurement drummed up organized opposition as well. In 1962, Carl Bridenbaugh, the president of the American Historical Association, notoriously described quantification as a "bitch-goddess," heretical to historical practice. The slur—adapted from William James's reference to "the bitch-goddess 'success'"—is so breathtaking in how concisely it deploys tropes of sexism and orientalism together that it has somewhat overshadowed the core of Bridenbaugh's message: that the social sciences are dehumanizing because they lose sight of individual experience.

The finest historians will not be those who succumb to the dehumanizing methods of social sciences, whatever their uses and values, which I hasten to acknowledge. Nor will the historian worship at the shrine of that Bitch-goddess, QUANTIFICATION. History offers radically different values and methods. It concerns itself with the “mutable, rank-scented many,” but it fails if it does not show them as individuals whenever it can. (Bridenbaugh, “Presidential Address”)

These struggles came to a head in the 1970s, and the central area of contestation was around the field of cliometrics. While cliometrics was never really part of the American historical profession proper, it precipitated conflicts that roiled the profession. The term—apparently first used in economics seminars at Purdue in late 1960 (Goldin, "Cliometrics and the Nobel")—initially indicated a project of bringing econometric methods to questions in economic history. In his 1972 survey of the field, Joel Silbey reported that "more and more" historians were joining the ranks of the cliometricians while acknowledging that it was dominated by economists; quantitative work on political history, likewise, was led by political science departments, leaving social history the major area of research in history (Silbey, "Clio and Computers").

The outstanding figure in the field was Robert Fogel, who had been involved in the cliometric project from the beginning. The publication of Time on the Cross (hereafter TotC) marked its apotheosis, a moment when the historical discipline began to distance itself more decisively from interdisciplinary dalliances. By laying out a grand economic history of slavery that purported to measure its effects for the first time, TotC sought to bring one of the central areas of American history into the econometric fold. While the book was met with initial acclaim, concern about its methods and its approach quickly boiled over. Thomas Haskell's devastating 1975 New York Review of Books summation outlined the book's shortcomings; the labor historian Herbert Gutman managed to turn around a book-length critique in under a year (Gutman, Slavery and the Numbers Game).

Insofar as digital history tells its own story, it has explicitly distanced itself from TotC and the cliometric project generally. One guide to computing for historians, written in the 1990s, warned that the authors were “not champions of ‘cliometrics,’ ‘quantification,’ the ‘new’ history, ‘scientific history,’ or even what is called ‘social science history’” (Mawdsley, Computing for Historians, referenced in Thomas, “Computing and the Historical Imagination”).

American historians often treat cliometrics as a failure—in Edward Ayers's words, "a brief love affair with quantification" (Parry, "Quantitative History Makes a Comeback")—that presents something of a dead end to be avoided. Historians who learned their craft in the 1970s and 1980s have raised to me again and again a single data point that emerges in almost every account of TotC: Fogel and Engerman's calculation that enslaved people were whipped, on average, 0.7 times a year, and the attendant implication that 0.7 is not especially high. Herbert Gutman cast the same numbers in a different form: the plantation was traumatized by a public display of torture more than once a week.
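Gutman's point is arithmetic as much as rhetoric: the same per-person average, aggregated across a plantation, describes routine public violence. A minimal sketch of the recasting (the 0.7 figure is from the chapter; the plantation size here is a hypothetical illustration, not a number from either book):

```python
# Recasting a per-person annual rate as a plantation-level weekly rate.
# The 0.7 whippings/person/year figure is Fogel and Engerman's; the
# plantation size of 100 is hypothetical, chosen only for illustration.

per_person_per_year = 0.7
plantation_size = 100

whippings_per_year = per_person_per_year * plantation_size  # 70.0
whippings_per_week = whippings_per_year / 52                # about 1.35

print(f"About {whippings_per_week:.2f} public whippings per week")
```

The same statistic that sounds negligible framed per individual ("less than once a year") describes, framed per community, a spectacle of torture recurring more than weekly.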

The reception of TotC has focused on its myopia. Jessica Marie Johnson published an essay in 2018, "Markup Bodies," that gives a good account of the position that I find most important for students in the humanities to internalize, because it articulates two of the most important elements of the language historians have developed for talking about statistics.

Statistics on their own, enticing in their seeming neutrality, failed to address or unpack black life hidden behind the archetypes, caricatures, and nameless numbered registers of human property slave owners had left behind. And cliometricians failed to remove emotion from the discussion. Data without an accompanying humanistic analysis—an exploration of the world of the enslaved from their own perspective—served to further obscure the social and political realities of black diasporic life under slavery. (Johnson, “Markup Bodies”)

The argument is not just that data fails to capture experience—as Bridenbaugh argued in 1962—but that the existence of data is itself part of the record of violence. As Johnson puts it: "Data is the evidence of terror, and the idea of data as fundamental and objective information, as Fogel and Engerman found, obscures rather than reveals the scene of the crime."

The lesson that I, myself, was taught in graduate school in the late 2000s—before I had ever heard of "digital humanities"—was similar. An attention to statistical reductionism rather than human experience not only flew in the face of historical practice but also served to ratify past crimes. To act statistically on individual subjects violates a kind of taboo. To this day, I am reluctant to make data visualizations in which the points are individual people. In my own work—and in my advice to any other digital historians—I generally think that human beings are among the least promising topics of statistical analysis; I choose to visualize ships, books, and land but try to avoid visualizing the person whenever possible. After all, people are the things we need data to understand the least. In a graduate class with the political historian Sean Wilentz, I remember vividly being walked through a slate of regression analyses of ethnic voting patterns in pre–Civil War counties in the United States and feeling that I was learning something rather important about voting patterns, then being informed that none of these methods—not one!—could tell us a single thing about why a single person voted the way they did.

In this telling, cliometrics left quantification as a still-radioactive site in the discipline, smoking from the wars of the 1970s, where you tread at your peril. Digital history has turned again and again to slavery in the decades since—both in work like Johnson's and in the burgeoning field of Black digital humanities (Gallon, "Making a Case for the Black Digital Humanities"). But this work did not build on TotC in any important way. Even a rare exception like the Trans-Atlantic Slave Trade database—led by Engerman's student David Eltis—sat at the margins of the American profession, built largely by researchers in European centers and with much of its administrative apparatus in Hull, in the United Kingdom, until the mid-2000s.

The Split between Two Volumes

But cliometrics neither went extinct nor went into hibernation; instead, under fire, it retreated back into the social sciences whence it came. To see what cliometrics is, and how it differs from digital history, we need only look at the quite different reception of Fogel and Engerman's work in the historical social sciences.

I have occasionally taught TotC in graduate history classes, and two things uniformly surprise students. One is that the work, which most students see as catastrophically flawed, managed both to earn the Bancroft Prize—the top award in the American historical profession—and to play a major part in Fogel's Nobel Prize for Economics in 1993. The other is that students would order the book from Amazon—there are still many cheap copies of TotC out there—and occasionally would accidentally end up with a copy not of the narrative but of the second volume, which gives the methodological apparatus for the book. There are many problems with TotC, and many of them are more important than this one. But for me, the most interesting has to do with the thinking that made this a reasonable arrangement: How did the argument and the evidence come to be so heavily separated from each other?

These two problems are related. TotC ratified a split between the "humanities" history and social science history—especially economics—that mirrors the split of the book itself into two volumes. While historians saw the split as part of a sleight of hand—a "silk purse of scientific exactitude," as Thomas Haskell put it, separated from "the sow's ear" whence it came—economists were learning an entirely different set of lessons.

As the history of capitalism has turned to slavery in the past few years, particularly with controversies around books by Sven Beckert and Ed Baptist, we have seen economists look with bemusement at historians as the species that failed to learn the lessons of TotC. This is a set of debates largely parallel to digital history proper, but one crucial to understanding the future of digital history, writ small.

In a fascinating review essay, the economist Eric Hilt points to the existence of an extensive literature about slavery, worrying that historians “do not seem to have taken seriously the debates among economic historians that followed the publication of that book.”

More importantly, the lack of engagement with economic historians limited the analytical perspectives of each of these books. Most of them seem aware of Fogel and Engerman’s Time on the Cross (1974), and some repeat its arguments about the profitability of slavery or the efficiency of slave plantations. But they do not seem to have taken seriously the debates among economic historians that followed the publication of that book. Some . . . challenged Fogel and Engerman [but] analyzed slavery in new ways. (Hilt, “Economic History”)

Hilt inclines toward an account that many historians may find surprising: that historians are even recapitulating the core ideological mistake of Fogel and Engerman by cloaking the work of enslaved people in a bizarrely emancipatory rhetoric, as if developing new techniques for picking cotton quickly were an accomplishment anyone should be eager to claim.

While Hilt writes respectfully about historical work, the economist Alan Olmstead has offered a more unforgiving take. The problem is not just that historians have not read the economics literature, but that they could not understand it even if they did:

In the past, historians and economists (sometimes working as a team) collectively advanced the understanding of slavery, southern development, and capitalism. There was a stimulating dialog. That intellectual exchange deteriorated in part because some economists produced increasingly technical work that was sometimes beyond the comprehension of many historians. Some historians were offended by some economists who overly flaunted their findings and methodologies. (Olmstead and Rhode, “Cotton, Slavery, and the New History of Capitalism”; italics added)

Still, in the blame that Olmstead metes out, historians' obtuseness clearly bears the brunt; but past the insults lies a genuine wish to have a stimulating dialogue with historians in general (although not, perhaps, with any actually existing historians).

Should We Just Hope for Consilience?

One obvious takeaway is that we simply need to get these fields talking to each other again and establish interdisciplinary synergy. As I will explain shortly, I don't fully agree, but it is a claim worth considering.

The economist Trevon Logan gave a good synopsis of this account in a series of tweets in 2018 describing how he teaches TotC and the economics of slavery today. His story begins with the 1975 criticism of TotC but moves in a different direction. While historians were avoiding the radioactive spot, economists donned their hazmat suits and wandered in. While historians view TotC as the end of a process, economists see Fogel's later, better work, including Without Consent or Contract, as the real material. As Logan puts it: "F&E [Fogel and Engerman] regroup, and they cut their losses, and they go back to the beginning. They go back to the productivity calculation. . . . F&E win on prices, output, production constraints, insurance, crop mix, etc. It's a battle of the force and in the end their original calculation is established and likely accepted by the majority of the field."3 Efficiency is vindicated—but "with a great cost": economic historians are excessively concerned with efficiency metrics, have an unsophisticated understanding of race in America, and thus fail to reckon with slavery's actual meaning.

Logan's account of the present-day state of economics makes a strong case for the need for economists to attend to issues of race, power, and history more seriously than they have, in a way that draws on interdisciplinary connections. This situation might seem to leave major opportunities for interdisciplinary reconciliation. The case of literary studies in the Americas provides a further example of a field that has managed a heavily quantitative turn in the last few years. Work in digital literary studies has taken a much more aggressive turn toward measurement; flagship journals regularly publish work using topic modeling or word embeddings, and new journals like Cultural Analytics lean most heavily on an English-department core rather than a historical one.

We have seen a firmer set of turns lately toward an insistence on argumentation as the centerpiece of digital history. Cameron Blevins argued in an earlier volume of Debates in the Digital Humanities that digital history suffered from a "perpetual future tense" in which practitioners wrote about what they could do rather than advancing their scholarly fields. Since then, George Mason University, one of the leaders of the old form of digital humanities, has launched a new journal devoted to argumentation in history. It might seem reasonable to position this new manifestation of digital history as the place where cliometric flaunting could be tempered by disciplinary knowledge. Historian Lincoln Mullen at George Mason has—as in an American Historical Review article with Kellen Funk on legal text reuse—begun to bring some cutting-edge computational methods to mainstream historians (Funk and Mullen, "The Spine of American Law").

In some areas—like the history of capitalism—a greater degree of numerical facility is clearly necessary. But the existence of a large field pushing back against historians attempting digital work shows that the success of social science methods in English and literature is actually the exception that proves the rule. Even back in the 2000s, in the era of humanities computing, it was already the case that English had comparatively more number-crunching and that history had many more public-facing websites in its digital portfolio. That trend has continued. And the reason for that, in part, is that history offers considerably less low-hanging fruit for data-driven argumentation than literary study does. In literary history, there are huge arrays of digitized data and an array of fairly amateurish informaticians, physicists, and computer scientists who desperately need advice to deal responsibly with literary matters. In the United States, computational methods in the hands of historians hold little promise because they are already being used at a much higher level among social scientists than any historian can possibly pick up in a graduate program.

In the history of politics and the economy, the work of creating and analyzing cultural data has been continually underway for half a century. For economic historians, in short, the legacy of cliometrics is not one of failure. As Logan and Olmstead each suggest in their way, it is, rather, one of success that has not involved historians. On questions of historical interest, the post-TotC split has given generations of economists, sociologists, and political scientists methodological training and substantive knowledge about the past in discourses increasingly remote from the conversations of historians. And because of the closed-off way that these fields have tended to handle their data, the resources they create exist as data crafted for purposes much closer to social science experimentation than to humanistic narration.

The best work in the cliometric tradition followed in the path of social history. For several years, I regularly paired a text alongside TotC on graduate syllabi: Steven Ruggles’s work on the changing American family structure. Students eventually persuaded me that it was too transparently an attempt to balance the “good” cliometrics against the bad. But while Ruggles remains a tremendously important historian, the work at the Minnesota Population Center that he leads has hewed closer and closer to the sociological mainstream than the historical one. If “computational history” really existed in America, it would need a conference; and the first challenge it would face would be justifying its existence in the face of the voluminous and sophisticated work presented each year at the Social Science History Association (SSHA), which has never bothered to brand itself as “digital humanities.” Before we try to reboot computational history, we should look and see what has happened in the shadow of this divide.

The Ways Disciplines Talk about Data

Take as an example two papers from opposite sides of the disciplinary divide: a work of social science history by economists and a data-driven economic history written for American historians in the field’s flagship journal.

The first is a working paper by three economists—Elliott Ash, Daniel Chen, and Suresh Naidu—from 2017. While the authors are economists, the actual contribution—summed up in a title that few historians would think debatable, "Ideas Have Consequences"—is about legal or intellectual history. It presents a powerful and discrete account of the transmission of ideas across social networks through textual analysis. The substance argues that privately funded Manne seminars in law and economics—which were attended by a substantial proportion of the federal judiciary—affected the language, decisions, and sentencing of the federal judges who attended them and thus, by implication, allowed large-value conservative donors to capture the federal judiciary. The effect seems robust to a variety of covariates, and even more interestingly, the authors claim to detect an effect whereby simply being randomly impaneled with a judge who attended one of these seminars makes that second judge harsher in their future sentencing decisions.

Reading this paper was exciting, but looking through the tools and tricks and sources also made me feel like someone in a science fiction movie encountering an artifact sent back from a few decades in the future. The extraordinary quality of data that economists can obtain is almost unimaginable to humanists. It is not just a million or so circuit court votes and 300,000 opinions but also the institutional capacity to file Freedom of Information Act (FOIA) requests to get the exact years of attendance for every judge who went to the Manne program and the disciplinary capacity to casually use relatively new methods like word embeddings without spending pages slowly, gently analogizing them to some “simpler” concept. Humanists wandering through algorithms seem to have to justify using an algorithm by first identifying which Borges short story—whether about the Map of the Empire, the analytical language of John Wilkins, or Pierre Menard and the Quijote—it most closely resembles.

On the other hand, consider a 2019 digital article published in the American Historical Review, an analysis of bank records for mid-nineteenth-century Irish immigrants in New York City by the historian Tyler Anbinder and colleagues. The authorship team is interdisciplinary enough that you could reasonably claim this piece (Anbinder, Ó Gráda, and Wegge, "Networks and Opportunities") represents cliometrics returned to the highest place in the historical profession. One of the authors, Cormac Ó Gráda, was even a fellow of the Cliometrics Society in 2016.

While the piece uses data, it in no way represents the type of data science that historical computation might seem to need. Consider some of the words that never appear in it: “model,” “statistic,” “regress,” “counterfactual,” “predict,” “coefficient.” Even the word “network,” which is in the title and conceptually at the heart of the piece, is not a formal term: the authors explicitly disavow any engagement whatsoever with technical definitions of network science or the tools of social network analysis. The only software mentioned is Microsoft Excel, not Python or R. Although the article is fundamentally about data, it has almost no statistics to speak of: the entire statistical apparatus is a single correlation coefficient.

But while the methodology is simple, the analysis and the supporting apparatus branch in many more directions. The core contribution is an elaborate and impressive apparatus for browsing the deep connections between census records and bank records created (at great expense) by the project.4 Anbinder and colleagues carefully link the individual lives of an immigrant community together across two continents and find new ways to make those stories legible that work natively on the internet without requiring archival research or statistical presentation. This foregrounding of data points and the human experience they represent is a key characteristic of modern digital history in the United States. The scholarly apparatus is, in its own way, as sophisticated as a more aggressive use of data and statistics would be; but that sophistication is directed toward readers for whom the most important validation is evidence of experience, not statistical tests.

Digital History Is Humanistic Reproducibility

These omissions are not shortcomings of a field that has not reckoned with data; they are, rather, simply the latest iteration in a long scholarly tradition that uses computers to reshape and present historical evidence while consciously avoiding anything that could be mistaken for cliometrics. The path that digital historians built in the 1990s and 2000s while avoiding the shadow of cliometrics was a far more interesting one; unlike the English-department digital humanities of the period, it had no motive to make history more "scientific" and instead found ways to make historical practice live on computers and—increasingly—online.

The most successful of these were efforts around digital public history, where historians found ways to bring materials into an online setting that was not amenable to books. Daniel Cohen and Roy Rosenzweig's influential 2006 book Digital History bore the words "a guide to gathering, preserving, and presenting the past on the web" as a subtitle. The book's vision of digital history highlighted the field as involving collecting, building an audience, and designing collections for an online audience, not engaging in statistical argument. Omeka, the public history content management system developed at George Mason University's Center for History and New Media, is surely the most important and irreplaceable project to come out of the American digital humanities. If the problem of TotC was the division into two books—flashy narrative in one, supporting apparatus in another—the more recent works in the disciplines have solved the problem in their own ways. Social science history has doubled down on methods, producing papers that are, to borrow a phrase from the economist Suresh Naidu, "a fortress around a fact." Historians, as much as anyone outside of digital journalists, have been thinking about audience, narrative, and publics. In doing this, they have found ways to foreground and expose sources alongside their work: rather than detailing a single argument, they have created richer ways of interacting with the historical record (Brennan, "Public First"; Leon, "Complexity and Collaboration").

One useful way to think about this split—and the problems addressed by the Americanist digital tradition—is in terms of reproducibility. The sciences have been plagued with crises of reproducibility and deluged with schemes for solving them. Literary and library scholars in the digital humanities (overmuch, to my mind) see one of their challenges as fully fixing those problems before they even emerge, through a tangle of IPython notebooks, online linked open data, and Docker configuration files. There are, of course, crises of historical reproducibility as well—TotC may have seen controversy, but it was allowed to keep its Bancroft Prize. A more traditional work of history, Michael A. Bellesiles's Arming America, had the 2001 Bancroft Prize revoked over concerns that many of the supporting documents cited might not actually exist.

But one reason that archival historical narratives work is that the historical narrative is itself an artifact of reproducible research; readers encounter some general arguments at the front, but it is only in the process of reading the whole book that they internalize its narrative flow. What we need to think more about is how to shape narratives around historical data that allow reproduction by actual historians, not econometricians. How do we make that persuasive flow work for a public that can read words, visualizations, and data but does not demand that all knowledge meet social-scientific standards of causal inference?

While American digital history has not fully answered these questions, work like Anbinder’s shows that it is reaching toward a constellation of solutions related to new forms of digital publication exploding across the internet. In digital journalism, work in the emergent genre of “scrollytelling” often tells complex stories with data by integrating argument with visualization.5 Such pieces are also becoming increasingly common in computer science, where publications like Google’s Distill elevate the “interactive explorable” to the level of a scholarly product. The conventions around this work in news media and the sciences differ; “scrollership” in the humanities (Schmidt) needs space to develop its own forms of writing and reading.

And this is a contribution that digital humanities projects can continue to make—where reproduction means facilitating the arrangement and exposure of multivalent primary sources, allowing readers to engage with evidence and change the assumptions of models. This is harder than just distributing models; it is about working with sources in the indefinitely reconfigurable ways that are now possible.

We see this happening already, even in the still-vibrant digital historiography of slavery itself. Ed Baptist is one of the participants in Freedom on the Move, the database of ads seeking escaped slaves that Vanessa Holden and Joshua D. Rothman explore in chapter 11 of this volume. There are multiple projects documenting lynchings (Brown, “Printing Hate”; Burnham, “Civil Rights and Restorative Justice”; Stevenson, “Lynching in America”; Franzosi, De Fazio, and Vicari, “Ways of Measuring Agency”). Flagship digital humanities projects like the Colored Conventions Project at the University of Delaware (https://coloredconventions.org) focus on surfacing documents and agency, not constructing models of human action.

More broadly, a wide variety of work is not about data per se, expanding the notion of what digitally oriented scholarship can be. If I had to single out one institution today, I would point to the University of Richmond, with work like the masterful American Panorama, an atlas of U.S. history edited by Robert Nelson and Edward Ayers, or Lauren Tilton and Taylor Arnold’s work on Photogrammar (https://photogrammar.org), a project about already digitized photos at the Library of Congress. And work that does use data, like Robert Lee and Tristan Ahtone’s accounting of how American universities benefited from land expropriated from Native Americans in the nineteenth century (Lee et al., “Land Grab Universities”), models a practice of presenting datasets not as statistical aggregates but as individual points that would be overwhelming in their number in any medium other than the digital.

This kind of attention to audience, to reordering, and to narrative engagement evolved in part because of the weight of TotC. This—not warmed-over introductory econometrics—is the real contribution to intellectual life that digital humanities stands to make. And it is one that historians, with their multiple sources and strong subfield of public history, are better positioned to execute than any other field in the digital humanities—although not exclusively, I would add. North American literary scholars like Stephen Ramsay and the late Stefan Sinclair have also impressively modeled a form of literary engagement based on exploration, interaction, and play (Ramsay, Reading Machines).

The language of scientism—of which the emphasis on “intervention” and “argument” are, I would argue, part—dangles a promise that we can finally end conversations in the humanities by providing firm quantitative evidence. The rich public digital tradition in history offers, by contrast, a set of tactics and models for using the new media of communication to make the historical past more present and more reconfigurable and to help it speak to a variety of different circumstances. This work—computational but not quantitative, historical before it is historiographical—may not be the future of the past, but I hope it will be.

Notes

  1. For the attitudes toward proof of the old humanistic tradition, see David Hoover, “Argument, Evidence, and the Limits of Digital Literary Studies.” On the European tradition, see the “Proceedings of the Workshop on Computational Humanities Research (CHR 2020) Amsterdam, the Netherlands, November 18–20,” CEUR Workshop Proceedings 2723 (2020), http://ceur-ws.org/Vol-2723/. When the American Historical Association convened a panel on “Computational Cultural History” in New York in 2020, only one of the four panelists was actually an American trained in history; two were European and one a scholar of literature.

  2. I thank Lauren Tilton for referring me to Swierenga’s work, which includes the reference to Murray.

  3. Trevon Logan (@TrevonDLogan), “As promised . . . a thread on how I teach US slavery in my course on American economic history,” Twitter, February 18, 2018, https://twitter.com/TrevonDLogan/status/965262267267846149.

  4. This map is not even hosted by the American Historical Review itself; instead, it sits on a George Washington University server (https://map.beyondragstoriches.digital.library.gwu.edu/?annotation=6&nycbounds=40.728,-74.02%7C40.71,-73.99). The security certificate for the website expired on September 17, 2022; as of January 9, 2024, the site remains accessible only to users willing to ignore their web browser’s warnings about attackers who might steal their personal information.

  5. For an overview, see Bill Shander’s “The Past, Present, and Future of Scrollytelling.” See also Badger et al., “Income Mobility Charts for Girls, Asian-Americans and Other Groups. Or Make Your Own”; Wu et al., “How the Virus Got Out”; and Ford, “What Is Code?”

Bibliography

  1. Anbinder, Taylor, Cormac Ó Gráda, and Simone A. Wegge. “Networks and Opportunities: A Digital History of Ireland’s Great Famine Refugees in New York.” American Historical Review 124, no. 5 (December 2019): 1591–1629. https://doi.org/10.1093/ahr/rhz1023.
  2. Ash, Elliot, Daniel L. Chen, and Suresh Naidu. “Ideas Have Consequences: The Effect of Law and Economics on American Justice.” SSRN Scholarly Paper. June 26, 2017. https://papers.ssrn.com/abstract=2992782.
  3. Badger, Emily, Claire Cain Miller, Adam Pearce, and Kevin Quealy. “Income Mobility Charts for Girls, Asian-Americans and Other Groups: Or Make Your Own.” New York Times, March 27, 2018. https://www.nytimes.com/interactive/2018/03/27/upshot/make-your-own-mobility-animation.html. 
  4. Bellesiles, Michael A. Arming America: The Origins of a National Gun Culture. New York: Alfred A. Knopf, 2000.
  5. Blevins, Cameron. “Digital History’s Perpetual Future Tense.” In Debates in the Digital Humanities 2016, edited by Matthew K. Gold and Lauren F. Klein. Minneapolis: University of Minnesota Press, 2016. http://dhdebates.gc.cuny.edu/debates/text/77.
  6. Brennan, Sheila. “Public First.” In Debates in the Digital Humanities 2016, edited by Matthew K. Gold and Lauren F. Klein. Minneapolis: University of Minnesota Press, 2016. http://dhdebates.gc.cuny.edu/debates/text/77.
  7. Bridenbaugh, Carl. “Presidential Address. American Historical Association Meeting, the Conrad Hilton Hotel, Chicago, Illinois, December 29, 1962.” American Historical Review 68, no. 2 (January 1963): 315–31. https://www.historians.org/about-aha-and-membership/aha-history-and-archives/presidential-addresses/carl-bridenbaugh.
  8. Brown, DeNeen. “Printing Hate.” 2021. https://lynching.cnsmaryland.org/.
  9. Burnham, Margaret. “Civil Rights and Restorative Justice.” https://crrj.org/reading-room/.
  10. Carroll, Joseph, Dan P. McAdams, and Edward O. Wilson. Darwin’s Bridge: Uniting the Humanities and Sciences. Oxford: Oxford University Press, 2016. http://books.google.com?id=UxA9DAAAQBAJ.
  11. Cohen, Daniel J., and Roy Rosenzweig. Digital History: A Guide to Gathering, Preserving, and Presenting the Past on the Web. Philadelphia: University of Pennsylvania Press, 2006.
  12. Colored Conventions Project, University of Delaware. https://coloredconventions.org/.
  13. Da, Nan Z. “The Computational Case against Computational Literary Studies.” Critical Inquiry 45, no. 3 (March 2019): 601–39. https://doi.org/10.1086/702594.
  14. English, James F., and Ted Underwood. “Shifting Scales: Between Literature and Social Science.” Modern Language Quarterly 77, no. 3 (September 2016): 277–95. https://doi.org/10.1215/00267929-3570612.
  15. Fogel, Robert, and Stanley L. Engerman. Time on the Cross: The Economics of American Negro Slavery. 2 vols. New York: W. W. Norton, 1974.
  16. Ford, Paul. “What Is Code?” Bloomberg Businessweek, June 11, 2015. https://www.bloomberg.com/graphics/2015-paul-ford-what-is-code/.
  17. Franzosi, R., G. De Fazio, and S. Vicari. “Ways of Measuring Agency: An Application of Quantitative Narrative Analysis to Lynchings in Georgia (1875–1930).” Sociological Methodology 42, no. 1 (2012): 1–42. https://doi.org/10.1177/0081175012462370.
  18. Funk, Kellen, and Lincoln Mullen. “The Spine of American Law: Digital Text Analysis and U.S. Legal Practice.” American Historical Review 123, no. 1 (2018).
  19. Gallon, Kim. “Making a Case for the Black Digital Humanities.” In Debates in the Digital Humanities 2016, edited by Matthew K. Gold and Lauren F. Klein. Minneapolis: University of Minnesota Press, 2016. http://dhdebates.gc.cuny.edu/debates/text/77.
  20. Goldin, Claudia. “Cliometrics and the Nobel.” Journal of Economic Perspectives 9, no. 2 (June 1995): 191–208. https://doi.org/10.1257/jep.9.2.191.
  21. Gottschall, Jonathan. “Afterword.” In Darwin’s Bridge: Uniting the Humanities and Sciences, edited by Joseph Carroll, Dan P. McAdams, and Edward O. Wilson, 217. Oxford: Oxford University Press, 2016.
  22. Gutman, Herbert George. Slavery and the Numbers Game: A Critique of Time on the Cross. Urbana: University of Illinois Press, 1975.
  23. Haskell, Thomas L. “The True & Tragical History of ‘Time on the Cross.’” New York Review, October 2, 1975. https://www.nybooks.com/articles/1975/10/02/the-true-tragical-history-of-time-on-the-cross/.
  24. Hilt, Eric. “Economic History, Historical Analysis, and the ‘New History of Capitalism.’” Journal of Economic History 77, no. 2 (June 2017): 511–36. https://doi.org/10.1017/S002205071700016X.
  25. Hoover, David. “Argument, Evidence, and the Limits of Digital Literary Studies.” In Debates in the Digital Humanities 2016, edited by Matthew K. Gold and Lauren F. Klein. Minneapolis: University of Minnesota Press, 2016. http://dhdebates.gc.cuny.edu/debates/text/77.
  26. Johnson, Jessica Marie. “Markup Bodies: Black [Life] Studies and Slavery [Death] Studies at the Digital Crossroads.” Social Text 36, no. 4 (137) (December 2018): 57–79. https://doi.org/10.1215/01642472-7145658.
  27. Lee, Robert, Tristan Ahtone, Margaret Pearce, Kalen Goodluck, Geoff McGhee, Cody Leff, Katherine Lanpher, and Taryn Salinas. “Land Grab Universities.” High Country News (2020). https://www.landgrabu.org/.
  28. Leon, Sharon. “Complexity and Collaboration.” In The Oxford Handbook of Public History, edited by Paula Hamilton and James B. Gardner. Oxford: Oxford University Press, 2017.
  29. Leon, Sharon, and Tom Scheinfeldt. Bracero History Archive. https://braceroarchive.org/.
  30. Mawdsley, Evan, and Thomas Munck. Computing for Historians: An Introductory Guide. Manchester: Manchester University Press, 1993.
  31. Murray, William G. “An Economic Analysis of Farm Mortgages in Story County, Iowa, 1854–1931.” Iowa Agriculture and Home Economics Experiment Station Research Bulletin 12, no. 156 (1933): art. 1.
  32. Nelson, Robert, and Edward Ayers, eds. American Panorama. University of Richmond. https://dsl.richmond.edu/panorama/.
  33. Ngu, Ash, and Sophie Cocke. “Hawaii’s Beaches Are Disappearing.” ProPublica/Honolulu Star-Advertiser, December 29, 2020. https://projects.propublica.org/hawaii-beach-loss/.
  34. Olmstead, Alan L., and Paul W. Rhode. “Cotton, Slavery, and the New History of Capitalism.” Explorations in Economic History 67 (January 2018): 1–17. https://doi.org/10.1016/j.eeh.2017.12.002.
  35. Parry, Marc. “Quantitative History Makes a Comeback.” The Chronicle of Higher Education, February 25, 2013. https://www.chronicle.com/article/Quantifying-the-Past/137419.
  36. Ramsay, Stephen. Reading Machines: Towards an Algorithmic Criticism. Champaign: University of Illinois Press, 2011.
  37. Ruggles, Steven. “The Transformation of American Family Structure.” American Historical Review (February 1994): 103–28.
  38. Schmidt, Benjamin. “Scrollership: A New Word for Some New Ways of Writing.” In DH2020 Book of Abstracts, edited by Laura Estill, Jennifer Guiliano, and Constance Crompton. Alliance of Digital Humanities Organizations, 2020.
  39. Shander, Bill. “The Past, Present, and Future of Scrollytelling.” Nightingale: Journal of the Data Visualization Society, August 25, 2020. https://nightingaledvs.com/the-past-present-and-future-of-scrollytelling/.
  40. Silbey, Joel H. “Clio and Computers: Moving into Phase II, 1970–1972.” Computers and the Humanities 7 (1972): 67–79.
  41. Stevenson, Bryan. “Lynching in America.” 2015. https://lynchinginamerica.eji.org/about.
  42. Swierenga, Robert P. “Clio and Computers: A Survey of Computerized Research in History.” Computers and the Humanities 5, no. 1 (1970): 1–21.
  43. Thomas, William G., III. “Computing and the Historical Imagination.” In A Companion to Digital Humanities (Blackwell Companions to Literature and Culture), edited by Susan Schreibman, Ray Siemens, and John Unsworth. Oxford: Blackwell Publishing Professional, 2004. http://www.digitalhumanities.org/companion/.
  44. Thomas, William G., III, and Edward L. Ayers. “The Differences Slavery Made: A Close Analysis of Two American Communities.” American Historical Review 108, no. 5 (December 2003).
  45. Wu, Jin, Weiyi Cai, Derek Watkins, and James Glanz. “How the Virus Got Out.” New York Times, March 22, 2020. https://www.nytimes.com/interactive/2020/03/22/world/coronavirus-spread.html.

Copyright 2024 by the Regents of the University of Minnesota