The epistemological basis for the modern critical edition is fundamentally taxonomic: it assumes a notion of prior simplicity, whereby the proliferation of textual variants, naturally distributed across manuscripts and inherent in the idiosyncratic nature of manuscript production, descends vertically from an original common source. Also generally assumed is a monogenetic origin from a single parent. Both assumptions prove highly problematic for understanding medieval Arabic and Persian book culture. The messy reality of multiple recensions that inhabit medieval manuscripts as testaments to the collaborative process of textual production may be preserved, in part, in the form of a critical apparatus within an edition. In the process of mechanical reproduction, this multivalent record of dissemination is displaced largely into the space of the margins. However, as with any act of translation, the technology of the printed page produces both a surplus and a deficit of meaning. While codicological cacophony may be lost or marginalized, what is gained is the ability to telegraph this information to an even broader audience.
In this ever-iterated process of surplus and deficit, many of the digitally searchable forms of medieval Arabic and Persian archival material available today exhibit the complete removal of the critical apparatus, if one ever existed, and with it any semblance of this polyphonic reception history. Likewise, what is available, whether digitally or in print, is usually based on the narrow selection of what has been edited. Significant parts of this reception history have been lost in the nineteenth- and twentieth-century constructions of medieval Arabic and Persian writings. Furthermore, the medium of transmission, from manuscript to printed page to searchable text, inevitably shapes not only what information is presented but how it is accessed; this in turn guides both reading practices and modes of analysis. In this paper I draw on examples from medieval Arabic and Persian manuscripts, along with their print and digital forms, to explore the process of loss and recovery structuring technologies of transmission.
Author: Travis Zadeh (Haverford College)
Scholarship on the Arab world, as in other regions, is always haunted by the absent voices of those who cannot be heard. Our understanding of events and our perspective on times and places are always skewed by the uneven record that comes to us for interpretation. At first blush it may appear that the spread of internet access and the rise of social media, in particular Facebook, through which anyone can distribute reams of information and images globally at little or no cost, mitigate this problem. However, such technologies bring their own technical and ethical challenges. I propose to address some of these challenges through a discussion of what I have described as an indigenous digital humanities project: a Facebook group called “Mukhayyam al-sumud al-usturi tal al-Za`tar.” Created by survivors and descendants of the 1976 siege and destruction of the Tal al-Za`tar refugee camp in Beirut, the site aims to serve as a node in the network of former residents of the camp, who are now globally dispersed, as well as a repository for images, documents, and crowd-sourced reconstructions of memories and geographies. The site (and others like it) and its contributors may serve as a rich source for scholars interested in creating more authoritative repositories or digital reconstructions of this and other neighborhoods and towns that were erased or irrevocably altered during the violence of the Lebanese civil war. However, they, too, are marked by dominant voices and aesthetics that may skew our understanding of the past.
Author: Nadia Yaqub (Univ. of North Carolina – Chapel Hill)
In 2010, a grant from the New York University Abu Dhabi Institute launched the Library of Arabic Literature, a book series that aims to publish key works of pre-modern and classical Arabic literature in bilingual editions, with the Arabic edition and English translation on facing pages. The General Editor of the series is Philip Kennedy, Associate Professor of Arabic at New York University, who is aided by an eight-member Editorial Board of scholars of Arabic and Islamic studies. The five-year grant envisioned an initial library of thirty-five published books, with translations to be done by scholars of Arabic. It also specified an XML-first production system to ensure maximal flexibility in future digital uses of the Arabic texts and English translations. The series is published by New York University Press, which drew on its previous experience in bilingual publishing through the now-defunct Clay Sanskrit Library series.
As Managing Editor of the Library, I will present in this paper an overview of the experiences of the Library of Arabic Literature series in the past two years, particularly with respect to the digitization and XML-tagging of Arabic texts. The first three books have just been published, and we are currently confronting the challenges of rendering Arabic text correctly on commercially available e-readers. Eventually, once we have a critical mass of published books, the Library of Arabic Literature hopes to make the full series accessible as a searchable electronic corpus. In this paper, I hope to highlight and share some of the insights the Editorial Board and I have gleaned through our work on this series.
Author: Chip Rossetti (Managing Editor, Library of Arabic Literature)
Recent developments in the digital sphere have offered new opportunities and challenges to humanists. Equipped with new digital methods of text analysis, scholars in various fields of the humanities are now trying to make sense of huge corpora of literary and historical texts. Perhaps the most prominent of such attempts is the work of Franco Moretti and his abstract models for literary history that trace long-term patterns in English fiction. Inspired by Moretti’s approach, I seek to develop abstract models for the analysis of pre-modern Arabic historical literature, relying mainly on various text-mining techniques that are being developed at the intersection of statistics, linguistics, and computer science. At the moment, I concentrate primarily on biographical collections, a genre that includes several hundred multi-volume titles (the largest collection, al-Dhahabī’s “History of Islam,” covers 700 years and contains about 30,000 biographies). Working with a corpus of ten biographical collections (about 125 printed volumes; 45,000 biographical accounts), I am developing an analytical tool that can later be used to study other biographical collections, ideally all of them together. In the long run I hope that the results of my work will pave the way for the development of analytical tools for other genres of pre-modern Arabic literature, such as chronicles, ḥadīth collections, interpretations of the Qur’ān, compendia of legal decisions, etc.
In working with these biographical collections, I look primarily at such kinds of biographical data as “descriptive names” (nisbas), dates, toponyms, and, more recently, loosely defined linguistic formulae and wording patterns. The analysis of different combinations of these data allows one to trace various social, religious, and cultural patterns in time and space. I am particularly interested in how the Islamic world changed over the period of 640–1300 CE: how cultural centers shifted; how different social, professional, and religious groups replaced and displaced one another; and how different regions were connected with each other and how these connections changed over time. The results of my analysis will be presented in the form of graphs and geographical maps (some current examples of my work can be found at www.alraqmiyyat.org).
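The kind of aggregation just described can be sketched in a few lines of code. The record format, field names, and sample entries below are hypothetical illustrations, not the project’s actual data model; the sketch simply shows how counting biographies per toponym within fixed time periods could serve as a crude proxy for tracing shifting cultural centers.

```python
from collections import Counter

# Hypothetical biographical records: the real corpus and its markup are not
# described in the abstract, so the fields (nisba, death_ce, toponym) and
# values here are assumptions made for illustration only.
records = [
    {"nisba": "al-Baghdadi", "death_ce": 850, "toponym": "Baghdad"},
    {"nisba": "al-Baghdadi", "death_ce": 890, "toponym": "Baghdad"},
    {"nisba": "al-Nishapuri", "death_ce": 1050, "toponym": "Nishapur"},
    {"nisba": "al-Qurtubi", "death_ce": 1060, "toponym": "Cordoba"},
]

def counts_by_period(records, period=50):
    """Count biographies per toponym within each fixed-length period,
    a crude proxy for how cultural centers shift over time."""
    tallies = Counter()
    for r in records:
        start = (r["death_ce"] // period) * period  # floor to period start
        tallies[(start, r["toponym"])] += 1
    return tallies

tallies = counts_by_period(records)
# e.g. tallies[(850, "Baghdad")] == 2
```

The same tallies could then be joined to coordinates for each toponym and rendered as the graphs and geographical maps mentioned above.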
Author: Maxim Romanov (Univ. of Michigan)
I will discuss approaches to the digitization of Islamic books in order to explore its impact on Islamic and Middle East Studies, drawing on my research on the manuscript-print transition in Muslim societies within the context of technology transfer across Eurasia.
The digitization of books is widely accepted because the digital processing of written language is merely the latest technology used for the display and storage of texts. E-books are on the verge of making the printed book an obsolete object, since for readers access to texts is all that matters. But the naturalization of the e-book is accompanied by the risk of diverting resources from the preservation of material artifacts. Not every digital text has metadata that link the digital copy to a physical original whose whereabouts and provenance are known. Moreover, the long-term costs of digitization are rarely considered, even though the future functionality of digital surrogates, despite their immaterial appearance on our screens, depends on continued investment in hardware and software, as well as in human labor.
The ubiquitous use of digitization by a wide range of institutions reflects the fact that scholars, libraries, and grant-making agencies, such as CLIR and the Imam Zayd Cultural Foundation, employ digitization for reasons quite different from those of commercial publishers. In North America the digitization of Islamic books is used to facilitate access to rare texts (e.g., Caro Minasian Collection, HathiTrust Digital Library), to preserve endangered cultural heritage (e.g., Afghanistan Digital Library, Yemeni Manuscript Digitization Initiative), or to allow for the crowdsourcing of uncataloged manuscripts (e.g., Collaboration in Cataloging). I will argue that in Islamic and Middle East Studies digitization receives little critical attention because access to a rare text is valued more highly than the historical interpretation of a specific book as material evidence for the transmission of knowledge.
Author: Dagmar Riedel (Columbia University)
Digitally enabled spatial analysis can generate hypotheses, substantiate arguments, and communicate findings at a glance. In this paper, I will demonstrate how spatial analysis reveals the topography of readership in seventeenth-century Istanbul. Using WorldMap has allowed me to collate data from many different sources, including court records, probate inventories, and waqfiyyas, into a single map in order to identify larger patterns. Given the exploratory nature of the conference, I will briefly share the “messy” interim steps I took as well as the more polished maps that resulted. In the remainder of my talk, I will reflect on the promises and limitations of open-source, user-contributed mapping. WorldMap allows anyone to create a map layer that can be combined with other users’ layers. In other words, it holds the potential to facilitate the kind of collaborative work that is said to be a hallmark of the digital humanities. At the same time, my experience as a consultant to digital humanities projects (while working for Ithaka before graduate school) provides some cautionary tales about the sustainability of digital projects and the challenges presented by “user-generated” content.
Author: Meredith Quinn (Harvard University)