The following #MLA19 session will take place on January 3, 2019 at the Hyatt Regency Grand Suite 5 from 7:00–8:15 p.m.
Screen/Post-Screen Digital Humanities
The screen shapes our concept of the digital, and further, the contemporary communication of graphical information. Figures like Johanna Drucker mobilize this fact to chart fluid formations in visual epistemology–pre-screenic, screenic, and post-screenic in nature. “Images have a history,” Drucker writes, “but so do concepts of vision and these are embedded in the attitudes of their times and cultures as assumptions guiding the production and use of images for scientific or humanistic inquiry” (Graphesis 19). Drucker’s observation underscores that visual epistemology and technological production are inextricably linked. The screen may be our primary means of conveying information in the present, but numerous histories of visualization prefigure and exceed the limits of the screen–“different technologies and media play their role in knowledge production as surely as do changes in optical instruments and observational techniques” (Graphesis 21). The full range of technological innovation pertaining to sight produces new cultures of knowledge production alongside their critical reception. Accordingly, this panel aims to stage a critical dialogue in which visual epistemology’s media of articulation and organizational limits are situated as the precondition for doing digital work in the present.
Digital Humanities (DH) are particularly well positioned to conceptualize visual epistemology’s present material conditions. The screen/post-screen dyad invoked in the panel’s title refers to a broad range of questions that DH centers in its discourse. Consider, for example, how the authors of “QueerOS: A User’s Manual” complicate visual epistemology’s present material conditions as they call for ‘exploding the interface.’ There, the authors mobilize the interplay between ‘legibility’ and ‘illegibility’ to conceptualize screen/post-screen mediality as relational, threatening, and transformative. Jentery Sayers’s introductory remarks in his recently published edited collection, Making Things and Drawing Boundaries, trace a similar relation. The experimental character of the text operates between “boundaries drawn and cuts made with the logic of visibility–between what is seen or superficial and what is unseen or hidden: ‘front end’ and ‘back end,’ containers and processors, content and metadata, interface and source code” (8). So too, in Sayers’s anthology, does the experimental character of DH. In a final example, David M. Berry and Anders Fagerjord offer a different perspective on the screen/post-screen dyad in their 2017 Digital Humanities. There, the screen/post-screen dyad signals both a historical relay point from which to situate contemporary DH work and a methodological position that ultimately augments what counts as DH praxis: “digital humanities must be able to offer theoretical interventions and digital methods for a historical moment when the computational has become both hegemonic and post-screenic” (2). The screen/post-screen dyad thus opens the door to critical theory in DH, but it also places a high priority on computationality in the humanities more broadly.
This panel showcases three theory-driven papers that examine visual epistemology’s function and status in DH work. It furthers the experimental ethos of the texts cited above by complicating the relation between human and computer vision, pairing analog technologies with digital ones, and reapproaching the question of legibility/illegibility through a visual-linguistic frame. All three papers are situated at the intersection marked by the screen/post-screen dyad, an intersection formative of contemporary DH method.
Dr. James E. Dobson’s paper explores the screen/post-screen dyad through the interplay of human vision and computer vision. He critiques the suite of computer vision algorithms provided with the “Open Source Computer Vision” or OpenCV package (a package of algorithms that detect and surveil human faces) so as to complicate how DH approaches close and distant reading. Where computer vision functions on the basis of close and distant positionalities, human vision is mobilized in the service of computer vision; it is both a relation of confirmation and one of surveillance. As OpenCV is utilized to detect human faces, Dobson argues that “three pairs of eyes” redefine the field of vision that comprises the close/distant relation: the virtualized gaze of computer vision, the eyes of the researcher, and the eyes of the target subject together generate the ‘imaginary’ site of computer vision. Rather than indicating a neutral relationship between close and distant reading, Dobson’s argument culminates in the claim that popular computer vision algorithms are ideologically driven and draw on historically-specific accounts of meaning making.
Dr. Anne Royston’s paper argues that analog-digital literary works position themselves as meditations on the affordances and limitations of new and old media. She examines four small-edition books that blend print and digital publication, signifying hybrid modes of communication through their materiality and even their poetic or semantic content. Royston analyzes the problematics inherent to the screen/post-screen dyad in her criticism of the texts’ media-specificity. She argues that each text creates an extended chain of reading, requiring QR or AR readers as well as human readers, as the narrative extends beyond both page and screen. Further, these texts evoke questions of interactivity and embodiment as multiple iterations of interactivity are presented to the reader. The paper culminates in a discussion of DH’s conceptualization of embodiment–how the hand manipulates both page and screen–and thus how the screen/post-screen dyad marks an additional material phase of visual epistemology’s operation: that of haptics in a technologically rich narrative landscape.
Dr. Matt Applegate’s paper identifies Unicode exploits in the production of text-based glitch art, framing the practice as a techno-linguistic expression of code-switching. He considers questions of legibility and illegibility by drawing on Nick Montfort’s figuration of ‘obfuscated code’ and Michael Mateas’ concept of ‘weird languages.’ Both concepts operate via multiple linguistic functions, performing tasks in two or more coding schemes simultaneously. These tasks render the linguistic possibility of the code, something that typically goes unseen, apparent to the viewer. Unicode operates similarly: it can be manipulated to create text-images that exceed and break the function of our contemporary interfaces. Applegate argues that this visual-linguistic feature of the interface demands that one conceptualize visual organization and text as a kind of hybrid-grammar. He concludes by arguing that when text-based glitch art explodes the interface, new visual-linguistic schemes become possible, and thus new modes of articulating the difference between legibility and illegibility.
James E. Dobson
“A View from Where? Situating Computer Vision for the Digital Humanities”
Abstract: Building on my previous work on the k-nearest neighbor algorithm (Critical Digital Humanities: The Search for a Methodology, Illinois UP, forthcoming), in this talk I will give a critique of the suite of computer vision algorithms provided with the “Open Source Computer Vision” or OpenCV package. I will first frame the vision of computer vision as a combination of what we might call close and distant reading. These approaches are becoming increasingly popular in general digital culture and in methods used in the digital humanities–from image processing to page image feature analysis. Computer vision complicates the screenic/post-screenic divide in that these techniques work at the image level yet they depend upon human visual feedback to confirm correct operation or to modify behavior. One of the primary uses of these approaches has been the detection of faces and their most important set of features, the eyes. This grouping of three pairs of “eyes”–the virtualized gaze of computer vision, the eyes of the researcher, and the eyes of the target subject–generates the “imaginary” site of computer vision. Rather than providing a neutral, descriptive account of image space, popular computer vision algorithms such as Scale Invariant Feature Transform (SIFT) and homographic approaches to feature matching draw on historically-specific accounts of meaning making. Finally, I’ll provide a critique of OpenCV’s generation and distribution of face training datasets produced via the Viola-Jones algorithm using Haar cascades.
Anne Royston
“Between Page and Screen, Hands: Analog-digital works and their embodied readers”
The gap separating old and new media appears, at times, unbridgeable. In contemporary poetics, however, a recent subset of work challenges that gap through its networked and hybridized forms. Combining analog technologies (letterpress printing, hand-binding) and digital processes (QR or AR capabilities), analog-digital works position themselves as meditations on the affordances and limitations of new and old media. This paper draws together four such pieces: Amaranth Borsuk and Brad Bouse’s Between Page and Screen, Chris Fritton’s Why We Lose Our Hands, J.R. Carpenter’s The Broadside of a Yarn, and Emily Dyer Barker’s “Public Spectacle Essays.” Produced as broadsides or small-edition books by independent artists or small presses, each work blends print and digital practices, exploring alternative and hybrid modes of communication through its materiality and even its poetic or semantic content. They create an extended chain of reading, requiring QR or AR readers as well as human readers. Moreover, the digital in these works goes beyond emphasizing the screen, instead modeling technology as an inevitably haptic endeavor. Analog-digital works thus open a larger discussion for DH that emphasizes the role of embodiment in any technology: the hands that manipulate page and screen.
Matt Applegate
“Exploit, Code-Switch, Glitch”
This paper theorizes the production of text-based glitch art as a practice of code-switching, a linguistic problem that is situated somewhere between thinking code’s representational value on the surface and theorizing the form and organization of our contemporary interfaces. When Unicode encodings from one natural language system are replaced with another, the reading surface often breaks down–Unicode characters can be stacked or closely aligned, allowing their linguistic counterparts to stack, spread, and cohere in a manner that is indifferent to the text box. Akin to what Nick Montfort calls “obfuscated code” and what Michael Mateas calls “weird languages”–a practice that “exploits the syntactic and semantic play of a language to create code that, often humorously, comments on the constructs provided by a specific language” (Mateas)–text-based glitch art draws out what is unseen and hidden (the function of source code) by exploding the interface. How can interfaces function if text is able to disarticulate its system of organization? Positioning this exploit as a practice of code-switching demands that one conceptualize the interplay of interface and text as a kind of visual hybrid-grammar, where code, natural languages, and their interfaces are co-implicated in the visual communication of information. In short, when text-based glitch art explodes the interface, new visual-linguistic schemes become possible, and thus new modes of articulating the difference between legibility and illegibility via their technological instantiation.
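The stacking behavior the abstract describes can be sketched concretely. The following is a minimal Python illustration of how Unicode combining marks attach to a base character and pile up beyond the line box (the mechanism behind so-called “Zalgo” text); the particular marks chosen and the `glitch` helper are illustrative assumptions, not drawn from the paper itself:

```python
# Illustrative sketch: stacking Unicode combining marks onto base
# characters. Combining diacritics have no width of their own; renderers
# draw them above or below the preceding character, so stacking them
# makes text overflow its text box while its visible width stays fixed.

MARKS_ABOVE = ["\u0300", "\u0301", "\u0302", "\u0303"]  # grave, acute, circumflex, tilde
MARKS_BELOW = ["\u0316", "\u0317", "\u0318"]            # combining marks below

def glitch(text, depth=3):
    """Attach `depth` above/below mark pairs to every character of `text`."""
    out = []
    for ch in text:
        out.append(ch)
        for i in range(depth):
            out.append(MARKS_ABOVE[i % len(MARKS_ABOVE)])
            out.append(MARKS_BELOW[i % len(MARKS_BELOW)])
    return "".join(out)

glitched = glitch("glitch")
# The code-point length balloons (6 marks per character) while the
# number of visible character cells stays the same.
print(len("glitch"), len(glitched))  # prints: 6 42
```

The point of the sketch is the asymmetry it exposes: the encoded string and the rendered surface come apart, which is precisely the gap between legibility and illegibility that text-based glitch art exploits.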