The Knotted Sign: Poetics of Illegibility

El signo anudado: poética de lo ilegible

Elvira Blanco Santini (Investigadora independiente)

Artículo recibido: 09-03-2017 | Artículo aceptado: 18-04-2017

RESUMEN: Se podría argumentar que la legibilidad precede a cualquier preocupación por la poética, porque: ¿cuáles son las poéticas de algo que no podemos entender? Sin embargo, nuestra interacción con la tecnología digital nos expone constantemente a la ilegibilidad intrínseca a sus operaciones. El objetivo de este ensayo es reflexionar sobre la ilegibilidad desde tres perspectivas: la definición de la legibilidad como un régimen cultural eurocéntrico, la exploración de la poética de lo legible por máquina frente a lo legible por el hombre y la proposición de que estamos ante un régimen cada vez más ubicuo de ilegibilidad, que no se limita a la escritura. Después de esta revisión vagamente cronológica de la historia moderna de la ilegibilidad, intentaré responder: ¿Qué puede significar lo ilegible como recurso expresivo?
ABSTRACT: One might argue that legibility precedes any concern about poetics, for what are the poetics of something we cannot understand? However, our interaction with digital technology constantly exposes us to the illegibility intrinsic to its operations. The aim of this essay is to reflect on illegibility from three perspectives: the definition of legibility as a Eurocentric cultural regime, the exploration of the poetics of the machine-readable as opposed to the human-readable, and the proposition that we are facing an increasingly ubiquitous regime of illegibility that is not limited to writing. After this loosely chronological review of the modern history of illegibility, I will attempt to answer: What can the unreadable mean as an expressive resource?

PALABRAS CLAVE: Ilegibilidad, poética, reconocimiento óptico de caracteres, glitch, aprendizaje automático
KEY WORDS: Illegibility, poetics, optical character recognition, glitch, machine learning


1. The Regime of the Legible

In «The Encyclopedist and the Peruvian Princess,» an essay included in The History of the Book and the Idea of Literature, researcher and professor of French culture and literature Lorraine Piroux proposes that the «regime of legibility» was consolidated in Europe with the French Enlightenment, along with the physical changes that the book as medium and object underwent at the time (2006: 107). Books were being made in portable sizes and printed with clearer structures thanks to typographical technologies; certain mechanisms of text organization, such as indentation and dashes to indicate dialogue, «made the semantic architecture of the text immediately available» (Piroux, 2006: 107). In this regard, Piroux points to the deep reach of legibility: how formal values that shape the mode of reading facilitate access to the «substance» of the text (108). For the Encyclopedists, the contents of a book should be as transparent as possible, without linguistic excesses, written in clear philosophical language, with systematic definitions and, if possible, accompanied by illustrations. Piroux says:

The efforts of the writers, the editors, and the publishers of the period to develop unprecedented standards of legibility represented something more than formal or technical innovations… They demonstrate the belief that the success of the Enlightenment project rested on the printed book’s ability to bring its readers into close and unhindered proximity with thought and ideas, or, to put it differently, on its ability to create the illusion of a purely semantic text. (2006: 108)

Thinking through the regime of the legible, it would appear that the Encyclopedists supported the publication of texts so transparent that they might render the materiality of the written sign invisible. In their eagerness to adapt writing to the book format, and to shorten writing as much as possible to save space, the West banished symbols and gave primacy to the «signified object»: the notion of a metaphysical text, «a derealized, disincarnated, and invisible verbal sign» (Piroux, 2006: 12) without the excesses of «literariness.» In response to the imperative of transparency, adds Piroux, some writers began to embrace the materiality of the written sign through literature, taking inspiration from forms of Mesoamerican, Inca and ancient Egyptian writing (110).

Figure 1. Khipu – Universidad de San Martín de Porres, Lima, Perú (Source: Khipu Database Project <http://khipukamayuq.fas.harvard.edu/>)

To illustrate this turn toward symbolic forms, Piroux looks to French writer Françoise de Graffigny and the quipu, which plays a key role in her novel Lettres d’une Péruvienne (1747), about an Inca princess who learns the Western alphabet in France and forsakes native forms of communication. The quipu is a textile record-keeping device, usually made with cotton or camelid-fiber strings. According to letters from the Spanish colonization and some later accounts, the Incas knotted the strings of the quipu to record quantitative data (censuses, taxes), as well as songs, genealogies, and other types of narratives. Experts also note that the values recorded in quipus cannot be literally translated into Quechua; some believe their use was entirely mnemonic, sensory, and nonverbal (Urton, 1998: web). What could be further from the purely semantic text that the Encyclopedists dreamed of? In the quipu, content (the meaning) is inseparable from the medium (rope): language and its materiality are knotted together. In alphabetic writing, materiality (the paper or surface) is metaphysically separate from language (the letter that is written or carved on the surface): «alphabetic script reduces the text to some thing of a trace, infinitely closer to thought than to the paper object that receives it» (Piroux, 2006: 118). In Graffigny’s novel, when the Inca princess adopts the economy of Western language, she also renounces the possibility of expressing her story and feelings, leaving behind the poetic: «literariness.»

Piroux’s considerations are a starting point to reflect on the illegible. She speaks of it not as something unfathomable, but as a script that does not conform to the regime of transparency established during the Enlightenment. As visual or pictorial language forms were «opaque,» did not have an alphabet, and had to be deciphered, they immediately made apparent the materiality and the «literariness» of the text; this is entirely opposed to the idea of legibility as a practically immaterial reading. On the other hand, based on Graffigny’s observations, we could also affirm that there is something poetic in the unreadable.

2. The Regime of the (De)Codifiable

What we consider illegible is illegible to whom? There is something essentially colonialist in characterizing non-Western forms of writing as opaque and as blocking access to thought, against the supposedly inherent readability of the Western alphabet. These perceptions operate within a Eurocentric view of language, so it seems nonsensical to ask whether one formal mode of writing is intrinsically legible while others are not. A more productive discussion emerges, however, from current situations in which human readability is put into question by computational processes.

Virtually any operation involving calculation or writing now happens on a computer. I write this essay with a QWERTY keyboard on a laptop, and there is no doubt that the bulk of what is written today passes through a word processor. There is no space here for an archeology of the interface or a history of software, but it should be noted that if we could read the information a computer processes without an interface, we would find it very difficult to extract meaning from it: from zeros and ones to electrical impulses, language becomes energy and numbers. The interface exists to render computer operations in a version a literate human can read. These operations respond, in turn, to our commands: a command decoded by a processor and encoded as a communicable product.
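The gap is easy to make concrete. In Python, one can inspect the layers the interface hides: the same word as human-readable text, as bytes, and as the zeros and ones the machine actually traffics in.

word = "poética"

utf8 = word.encode("utf-8")                 # the text as bytes
bits = " ".join(f"{b:08b}" for b in utf8)   # the bytes as zeros and ones

print(utf8)   # b'po\xc3\xa9tica'
print(bits)   # 01110000 01101111 11000011 10101001 ...

The bits are the same word, and yet, stripped of the interface, they are meaningful without being readable.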

Plenty of examples illustrate the decoding and encoding processes that happen through computer programs. I will take one in which the unreadable is particularly visible. Ocrad.js is an OCR (Optical Character Recognition) program that converts scanned images of text back into machine-encoded text. This is useful, for instance, to turn a scanned page into an editable document or, less benignly, to defeat security barriers implemented with CAPTCHAs. Depending on its engine, Ocrad.js can «learn» new languages and identify similarities between letters, but it also has serious limitations. Often, what is easily readable to the human eye is not readable at all to the program, which simply returns nothing. Large discrepancies can also exist between the input and the reading done through OCR. This raises interesting questions: Can a machine do a «bad» or «good» reading of a text, if it is oblivious to semantic value? Could we say that «readable» means «decodable,» so that the illegible is whatever we are not «programmed» to decode? We often use expressions like «the computer reads» (a file). If we continue this line of questioning, we will inevitably ask what it means for a program to understand something, an issue that exceeds the limits of this essay. Two things, however, are worth noting: the OCR program decodes alphabetic text and returns it as alphabetic text, but it can also identify («read») letters where, for the purposes of human intelligence, there are none.
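The round trip is easy to reproduce with any OCR engine. A minimal sketch in Python, using the pytesseract library as a stand-in for Ocrad.js (both expose the same basic image-to-text operation):

from PIL import Image, ImageDraw
import pytesseract  # stand-in OCR engine; my experiments below use Ocrad.js

# Render a word as an image, then ask the OCR engine to "read" it back.
img = Image.new("RGB", (240, 60), "white")
draw = ImageDraw.Draw(img)
draw.text((10, 20), "poética", fill="black")

print(repr(pytesseract.image_to_string(img)))
# The output may or may not match the input; the discrepancy between
# what we wrote and what the machine "read" is precisely the point.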

The following images show some of my interactions with OCR, exploring its capacities and mistakes:

Figure 2. Here, I have written «poética,» but the program recognized «pOÉfilp»

Figure 3. Here I drew a squiggle that was recognized as the letter «m.»

Figure 4. The program did not recognize any letter when I wrote «lee.»

Reverse OCR is a Twitter bot created by artist Darius Kazemi, whose personal username is, quite appropriately, @tinysubversions <http://tinysubversions.com/>. A bot is an application that performs automated and usually repetitive tasks on the Internet, from tweeting a phrase or hashtag over and over to playing mahjong with human beings.

As its title suggests, Kazemi’s project runs Ocrad.js in reverse, which allows us to observe OCR-based «reading» from another point of view. Reverse OCR chooses a word from its repertoire and draws random lines until Ocrad.js recognizes the word successfully. Then the bot tweets a snapshot of what it «wrote» along with the actual word. In virtually all cases, the bot’s writing is completely unintelligible to humans, and yet we know Ocrad.js was able to relate it to a real word.
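A minimal reconstruction of the procedure in Python (simplifying freely, and again substituting pytesseract for Ocrad.js) might look like this:

import random
from PIL import Image, ImageDraw
import pytesseract  # stand-in for Ocrad.js

def reverse_ocr(word, attempts=100_000):
    """Draw random strokes until the OCR engine reads `word` in them."""
    for _ in range(attempts):
        img = Image.new("RGB", (300, 100), "white")
        draw = ImageDraw.Draw(img)
        for _ in range(random.randint(3, 8)):      # a handful of random lines
            draw.line([random.randint(0, 299), random.randint(0, 99),
                       random.randint(0, 299), random.randint(0, 99)],
                      fill="black", width=4)
        if pytesseract.image_to_string(img).strip().lower() == word:
            return img   # unintelligible to us, yet legible to the machine
    return None

drawing = reverse_ocr("haiku")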

Below are some examples taken from @reverseocr and the Reverse OCR Tumblr <http://reverseocr.tumblr.com/>:

Figure 5. “Subtlety”

Figure 6. “Diaspora”

Figure 7. “Haiku”

Figure 8. “Brethren”

Kazemi calls himself an Internet artist. Most of his projects are generators and art bots that work on Twitter or Tumblr. The «universe» of art bots on Twitter is quite diverse, but they generally work in two ways: some operate through interaction with users or other bots, while others are «self-sufficient» and simply run their algorithm through a database, like Reverse OCR.
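A «self-sufficient» bot of the second kind reduces to little more than a repertoire and an algorithm on a loop. A minimal Python sketch (the names here are placeholders; a real bot would call the platform’s API where post() appears):

import random
import time

WORDS = ["subtlety", "diaspora", "haiku", "brethren"]  # the bot's "database"

def post(message):
    """Placeholder for a real API call (e.g. to Twitter)."""
    print("posting:", message)

# A self-sufficient art bot: no interaction, just an algorithm on a loop.
while True:
    post(random.choice(WORDS))
    time.sleep(3600)  # one post per hour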

By the very nature of their programming, the poetics of bots tends to be based on repetition and permutation. Once we know how Kazemi’s bot works, we can try to imagine how many attempts it took to achieve the recognition of each word; personally, I find this a somewhat alienating idea, as it evokes the randomness and repetitiveness of the process, both notions that I associate with computation. In any case, Reverse OCR makes evident that, since we now speak of a machine’s ability to «read,» legibility is no longer culturally considered exclusive to human intelligence. The execution of automated tasks that we think we understand involves decoding and encoding processes that are unreadable to humans.

The notion of legibility within the regime of the (de)codifiable is closely related to the ability to decode certain languages. In fact, the notion of meaning in the context of the legible can also be extended to encompass the «experience» of machines. A Python script, for example, is written with alphabetic characters that humans can read. Still, the optical reading a person can make of that script does not yield the same meaning it has for the program: for the program, a successful reading is manifested in carrying out a command. A programmer might even understand what the script is designed to achieve, but her reading cannot access the «substance» of that particular text.
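A trivial illustration: the lines below are optically readable to any literate person, but their «substance» for the machine lies entirely in what happens when they run.

# For a human, five lines of text; for the interpreter, a successful
# "reading" is the act of carrying out the commands.
total = 0
for n in range(1, 11):
    total += n
print(total)   # 55: the machine's reading *is* the execution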

The issue of illegibility in the context of computation can be even more complex. We might say that there is a resonance between the Encyclopedists’ imperative of semantic transparency and what constitutes «elegant» programming code: accuracy, clarity, and an absence of «ornaments» that hinder the effective implementation of a task. Where alphabetic writing separates language from materiality, and pre-Hispanic visual languages knotted the sign to its materiality, computer languages tie the sign to its execution. A mistake in a given piece of code renders it unreadable to the machine, and the task cannot be performed: it is a failure of language. However, in the margins and grey areas of faulty executions and half readings, our eyes have been trained to read and deal with error, even to turn it into an expressive resource.
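The failure of language is easy to stage: one unclosed parenthesis makes an otherwise trivial Python script unreadable to the interpreter.

# One unclosed parenthesis and the "text" fails as language:
# the interpreter refuses to read it at all.
code = "print('hello'"

try:
    compile(code, "<essay>", "exec")
except SyntaxError as err:
    print("machine-unreadable:", err)
    # e.g. machine-unreadable: '(' was never closed (<essay>, line 1)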

3. Expression Through Error

In this section, I will focus on the unreadable expressions of technology, specifically those that are not entirely understood by humans, or that are at least resignified by them. The digital reading error is impractical, but to stumble upon one is to discover a small subversion of a process expected to be effective (as effectiveness is the purpose of automation); this is precisely why it generates fascination.

In his article «Aesthetics of the Error: Media Art, the Machine, the Unforeseen, and the Errant,» University of Glasgow Professor Tim Barker argues that the postdigital era is marked by the condition for error:

In the condition where machinic systems seek the unforeseen and the emergent, there is also a possibility for the unforeseen error to slip into existence. This condition can be seen in the tradition of artists using the error […] as a creative tool. (Barker, 2011)

In an interview with Magda Tyzlik-Carver for the blog ecologies of intimacy, digital artist Miyö van Stenis reflects on the practices that take advantage of this «condition for error» to make art:

From the philosophical position, I believe that “the error” or “glitch” is the clearest meeting point between humans and machines/technology. Technology reflects the fact that humans want to create perfection, something that works in harmony with our commands and no matter what it always is expected to look and work perfectly to our satisfaction. If Nietzsche and Hakim Bey questioned the need for God, why can’t we play to destroy the proud son of human beings, the extensions of our senses and ironically what controls us [sic]. This glitched relationship is a perfect dialectic, see beauty when all fails. (Van Stenis, 2016: web)

In short, technical error can be a means of expression, and many digital artists use the unreadable/defective to build their discourse.

ELIZA was a chatterbot written by Joseph Weizenbaum between 1964 and 1966. It ran a script called DOCTOR, through which it pretended to be a psychotherapist, using basic natural language processing and a short repertoire of responses to interact with its «human patient.» If it exhausted its repertoire, ELIZA simply resorted to a generic answer: for instance, if the user wrote «My head hurts,» ELIZA answered «Why do you say that your head hurts?» It remains a popular case study, not only because it was one of the first chatterbots, but also because, even though they knew it was a program, most users could not help relating to it as if it were a real, human therapist.
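The mechanism behind that exchange is simple keyword spotting and pronoun reflection. A toy sketch in Python (not Weizenbaum’s actual DOCTOR script) captures the flavor:

import re

REFLECTIONS = {"my": "your", "i": "you", "am": "are", "me": "you"}

def eliza_reply(statement):
    """Echo the patient's words back as a question, ELIZA-style."""
    words = [REFLECTIONS.get(w.lower(), w.lower())
             for w in re.findall(r"[\w']+", statement)]
    return "Why do you say that " + " ".join(words) + "?"

print(eliza_reply("My head hurts"))
# Why do you say that your head hurts?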

Artist Daniel Temkin created the Entropy programming language in 2010. In his own words, he wanted to explore how programming reinforces compulsive habits, so he designed a language in which data gradually decays, forcing the programmer to forsake precision and control as chaos ensues. Temkin then rewrote ELIZA in Entropy, keeping its logic and austerity intact; the result is Drunk Eliza.

Below is a screenshot of a brief conversation with Drunk Eliza:

Figure 9. Drunk Eliza

Drunk Eliza gives the same laconic answers as the original ELIZA, but makes typing «mistakes» that create the illusion of being under the influence of alcohol; even her mistakes make sense within the QWERTY scheme (the wrong letters are not far from the correct ones). As the conversation develops, her writing becomes increasingly erratic, to the point that her sentences become difficult to understand.
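The mechanism can be sketched without Temkin’s Entropy language: wrong letters drawn from adjacent QWERTY keys, with an error rate that grows as the conversation goes on. A rough Python approximation:

import random

# A small excerpt of adjacent keys on a QWERTY keyboard
NEIGHBORS = {"a": "qwsz", "e": "wrsd", "h": "gjnb", "o": "ipkl",
             "s": "awedxz", "t": "rfgy", "u": "yihj", "y": "tughj"}

def drunk_type(text, drunkenness):
    """Corrupt text with QWERTY-adjacent typos; more drunk, more typos."""
    out = []
    for ch in text:
        if ch.lower() in NEIGHBORS and random.random() < drunkenness:
            out.append(random.choice(NEIGHBORS[ch.lower()]))
        else:
            out.append(ch)
    return "".join(out)

for turn in range(1, 5):               # legibility decays turn by turn
    print(drunk_type("Why do you say that your head hurts?", 0.12 * turn))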

Drunk Eliza is built with a language designed to reach illegibility. While its implementation is correct, Temkin’s ultimate goal is to generate an increasingly opaque reading experience (until, in fact, the bot collapses). Temkin thus subverts our expectations of both a chatterbot and a programming language, which are meant to generate or sustain a conversation, not to complicate it. At the same time, he humanizes ELIZA’s error: whether machines can think or learn is still being debated, but it is less common to discuss whether they can get drunk. Drunk Eliza reads and decodes our statements, but in its inebriation finds it difficult to encode appropriate responses.

Although not exclusively related to the writing or reading of text, I wish to dwell on the glitch, because glitch art practices let us examine in detail the use of technical error as an expressive resource. Generally speaking, the computational glitch is a minor operational error. It does not preclude the use of the device or program, but it hinders their success. Although a glitch is not necessarily visual, it is common to associate the term «glitch» with graphic glitches: aberrant lines, stacked-up characters, blocks of color, frozen motion, misshapen textures, and any other elements that distort the image to some degree. Glitch art employs the aesthetics of the digital graphic error as a means of expression through the intentional corruption of a file.
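The basic databending technique is simple: open a file, overwrite arbitrary bytes, and let the decoder misread the result. A minimal Python sketch (the filenames are placeholders; skipping the first bytes keeps the header intact so the image still opens):

import random

def glitch(path_in, path_out, hits=20, header=512):
    """Corrupt an image file by overwriting random bytes past the header."""
    data = bytearray(open(path_in, "rb").read())
    for _ in range(hits):
        pos = random.randrange(header, len(data))  # leave the header intact
        data[pos] = random.randrange(256)          # one byte of pure noise
    open(path_out, "wb").write(bytes(data))

glitch("input.jpg", "glitched.jpg")   # hypothetical filenames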

Figure 10. Movie theater glitch. Courtesy of Juan Manuel Acosta.

Figure 11. Laptop glitch. Courtesy of Jesús Torrivilla.

Figure 12. BlackBerry camera glitch. Courtesy of Alissa Lovera.

If the corruption of a digital image is an artistic technique, each glitch artist builds her own discourse through it (see below, for instance, a work by Corina Lipavsky). However, one must also consider that this technique is part of the corruption/intervention of a symbolic universe that transcends individual discourse: code as language, interface as a medium that can be read. In his article «gli†CHED IN †RAN$LA†ION: Reading †ex† and Code as a Play of $Paces,» Matt Applegate, professor of Digital Humanities at Molloy College, argues:

The visual representation of glitch art is the simultaneous ambiguation and disambiguation of code’s seamless operation. This is to say that code’s work is simultaneously obfuscated and made manifest where the glitch is made visible, subjecting both the visualization and obfuscation of its function to interpretation as an aesthetic process. (2016: web)

Much as Reverse OCR manifests the «reading» (and writing) of a program, the glitch expresses those same processes framed within the condition for error. The difference is that, while each of Reverse OCR’s drawings is a well-executed task, each glitch is a failure. Glitch art therefore works with a poetics of the error, which in turn implies a two-way illegibility: within the machine producing the glitch and, in the case of the graphic glitch, on the screen or surface where we see it. If we approach glitch art as a practice with a series of strategies and techniques for generating intentional errors, we might say that the discourse of illegibility underlies the individual discourse of the artist.

Figure 13. Nostalgia © Corina Lipavsky (2014)

4. Reading Noise

I have referred to the machine’s reading as an action that involves processing a series of commands and executing them. When an error occurs in this process, we can speak of a machine-unreadable command and/or a human-unreadable result. But what happens when we are required to interpret something that is illegible both to the machine and to the human eye? Indeed, some machines are now set to study the unreadable.

In «A Sea of Data: Apophenia and Pattern Misrecognition,» German researcher and artist Hito Steyerl argues that, as we are surrounded by electric charges, radio waves, and light pulses encoded by machines for machines, (human) vision has lost ground to other capabilities such as filtering, decrypting, and «apophenia» (Steyerl, 2016: 1). Apophenia, says Steyerl, is the perception of patterns within random data, patterns that might be connected only through perceptual simultaneity (2). Her essay starts from the premise that, to the extent that «we are drowning in a sea of data» (generated and collected by technology), it has become essential to find patterns or intelligible shapes within this ocean. Deep learning experiments stem from this urgent search: machines are trained to see images emerge. Steyerl refers to the case of Google’s Deep Dream project, which she characterizes as «pure and conscious apophenia» (2016: 9).

Google researchers designed a training process that involves showing a neural network (which typically consists of 10 to 30 layers of artificial neurons) millions of images and adjusting the parameters of the network until it can categorize the images according to the criteria of the research team. Each image is first fed into the base layer, which then «talks» to the next one, until the signal reaches the layer that generates the output. At each point of this «conversation,» the network extracts more and more detailed information about the image: the base layer may look for edges and corners, the intermediate layers seek general forms or components («like a door or a leaf»), and the final layer interprets all the previous information and decides what the image is. These neural networks can later apply their learning to pure noise, identifying faces and other patterns and classifying the resulting images (Mordvintsev, Olah, & Tyka, 2015: web).

Once more, we can understand the process more clearly if we subvert its linear operation. For example, it is possible to find out what constitutes a banana for a neural network by showing it pure noise and then tweaking that noise until the network finds a banana in the image. This helps reveal the limitations of the software, but it also shows that a number of preconditions are at work when the program decides whether an image belongs in a category.
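This subverted operation can be sketched with standard tools. The following Python fragment (a rough approximation using PyTorch and a pretrained ImageNet classifier, not Google’s actual Deep Dream code; the class index is an assumption) starts from pure noise and nudges the pixels until the network’s «banana» score climbs:

import torch
from torchvision import models

# Start from pure noise and ask: what does the network need to see
# before it calls this a banana?
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

BANANA = 954  # ImageNet class index for "banana" (assumed here)
img = torch.randn(1, 3, 224, 224, requires_grad=True)   # pure noise
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(img)[0, BANANA]   # how strongly the net "sees" a banana
    (-score).backward()             # gradient ascent on the banana score
    optimizer.step()
# `img` now holds the network's hallucinated banana: its preconditions for
# the category, teased out of noise. (Google's method adds jitter,
# smoothing, and multi-scale passes.)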

Figure 14. Source: Google Research Blog, Inceptionism: Going Deeper into Neural Networks

According to Steyerl, what neural networks «see» in the noise reveals the pre-established criteria under which they operate, their «preferences and ideologies» (2016: 9). Researchers can transmit their own preferences and tendencies when educating the vision of an unprejudiced entity (a criticism often made of image recognition software); this might be expressed, for instance, in the images chosen for training. On the other hand, programs like Google Deep Dream also manifest the vices of machinic vision: over-identifying patterns, or forcing the identification of faces and «useful» data where there are none. Steyerl also notes that, as software like Deep Dream does not discern the context of the image, it ultimately identifies «a new totality of aesthetic and social relations» (9). Presets and stereotypes are applied to an image even if they have nothing in common with it, resulting in over-interpretation. The example below illustrates this point: a network trained with animal pictures identifies animal-like shapes in a slightly cloudy sky, though obviously its interpretation does not apply.

Figure 15. Source: Google Research Blog, Inceptionism: Going Deeper into Neural Networks

All of this leads to an intriguing question: Is it productive to exploit these readings of the unreadable to the maximum, even if this leads to the fabrication of meaning?

5. The Possibility of the Knotted and the Knotting-To-Do

Illegibility is a complex concept, at once cultural and intimate, broad and intricate. It is applicable to text, image, code, and noise. It is abstract but tangible. It carries a stigma: it keeps knowledge away from us, generates errors, and hinders the execution of tasks. It also contains possibilities: a string of wool can tell the story of an Inca city, clouds can be animals or boats, and important discoveries may be hidden in the countless communications intercepted by state security agencies. It can also lead to fascinating over-interpretations of reality (think of discovering human faces on the moon), but those over-interpretations can be dangerous when they are made on personal data and lead to real-life consequences.

But illegibility is not only interesting for what it can reveal when (if) decrypted. We often interact with the illegible; it is no stranger to us. It might be productive to overcome any anxiety about the unreadable and accept the possibility of half-assed, unstable, even corrupt signs. Illegibility offers us ample possibilities for reading and creation because it is not restricted by an absolute and transparent signification; otherwise, what could we say about glitch art, with its lack of interest in solving errors? And what about the intimacy between sign and materiality in the Inca quipu, and its potential to embrace the «opacity» of emotion? Perhaps, by the standards of Western knowledge, «dark» semantics betrays reason and efficiency, but it resonates where reason does not apply and not everything is transparent. This reflection speaks to embracing illegibility as a possible discourse. In the words of poet, critic, and geologist Lisa Radon: “There is the logical, analytical web of connections, yes. But there is also weird touching, the connection of that which does not logically or historically connect, and this is the promise of poem, the promise of brain-crossing, errant hyperlinks…” (Radon, 2016: web).

Within the illegible, the sign is knotted onto itself, and entwined with the impressions, emotions and connections that it invites. At the same time, the poetics of illegibility lies in the semantic distance between the sign and its meaning. It is in the unraveling of the resulting knot that felt meanings emerge.

6. Works Cited

Applegate, Matt (2016). “GLî†CHÉD IN †RAN$LA†ION: Rèading †ex† and Codè as a Plaÿ of $pacés.” Amodern 6: Reading the Illegible. <http://amodern.net/article/glitched-in-translation/>. (3-1-2017).

Barker, Tim (2011). “Aesthetics of the Error: Media Art, the Machine, the Unforeseen, and the Errant.” Ed. Mark Nunes. Error: Glitch, Noise, and Jam in New Media Cultures. New York: The Continuum International Publishing Group. pp. 42-59.

Mordvintsev, Alexander, Christopher Olah and Mike Tyka (2015). “Inceptionism: Going Deeper Into Neural Networks.” Google Research Blog. <https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html>. (2-15-2017).

Piroux, Lorraine (2006). “The Encyclopedist and the Peruvian Princess: The Poetics of Illegibility in French Enlightenment Book Culture.” PMLA 121 (1): The History of the Book and the Idea of Literature. <https://www.jstor.org/stable/25486291>. (2-10-2017).

Radon, Lisa (2016). “Interview with Eleanor Ford.” Rhizome. <https://rhizome.org/editorial/2016/jul/20/artist-profile-lisa-radon/>. (3-5-2017).

Temkin, Daniel. “Entropy.” DanielTemkin.com. <http://danieltemkin.com/Entropy/>. (3-7-2017).

Urton, Gary (1998). “From Knots to Narratives: Reconstructing the Art of Historical Record Keeping in the Andes from Spanish Transcriptions of Inka Khipus.” Ethnohistory 45 (3). <http://www.jstor.org/stable/483319> (2-10-2017).

Van Stenis, Miyö (2016). “Internet Lovers♥ ♥ ♥ ♥ Naked Flux.” ecologies of intimacy. <https://ecologiesofintimacy.wordpress.com/2016/05/11/internet-lovers♥-♥-♥-♥-naked-flux-interview-with-miyo-van-stenis-and-michael-borras-aka-systaime/>. (2-22-2017).
