The Eviction of the Human from Human Interest: The Case of Mechanically Generated Text and Textual Analysis

El desahucio de lo humano en los intereses humanos: el caso de los textos automatizados y el análisis textual

Adrian Nathan West (Asymptote Journal)

Artículo recibido: 13-03-2013 | Artículo aceptado: 08-05-2013

ABSTRACT: In recent years, automation has encroached upon “soft knowledge” fields long considered the exclusive preserve of human agents, particularly in the production and analysis of texts framed in natural language. Like most technological innovations, automation has been embraced with minimal skepticism: mainstream voices have assumed that new technologies, while changing the type of work available, will continue creating new jobs to replace those they render obsolete, and harsh criticism has been confined mainly to the ideological fringes. There is reason to believe that this optimism is unjustified with respect to the automation of intellectual labor, which may prove to have pernicious consequences both for the market economy and for human values that yield only poorly to abstract calculation.
RESUMEN: En los últimos años, la automatización ha invadido zonas del “conocimiento blando” tradicionalmente consideradas como de dominio exclusivo para agentes humanos, sobre todo en cuanto a la producción y análisis de textos enmarcados en el lenguaje natural. Como en la mayoría de las innovaciones tecnológicas, la automatización ha sido adoptada con escaso escepticismo: las corrientes generales han asumido que las nuevas tecnologías, pese a que cambian el tipo de trabajo disponible, seguirán creando nuevos empleos que sustituirán a los obsoletos y las críticas duras se han visto restringidas principalmente a márgenes ideológicos. Hay razones para creer que este optimismo en torno a la automatización del trabajo intelectual no está justificado, lo que podría tener consecuencias perniciosas tanto para la economía de mercado como para los valores humanos que no rinden bien bajo el cálculo abstracto.

KEYWORDS: natural language, computer language, boolean logic, semantics, artificial intelligence
PALABRAS CLAVE: lenguaje natural, lenguaje computacional, lógica booleana, semántica, inteligencia artificial

____________________________

1. In a certain way, the history of the progress of human knowledge can be seen as the supersession of the intuitive by the quantitative and as the slow sloughing-off of the epistemological systems that have favored the former at the expense of the latter.  This process has not been univalent.  Account must be taken both of the fundamental importance of the Eureka or Aha effect in artistic and scientific progress and of the ease with which the misapplication of methodologies of a declaredly scientific character can contribute to catastrophic delusions that a more commonsense approach might have avoided, viz. the untrammeled multiplication of the nominal value of subprime mortgage derivatives in the lead-up to the 2008 financial crisis, now recognized to be in part the responsibility of flawed risk assessments delivered by algorithms of the sort that now dictate upwards of 70% of trade volume on Wall Street (Perkins, 2000: 3-24; Dodson, 2008; Salmon and Stokes, 2011).  Still, it is unarguable that quantitative approaches to problem-solving have a significant forensic advantage:  to a great degree, their steps can be retraced, errors recognized, and corrections made, whereas in the case of creative endeavors that have up to the present been dominated by intuition—art and literature, criticism, or diplomacy, for example—there is scant evidence of progress’s having been made, or even of the possibility of such progress; for this reason, whereas the works of Paracelsus are a curiosity for the modern chemist, Plato and Erasmus are still deemed at least as relevant as Žižek or Jonathan Culler[1].

While the line of demarcation between intuitive and quantitative methods of problem-solving has shifted throughout human history, notably in the great upheavals of the scientific revolution, a measure of stability seems to have obtained from the time of Darwin to the dawn of the twenty-first century regarding these two approaches and their proper domains, popularly described as science and culture.  It is my contention in the present article that the science/culture distinction famously proffered by T.H. Huxley and C.P. Snow, among others, represents not a robust conceptual distinction, but rather a vague restatement of the shopworn dichotomies of body/soul, art/craft, spiritual/physical, and perhaps even mind/computer, dichotomies that advancements in such fields as informatics and cognitive science are on the verge of rendering obsolete (Huxley, 1881: 1-23; Snow, 1964: 1-44).

If this thesis is correct, a number of questions become pressing:  does there exist a domain of pure culture to which science is by definition barred access?  If not, is the artificial handicapping of science desirable or even possible?  What are the ultimate costs of untrammeled efficacy?  Do trial and error have an existential value irreducible to mere utility?

Given that even a cursory examination of the inroads made by technical sciences into fields as disparate as psychology, customer service, esthetics, and sports recruiting would be impossible here, I will narrow my focus to two spheres widely considered exclusively human, dependent for their vitality on intuition and resistant to quantification:  the production and analysis of natural language.

2.  Early approaches to the computerization of natural languages were remarkable at once for their pessimism and their guilelessness.  Whereas the popular mind readily accepted the idea of robots endowed with sense-organs and self-referentiality, albeit of a standoffish sort, both in science fiction and in exaggerated news accounts of the reach of artificial cognition, Hubert Dreyfus, in his famous and influential What Computers Can’t Do (1972), was already asserting that “the boundary may be near” with respect to computers’ problem-solving capacities (xxvii-xxix; 139).  The popular mind seems easily to have made the transition from superstitious belief in the mystic powers of occult entities to a similarly shrouded faith in the omnipotence of science (Stenmark, 1997: 15; 17; 29-30).  Dreyfus’s objections arise not only from a failure to foresee the contributions lateral thinking would make to the progress of informatics, whereby simple solutions to problems in artificial intelligence have often proven more robust than their complex counterparts, but also from a radical underestimation of the advances in processing power and storage that would arrive in the decades following his book’s release.  In examining the possibilities for computerized language, for example, Dreyfus repeatedly invokes the limits of data storage and retrieval with what appears almost endearing naivety today, when Google’s Ngram initiative disposes of a corpus of over one trillion words in numerous languages (Dreyfus, 1972: 49, 129, 193; Zimmer, 2012).

Since the publication of Dreyfus’s book, enormous advances have been made.  Mobile natural-language interfaces like Siri and Google Voice transcribe information delivered at natural speed and modulation, respond to complex commands such as making a restaurant reservation or sending an email to cancel an appointment, and can be trained to offer context-appropriate information when these commands cannot be executed.  Further, just as cloud computing has relieved personal computing devices of the need to store the immense amounts of data and processing power necessary for translation and other nuanced natural-language tasks, it has also given rise to a situation in which billions of users constantly relay data about the patterns governing how they read, talk, travel, and purchase, data that are utilized in the construction of ever-subtler algorithms mapping human behavior (Morphy, 2010; Kadushin, 2012: 196-198).

The challenges of computational linguistics fall broadly into two categories:  the analysis and the production of specimens of natural language (Grishman, 1986: 8).  The following examples give some idea of the current state of progress therein.  Advances in analysis range from index-oriented programs of the sort first applied to the poetic analysis of rhythm and meter by Harry and Grace Logan in the late 1970s, and the word-frequency technologies used by Hugh Craig and Arthur Kinney to establish the disputed provenance of writings by Shakespeare and Marlowe, to more robust systems like IBM’s Watson, which processed ASCII files of questions posed in natural language, many employing puns, metaphors, and other types of ambiguity, to best erstwhile champions of the popular game show Jeopardy in a string of matches in 2011 (Logan; Craig and Kinney, 2009: 15-40; Jackson, 2011).  Against the objection that these achievements represent the mere parsing of dry facts and not the approximation of “soft knowledge” widely considered an exclusive property of human beings, one should consider, among other things, the work of Kelley Conway on voice-pattern recognition and personality classification, which has been used to streamline customer-service interactions and thwart phishing scams, or, more apropos, Jürgen Schmidhuber’s theory of the low-complexity artwork, which attempts to describe the simple algorithmic underpinnings of subjective beauty, and which he is working to expand, via the concept of developmental robotics, into self-teaching, self-motivated machines capable of independent artistic production and scientific problem-solving (Steiner, 2012: 118-122; Schmidhuber, 1997: 97-103; 2006: 173-187).
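
The word-frequency method that Craig and Kinney apply to the Shakespeare and Marlowe corpus can be suggested in a minimal sketch. The marker words, sample passages, and distance measure below are invented for illustration and bear no relation to their actual data or statistics; real stylometry draws on hundreds of function words and far more robust measures.

```python
from collections import Counter

# Toy function-word stylometry: attribute a disputed passage to whichever
# candidate author's marker-word profile it more closely resembles.
# The marker list and the "texts" are invented for this example.
MARKERS = ["the", "and", "of", "to", "in"]

def marker_profile(text):
    """Relative frequency of each marker word in a text."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[m] / total for m in MARKERS]

def distance(p, q):
    """Manhattan distance between two frequency profiles."""
    return sum(abs(a - b) for a, b in zip(p, q))

known_a = "the king and the queen of the realm spoke to the court in anger"
known_b = "to be free of doubt and to act in haste is the mark of youth"
disputed = "the duke and the bishop of the city came to the hall in secret"

d_a = distance(marker_profile(disputed), marker_profile(known_a))
d_b = distance(marker_profile(disputed), marker_profile(known_b))
verdict = "A" if d_a < d_b else "B"  # here the disputed lines resemble author A
```

The interest of such methods lies in their forensic transparency: every attribution can be traced back to countable features of the text.
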
As concerns the production of natural languages, innovations range from the amusing, such as the Proppian folktale generators concocted at Brown University in the late 1990s, to the uncanny—a particularly germane example of the latter being the products of Narrative Science, a company that provides computer-written news articles to organizations including Forbes and the Big Ten Network, a prominent sports broadcaster (Krajeski, 2009; Lohr, 2011). Its output, far from clunky and wayward, mimics a conversational tone and is effectively indistinguishable from that of an ordinary human journalist. The founders of Narrative Science, Kris Hammond and Larry Birnbaum, co-directors of Northwestern University’s Intelligent Information Laboratory, have made the claim that a computer program using their software will win a Pulitzer Prize for journalism within the next five years (Lohr, 2011).
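
The logic of a Proppian generator can likewise be suggested in a few lines: a tale is a fixed sequence of Propp’s narrative functions, each realized by a sentence drawn from a hand-written inventory. This is a hypothetical toy, not the Brown University program, and the inventories are invented for the example.

```python
import random

# Toy Proppian tale generator: Propp's narrative "functions" become slots,
# each filled from a small hand-written inventory of sentences.
FUNCTIONS = [
    ("villainy",  ["A dragon burns the village.", "A witch steals the harvest."]),
    ("departure", ["The hero sets out at dawn.", "The hero leaves in secret."]),
    ("struggle",  ["The hero defeats the foe.", "The hero outwits the foe."]),
    ("return",    ["The hero returns in triumph.", "The hero comes home unknown."]),
]

def generate_tale(seed=0):
    """Produce one tale; a seed makes the choices reproducible."""
    rng = random.Random(seed)
    return " ".join(rng.choice(options) for _name, options in FUNCTIONS)

tale = generate_tale(seed=1)
```

What such generators lack, of course, is any pressure toward significance: the sequence is well formed whether or not it is worth telling.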

Perplexingly, Hammond and Birnbaum have shrugged off the obvious threat to career journalists that their software implies, saying it will serve to augment the offerings of firms with tight editorial budgets rather than infiltrate larger journalistic concerns (Lohr, 2011).  Their sunny predictions fall in line with those of Stephen Ramsay, whose recent book Reading Machines stresses the liberatory aspects of algorithmic criticism and its possibilities for augmenting and enriching older forms of hermeneutics (16-17).  In contrast to their optimism, Christopher Steiner, author of a widely read overview of the expanding role of algorithmic approaches to markets and human psychology, gives this blunt, and perhaps more realistic, assessment: “The ability to create algorithms that imitate, better, and eventually replace humans is the paramount skill of the next one hundred years” (17).

It may be salutary to recall that whereas narratives concerning technology tend toward the impossibly rosy or the outlandishly dystopian—both tendencies undoubtedly reflecting the ineptitude of human psychology with respect to predictioneering and futurology, two fields where algorithmic approaches have made startling strides[2]—the drastic shifts in modes of production with which we are most familiar, namely the Second and Third Industrial Revolutions, were accompanied by stagnation or reduction of the living standards of the lower reaches of society as well as by the creation of upper classes disposing of previously inconceivable wealth (Lindert, 2000: 12-13, 18-24; More, 2000: 139-147; Atkinson, 1999: 3-7).  It remains to be seen whether measures of the kind that ameliorated the conditions of the disenfranchised in the welfare states of the mid-twentieth century, many of which have been dismantled in pursuit of fiscal austerity, will return in some form; but it is in any case true that the so-called Luddite fallacy[3], like the idea of the Malthusian catastrophe, cannot be called wrong simply because it has not yet come to pass, and that the idea, not that there are some things humans will always do better than robots and computers, but rather that there are enough of them to support full employment of a planetary population now exceeding seven billion, appears increasingly naïve.

Pessimism about the capacity of machines to intrude upon the fields of soft knowledge, a term ordinarily construed to include our stated themes of text production and analysis, has tended to rely on the idea that machines process but do not understand; that, in the words of John Searle, inventor of the famous Chinese Room[4] argument and tenacious critic of the possibilities of mechanized reasoning, “syntax is not semantics” (Searle, 1980: 418-423; Searle, 2009).  Searle’s objections rest on a number of misconceptions.  First, it must be averred that in many cases, human syntactic thought also lacks a graspable semantic content:  a human actor using a mathematical table to attain a result has more in common with a computer than with a person engaged in unaided contemplation; the same may be said of a person repeating a cliché.  Further, it has proven difficult, perhaps impossible, for philosophers to establish a robust concept of semantic meaning or to demarcate its presence or absence in given instances of expressed thought, to the point that some have questioned whether the notion of semanticity should not be dispensed with altogether (Gauker, 2003: 98, 114, 192-127).  Finally, Searle’s argument imagines semantic meaning as a communal experience, as a transmission of definitive, meaning-rich concepts from one mind to another, ignoring the fundamental role of reception in the establishment of semantic meaning.  As Brian Boyd notes in his study of the evolutionary basis of narrative construction, “humans overdetect agency… And we will interpret something as a story if we can.”  This has been shown to be the case even among test subjects asked to describe the permutations of randomly generated geometric figures (Boyd, 2009: 137-9).
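
Searle’s scenario is easily caricatured in code: a program that returns contextually apt strings of Chinese characters by rule-following alone, with no grasp of what the symbols mean. The two-entry rulebook below is an invention for illustration; nothing about the mechanism would change as the table grew.

```python
# A purely syntactic "Chinese Room": input symbols are mapped to output
# symbols by lookup alone. The entries are invented examples; the glosses
# in the comments are for the reader, not for the program.
RULEBOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am well"
    "你会说中文吗": "会一点",  # "Do you speak Chinese?" -> "A little"
}

def room(symbols):
    """Apply the rulebook; fall back to a stock reply for unknown input."""
    return RULEBOOK.get(symbols, "请再说一遍")  # "Please say that again"

reply = room("你好吗")
```

Whether scaling such a table up to open-ended conversation would ever amount to understanding is precisely the point in dispute.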

3.  As early as 1666, Gottfried Wilhelm Leibniz dreamt of a numeric language that would reduce the intractable ambiguities of natural speech to series of bifurcations represented by the numbers zero and one.  In the nineteenth century, George Boole elaborated his analogous intuitions into the logic that bears his name, which forms the basis of modern computing.  Yet Boole did not view logic as an abstract system coincidentally apt for the construction of calculating machines, but rather as the underpinning of thought itself (Boole, 1854: 1-16; 311-328).

It is not yet clear whether the ambiguity of human thought and behavior is an irreducible property or whether it can be made to yield to the and-, or-, and not- functions that Claude Shannon wedded to Boolean logic, thereby enabling modern computing; it is unlikely that the textured urgency of human thought, with its unbreachable connection to care and embodiment, can be a property of a machine; to this extent, the cynicism of Dreyfus and Searle is justified (Steiner, 2012: 73-74).  Yet none of this suffices to say that computers will not one day compose poems as affecting as those of Wordsworth, or generate criticism as arch and original as that of Karl Kraus.  In the field of music, it may be claimed with some justification that they have achieved perfection (Steiner, 2012: 89-102).  If Wittgenstein is right to say “Everything that can be thought at all can be thought clearly,” and qualities like wit, appositeness, and depth of feeling are not in essence ethereal, but rather subject to definition and analysis, then it is possible to render them in numeric code and reproduce them artificially; to do so, one need not endue a program with spirit, as opponents of Artificial Intelligence presume; all that is required is that the assumptions underlying the program be accurate, and its architecture of sufficient flexibility to take account of possible variables (Wittgenstein, 1922: 53).
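
Shannon’s reduction, whereby arbitrary calculation is assembled from the and-, or-, and not- functions, can be made concrete with a standard textbook construction (included here purely as illustration, not drawn from the sources cited): a one-bit half adder built from those three primitives alone.

```python
# Binary addition from AND, OR, and NOT alone, illustrating how arithmetic
# reduces to Boolean logic. A standard textbook construction.
def AND(a, b): return a and b
def OR(a, b): return a or b
def NOT(a): return not a

def XOR(a, b):
    # exclusive or: (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two bits; return (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)
```

Chained through a carry line, such adders yield multi-bit arithmetic, and from arithmetic the rest of computation follows.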

4.  When considering a world where computers may be responsible for both the production and the analysis of text, it is reasonable to ask what, if any, essential relationship obtains between humanity and these two activities.  Insofar as the entelechy that compels human endeavor can be described as of a natural sort, in contrast to the deliberateness which until the present has defined this property in computers, it may safely be said that the vital relationship between human life and the written word will not vanish entirely.  There is no guarantee, however, against its suffering an attenuation of a profoundly detrimental character.  The advent of machined goods led to a loss of manual ingenuity; print, to a debasement of the art of memory, which the constant accessibility of internet databases now threatens to eviscerate; in fields such as gastronomy and oenology, decisions once distributed among a broad range of participants, made according to the dictates of climate, custom, inherited knowledge, and personal idiosyncrasy, are now allotted to the research and development sectors of an ever-smaller number of companies that depend increasingly on technology and automation (Bohannon, 2011: 277; Wallace and Kalleberg, 1982: 307-324; Patterson, 2011).  According to the logic of technology, which is also the logic of markets, there is no justification for the persistence of human agents in activities more cheaply or precisely performed by machines.

Whatever one’s ideological orientation, the progress of automated decision-making cannot be viewed as class-neutral.  To the extent that it subverts those who sustain their economic wellbeing through the cultivation of knowledge or skills, it favors their economic dispossession, and is likely to encourage developments reminiscent of the rentier capitalism[5] widely decried among left-leaning economists (Harvey, 2003: 186-187).  For automation, having no claim to the profits its labor generates, stands, in terms of its advantages for owners, in the same relation to the wage-earner as a slave, with the added benefit that its constant exploitation evokes no pangs of conscience, and the costs of maintaining it are much lower. This leads, apparently inevitably, to the so-called problem of effective demand:  the inability of capitalism to sustain itself by resort to markets composed of laborers from whom surplus value must be exacted in order to render a commodity profitable.  As David Harvey notes, capitalist states have for the most part failed to respond to this contradiction through an expansion of social justice programs that supervene on the primacy of the market; instead, they have resorted to stop-gaps:  the offshoring of labor, the opening up of emerging markets, the privatization of state assets, the credit bubble, etc.  In Harvey’s words, “The fundamental theoretical conclusion is: capital never solves its crisis tendencies, it merely moves them around” (Harvey, 2010).

Whether the automation of labor will represent a terminal crisis point or merely another hiccup in the onward drive of growth-oriented market capitalism remains unclear; but it may have other, less tangible but more ominous implications for human consciousness.  Automation strives for speed and precision; human beings work at a variable pace, and their results are inconsistent.  Is the unquestioning substitution of the former for the latter compatible with the ends of human life?  It is possible that the texture of existence, its bittersweetness, is indistinguishable from the heuristic value of error, failure, and uncertainty; that frailty and inefficiency comprise occasions for economically irrational but very deep human values, the defense of which against the mounting passivity that appears to be the hallmark of the digital revolution is worthwhile, if not imperative.  This, at least, is one suggestion of current research into the nature of happiness, intelligence, and life-satisfaction, which has emphasized the deleterious effects of passivity on mental health and self-image (Howell et al., 2011: 1-15; Robinson and Martin, 2008: 569-571).

Works Cited

Atkinson, Anthony B. (1999). Is Rising Income Inequality Inevitable? A Critique of the Transatlantic Consensus.  Helsinki:  UNU/WIDER.

Bohannon, John (2011, 15 July). “Searching for the Google Effect on People’s Memory”.  Science 333 (6040): pp. 277.

Boole, George (1854).  The Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probability.  Cambridge:  Macmillan & co. Also:  Project Gutenberg.  <http://gutenberg.org/ebooks/15114>. (28-3-2013).

Boyd, Brian (2009).  On the Origin of Stories:  Evolution, Cognition, and Fiction.  Cambridge, Mass:  Belknap/HUP.

Bueno de Mesquita, Bruce (2009).  The Predictioneer’s Game:  Using the Logic of Brazen Self-Interest to Predict and Shape the Future.  New York:  Random House.

Cohen, Martin (2007).  101 Philosophy Problems, 3rd Edition.  Abingdon:  Routledge.

Dodson, Sean (2008, 15 October).  “Was Software Responsible for the Financial Crisis?”.  The Guardian. <http://www.guardian.co.uk/technology/2008/oct/16/computing-software-financial-crisis>. (23-4-2013).

Dreyfus, Hubert (1972).  What Computers Can’t Do:  A Critique of Artificial Reason.  New York:  Harper & Row.

Ford, Martin (2009).  The Lights in the Tunnel. <http://www.thelightsinthetunnel.com>.(23-4-2013).

Gauker, Christopher (2003).  Words Without Meaning.  Cambridge, Mass:  MIT Press.

Gay, Volney (2009).  Progress and Values in the Humanities: Comparing Culture and Science.  New York:  Columbia University Press.

Grishman, Ralph (1986).  Computational Linguistics:  An Introduction.  Cambridge:  Cambridge University Press.

Harvey, David (2010, 16 August).  “The Enigma of Capital and the Crisis This Time”.  DavidHarvey.org  <http://davidharvey.org/2010/08/the-enigma-of-capital-and-the-crisis-this-time/> . (28-4-2013).

Harvey, David (2003).  The New Imperialism.  Oxford:  Oxford University Press.

Howell, Ryan T., David Chanot, Graham Hill and Colleen J. Howell (2011).  “Momentary Happiness:  The Role of Psychological Need Satisfaction”.  Journal of Happiness Studies 12: pp.  1-15.

Huxley, Thomas Henry (1881).  Science and Culture, and Other Essays.  London and New York:  Macmillan. Also: Google Books.  Web (27-4-2012).

Jackson, Joab. (2011, 16 February). “IBM Watson Vanquishes Human Jeopardy Foes”.  PC World. <http://www.pcworld.com/article/219893/ibm_watson_vanquishes_human_jeopardy_foes.html>. (23-4-2013).

Kadushin, Charles. (2012).  Understanding Social Networks:  Theories, Concepts, and Findings.  Oxford:  Oxford University Press.

Craig, Hugh and Arthur Kinney (2009).  Shakespeare, Computers, and the Mystery of Authorship.  Cambridge:  Cambridge University Press.

Krajeski, Jenna (2009, 5 January).  “Once Upon a Time 2.0”. The New Yorker.  <http://www.newyorker.com/online/blogs/books/2009/01/fairytale-20.html>. (27-4-2013).

Lindert, Peter H. (2000).  “When Did Inequality Rise in Britain and America?”.  Journal of Income Distribution 9 (1): pp. 11-22.

Lohr, Steve (2011, 10 September).  “In Case You Wondered, a Real Human Wrote this Article”.  The New York Times.  Web.  (27-4-2012).

More, Charles (2000).  Understanding the Industrial Revolution.  London:  Routledge.

Morphy, Erika (2010, 1 January).  “Creepy Ways Your Social Media Data Can Be Used”.  Tech News World. <http://www.technewsworld.com/story/69158.html>. (23-4-2013).

Patterson, Tim (2011, February).  “Do We Still Need Winemakers?”.  Wines and Vines. <http://www.winesandvines.com/template.cfm?section=columns_article&content=83178&columns_id=24>. (29-4-2013).

Perkins, David (2000).  The Eureka Effect:  The Art and Science of Breakthrough Thinking.  New York:  W.W. Norton & co.

Ramsay, Stephen (2011).  Reading Machines:  Toward an Algorithmic Criticism.  Urbana-Champaign:  University of Illinois Press.

Rifkin, Jeremy (1995).  The End of Work.  New York:  Putnam.

Robinson, John P. and Stephen Martin (2008). “What Do Happy People Do?”. Social Indicators Research 89 (3): pp. 565-571.

Salmon, Felix and John Stokes (2011, January).  “Algorithms Take Control of Wall Street”.  Wired.  <http://www.wired.com/magazine/2010/12/ff_ai_flashtrading/>. (23-4-2013).

Schmidhuber, Jürgen (1997).  “Low-Complexity Art”.  Leonardo, Journal of the International Society for the Arts, Science, and Technology 30 (2): pp. 97-103.

Schmidhuber, Jürgen (2006).  “Developmental Robotics, Optimal Artificial Curiosity, Creativity, Music, and the Fine Arts.”  Connection Science 18 (2): pp. 173-187.

Searle, John R (2009, 15 March).  “Machines Like Us interviews Paul Almond”.  Machines Like Us. <http://machineslikeus.com/machines-like-us-interviews-paul-almond.html>. (10-3-2013).

Searle, John. R. (1980). “Minds, brains, and programs”.  Behavioral and Brain Sciences 3 (3): pp. 417-457.

Silver, Nate (2012).  The Signal and the Noise:  Why Most Predictions Fail but Some Don’t.  New York:  Penguin.

Snow, C. P. (1964).  The Two Cultures.  Cambridge:  Cambridge University Press, 1998.

Steiner, Christopher (2012).  Automate This:  How Algorithms Came to Rule our World.  New York:  Penguin.

Stenmark, Mikael (1997).  “What is Scientism?”.  Religious Studies 33: pp. 15-32.

Wallace, Michael and Arne L. Kalleberg (1982, June).  “Industrial Transformation and the Decline of Craft:  The Decomposition of Skill in the Printing Industry”.  American Sociological Review 47 (3): pp. 307-324.

Wittgenstein, Ludwig (1922).  Tractatus Logico-Philosophicus.  New York:  Barnes and Noble, 2003.

Zimmer, Ben (2012, 18 October).  “Bigger, Better Google Ngrams: Brace Yourself for the Power of Grammar”.  The Atlantic.  <http://www.theatlantic.com/technology/archive/2012/10/bigger-better-google-ngrams-brace-yourself-for-the-power-of-grammar/263487/>. (23-4-2013).

 

Caracteres vol.2 n.1


Notes:

  1. For an entertaining examination of perennial and perhaps irresolvable dilemmas in philosophy, see Martin Cohen’s 101 Philosophy Problems; for a more sustained treatment of the possibility of progress in the humanities, Volney Gay’s Progress and Values in the Humanities: Comparing Culture and Science is a serviceable introduction.
  2. The term predictioneering is associated with the game theorist Bruce Bueno de Mesquita, widely credited with predicting the Second Intifada, the post-Tiananmen Square crackdown on dissidents, and other major political events; it has also been used to describe the work of Nate Silver, the former baseball statistician who rose to fame for his nearly flawless state-by-state predictions of the outcome of the United States’ 2012 presidential race.
  3. The term “Luddite Fallacy” describes the belief that technological progress is a cause of systemic unemployment. It is derided among many contemporary economists, although the belief that the current wave of technical innovation might differ qualitatively from those of the past, and that automation may eventually provoke a bona fide labor crisis, has gained some ground. For further clarification, see Martin Ford’s The Lights in the Tunnel and Jeremy Rifkin’s The End of Work.
  4. The Chinese Room is a thought experiment proposed by John Searle as an analogy to artificial intelligence. Searle imagines an English speaker with no understanding of Chinese confined to a room with a set of instructions for combining Chinese symbols to provide appropriate answers to written questions posed to him from outside the room by a Chinese speaker. Following the program, Searle says, the English speaker could produce correctly formed utterances in Chinese; yet his ability to do so would not signify a knowledge of Chinese. Searle’s argument questions the validity of the famed Turing test, which ascribes a measure of artificial intelligence to machines capable of producing utterances in natural language that cannot be distinguished from those produced by human beings.
  5. Rentier capitalism signifies the reaping of profit from rents derived from ownership as opposed to enterprise.

Caracteres. Estudios culturales y críticos de la esfera digital | ISSN: 2254-4496 | Salamanca