Topic-Specific Discussion of Principles of New Media in The Language of New Media

Automation – Third Principle of New Media

See page 32 in The Language of New Media

The Third Principle of New Media is introduced as follows:

“The numerical coding of media (principle 1) and the modular structure of a media object (principle 2) allow for the automation of many operations involved in media creation, manipulation and access. Thus human intentionality can be removed from the creative process, at least in part.”

Although it is certainly true that many aspects of the new media are facilitated by automated processes on computers, to identify such processes as central features or concerns of new media practices does little to clarify what aesthetic issues come into play when artists make use of the new media. To ground the discussion in what is really a material observation about the behavior of computers obscures more meaningful observations about how artists working with new media behave. It is not unlike discussing a painting in terms of how the paint dries.

The difficulty with Manovich’s approach here can be discerned by considering how and why one might distinguish a new media art object made in part with automated processes on a computer from a painting made with pigments that are manufactured in automated factories. The Language of New Media offers no way to determine at what point the mediation of automated processes becomes sufficient to distinguish the new media from traditional media.

Just as it might seem odd to generalize about painters on the basis of how their pigments were manufactured, it seems odd to generalize about new media artists in terms of the automated processes they employ. While a particular artist might for some reason choose to take such processes as a thematic concern in an artwork, or adjust his or her style to the peculiarities of certain such processes, to generalize that such processes are of central concern to understanding how all other artists use a given medium would seem to generalize too much.

From the perspective of the artist, there is a certain convenience to be found in the ability of computers to automate certain types of tasks; yet in the context of new media, to identify this automation as central to the medium in a sense reduces the new media artist to a button-pusher: a consumer of automated processes rather than a creator of artworks. The effect of the assertion is similar to Truman Capote’s remark about what is perhaps Jack Kerouac’s most famous novel: “that’s not writing, that’s typing.” To diminish human intentionality in an analysis of new media is to diminish the fact that media come into use because they suit certain purposes.

It might well be argued, furthermore, that the way automation affects the behavior of the new media artist is to increase the role of human intentionality: because of the convenience with which many choices of sophisticated manipulations can be presented to a new media artist, there are more possibilities for the artist to deliberately reject. Automated processes on computers are also designed with a great deal of effort and intentionality, and there is a good deal of skill involved in learning how to make use of them – either as an artist or as a consumer.


Variability – Fourth Principle of New Media

See page 36 in The Language of New Media

In describing the Fourth Principle of New Media, Manovich observes that:

“A new media object is not something fixed once and for all, but something that can exist in different, potentially infinite versions.”

This observation would seem to relate more to the experience of somebody interacting with a new media object than to an artist creating a new media object; the implications for the artist are, however, relatively straightforward. A graphic designer working with a piece of graphic design software might be given some text and images, and might then try out a number of possible fonts for the text and visual arrangements of images. The text, during such a process, is not fixed, but highly variable in its appearance. Before the advent of computerized graphic design, such a design process was much more difficult.
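To make the designer’s situation concrete, here is a minimal sketch (plain Python, with hypothetical font names and sizes) in which one piece of text exists in several versions at once, each a different pairing of content with presentation:

```python
# The same "content" can exist in many versions: separating the text from
# its presentation makes each rendering one variant among potentially many.
text = "The Language of New Media"
variants = [
    {"font": "Garamond", "size": 12},
    {"font": "Helvetica", "size": 18},
    {"font": "Courier", "size": 10},
]
for v in variants:
    print(f'{text!r} set in {v["font"]} at {v["size"]}pt')
```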

It would seem that a large part of why the new media attract so much critical attention relates to the dynamic nature of online content. For example, in both design and distribution, visual text is no longer a static enterprise confined to the monolithic bound book, but has become a new sort of fluid event on computer screens: electronic text can easily be resized or rearranged. Yet to identify this variability as a central feature of new media reveals at once a contemporary cultural bias towards that which is perceived as new and the continuation of a historical trend that informs how, for example, the fluidity of electronic text ought to be perceived.

That the last quarter of the 20th century brought with it some change in cultural attitudes towards mass media seems clear; that electronic computers continue to play some part in this change also seems clear. Something, then, is new; but to say that whatever properties are found in the new media are also new, or are therefore fundamental to the perception of newness, is a deeply problematic approach. The problem might stem in part from the cultural value Modernism placed on novelty, but the perceived novelty of dynamic text, for example (be it in terms of online syndicated or database-driven content, the market for branded plain-language neologisms such as “google,” or the proliferation of commonplace semantic conventions with plain-language vocabularies such as HTML, CSS, or BBCode), is not strictly a recent cultural phenomenon. In thinking about why this cultural perception exists, it might be worthwhile to consider that the history of modern typography began with Gutenberg’s invention of movable type.

Among the early effects of Gutenberg’s movable type was a decrease in the cost of obtaining printed material, and an increase in the accessibility of printed material. Much of what we see in the effects of dynamic online content is in many respects similar: computers make it more convenient to access and manipulate media objects. To assert, then, that computers have introduced fundamentally new types of manipulations might reveal useful observations in a certain context, but the overall impact of computers in practical respects relates more directly to matters of convenience.

The discussion of new media’s variability, if it suffers from being too specific in its cultural scope, is perhaps too general in its technical analysis. In asserting that “instead of identical copies, a new media object typically gives rise to many different versions,” Manovich neglects one of the fundamental reasons for the utility of digital computers: be it in copying digital video from a camera to a computer, or in copying text from one computer to another, a contributing factor to the widespread success of digital computers has been their ability to make exact copies of things in a way that is impossible with many traditional media. A reproduction of a chemical photograph changes the image being reproduced, because the reproduction introduces additional grain into the image; duplicating a digital picture file requires no such change in the product, nor do the economics of mass production and distribution imply greater costs for this increase in the accuracy of replication.
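A short sketch can make the point about exact duplication concrete. Assuming only the Python standard library and a hypothetical picture file named photo.jpg, copying the file produces a duplicate that is verifiably identical down to the last bit:

```python
import hashlib
import shutil

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# "photo.jpg" is a hypothetical input standing in for any digital picture.
shutil.copyfile("photo.jpg", "photo_copy.jpg")

# Unlike a chemical reproduction, the duplicate is bit-for-bit identical:
# identical contents yield identical digests, so no "grain" is introduced.
assert sha256_of("photo.jpg") == sha256_of("photo_copy.jpg")
```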

It could be argued here that the replicability of new media objects encourages their modification: digital computing makes such modifications as the “customization” of a product’s use and behavior more convenient to the “audience” of end-users, because products can be reproduced accurately enough to contain a great many reliable “moving parts,” as well as a great degree of synchronous interoperability with other devices that similarly involve many “moving parts.” This convenience as a cultural value, however, would be an anthropological observation not directly addressed in the text. The variability of new media objects is an observation Manovich makes about the medium rather than about culture, one he derives from his observations about the new media’s Numerical Representation and Modularity.


Transcoding – Fifth Principle of New Media

See page 45 in The Language of New Media

The Fifth Principle of New Media describes how:

“the logic of a computer can be expected to significantly influence the traditional cultural logic of media; that is, we may expect that the computer layer will affect the cultural layer.”

This Principle of New Media is the least well-defined, in part due to the unusual technical term used to name it, and in part for how it draws very general cultural considerations into what is otherwise primarily a discourse about the mechanical features of computers.

Transcoding is a technical term in computer science that relates, as Manovich notes, to the translation of information from one format to another; it is an important feature of this technical term, however, that the translation occurs within a computer system. Transcoding is the translation of information from one digital format to another; thus, printing a digital photograph onto paper does not qualify. The use of the word “transcoding” is unfortunate because it deprives readers of linguistic intuitions that might be derived from a more familiar term; the use of the word as a metaphor is also problematic, because culture is neither a “format” nor a product of the types of formal relationships that govern computer formats.
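For concreteness, the following minimal sketch shows transcoding in this narrow technical sense, assuming the Pillow imaging library and a hypothetical file named picture.png; the image is decoded from one digital format and re-encoded into another, entirely within the computer:

```python
from PIL import Image  # Pillow imaging library (assumed installed)

# Transcoding in the narrow technical sense: the same image data is decoded
# from one digital format (PNG) and re-encoded into another (JPEG).
# Both source and result are digital; nothing leaves the computer system.
image = Image.open("picture.png")  # hypothetical input file
image.convert("RGB").save("picture.jpg", "JPEG")
```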

While it might be Manovich’s intent in this case to argue that our experience with computers colors how we view cultural activity — that computers make us see cultural activity in a more “computerized” sense — it is important to understand Manovich’s treatment of this term as an analogy, rather than a statement of formal equivalency or one that implies a strong causal relationship. Similar analogies have arisen in the past: after the invention of the mechanical clock, for example, it became popular in Western science to approach cosmology as though one were studying a clock-like mechanical device.

Although it is undoubtedly the case that computers have had some impact on culture, just how this effect is to be understood as substantially different from the technological impact of more traditional media is unclear. In terms of the linguistic consequences of media on culture, it is worth noting that following the widespread cultural acceptance of television and radio, for example, the English language gained a new colloquialism: “to tune out” what one finds uninteresting. The Sapir-Whorf Hypothesis in linguistics, which asserts that language plays a central role in what features of the world we can readily perceive, suggests that the emergence of such colloquialisms as “tuning out” might have consequences more profound than simply the availability of particular informal expressions. Even outside of a discussion about recording media, one can find in Christianity or Islam a mystical, cosmological significance attributed to the word.

Described as “a blend of human and computer meanings, of traditional ways in which human culture modeled the world and the computer’s own means of representing it,” the cultural effect of “transcoding” is understood as affecting “all cultural categories and concepts.”

Given that computers were designed under considerations of precisely the reflexive relationship Manovich here identifies, it should come as no surprise to discover such a relationship present in new media. Manovich’s description of transcoding, however, privileges the relationship as proceeding from computers to culture, and largely ignores the impetus behind the trend in the opposite direction. The discussion of “transcoding” is problematic insofar as it takes place within an analysis philosophically grounded as though practical computers, in their design and use, could be meaningfully understood apart from the cultural attitudes, beliefs, goals, and habits that produced computers and made their presence commonplace.

In many important respects, computers are modeled on human physiology and the ways our physiology allows us to perceive the world. The RGB color model used by computers to represent images, for example, is successful at reproducing the colors we see in the world because it is modeled on how our physiology recognizes color. Similarly, much of the work that went into designing computers as formal systems derives from Gottlob Frege’s study of natural language.
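As a small illustration (plain Python, with an arbitrary example color), a color in the RGB model is stored as three intensities that roughly correspond to the three cone-cell types of human trichromatic vision:

```python
# A pixel in the RGB model is a triple of red, green, and blue intensities
# (0-255), echoing the three cone-cell types of the human retina.
orange = (255, 165, 0)  # strong red, moderate green, no blue

red, green, blue = orange
print(f"red={red}, green={green}, blue={blue}")
print(f"hex: #{red:02X}{green:02X}{blue:02X}")  # the familiar HTML/CSS form: #FFA500
```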

The distinction Manovich draws between “the computer layer” and “the cultural layer” may be part of an attempt to structure a dialectical relationship between the mechanical behavior of computers and the cultural uses for computers, wherein “new media” becomes a synthesis of “computers” and “culture.” In such a case, culture would seem to carry the connotation of something organic, while computers would carry the connotation of something artificial; such a dialectic, however, would presuppose an oppositional relationship between computers and culture that does not exist in the form presupposed.

It is generally assumed that because computers are human inventions governed by well-defined mechanical relationships, they can be more fully understood than something like culture, in which we participate but which we never deliberately invented. Despite the well-defined nature of practical computers, there are a number of programmatic difficulties in attempting to formulate a comprehensive theory of computation. The way Manovich relies upon concepts drawn from computer science involves many of these difficulties.

Brian Cantwell Smith, in his essay “The Foundations of Computing,” wrote:

“What has been (indeed, by most people still is) called a ‘Theory of Computation’ is in fact a general theory of the physical world — specifically, a theory of how hard it is, and what is required, for patches of the world in one physical configuration to change into another physical configuration. It applies to all physical entities, not just to computers.

“Not only must an adequate account of computation include a theory of semantics; it must also include a theory of ontology… Computers turn out in the end to be rather like cars: objects of inestimable social and political importance, but not in and of themselves, qua themselves, the focus of enduring scientific or intellectual inquiry — not, as philosophers would say, natural kinds.

“It is not just that a theory of computation will not supply a theory of semantics… or that it will not replace a theory of semantics; or even that it will depend or rest on a theory of semantics… computers per se, as I have said, do not constitute a distinct, delineated subject matter.”

The main thrust of Smith’s argument is that the idea of an all-encompassing theory of computation may be as incoherent as an attempt to formulate an all-encompassing “theory of walking.” For Manovich then to ground his theory of new media in terminology from computer science, without carefully delineating the domains in which his assertions are applicable, presents very fundamental difficulties to the use of The Language of New Media for making valid inferences about individual new media objects.