Discussion of Conclusions

Examples of Variability in New Media

See page 38 in The Language of New Media

Many of the specific examples provided in Lev Manovich’s discussion of new media’s variability suffer from an imprecision that leaves unclear just how the Principle of Variability ought to be properly applied when thinking about new media objects.

The example provided by “branching-type interactivity” overlooks historical continuities between new media and traditional media, while also suggesting philosophical difficulties. The word “branching” in this context has both a phenomenological meaning and a technical meaning; as a metaphor it relates to the way tree branches subdivide along their length, and describes the many possible routes one might take while navigating an interactive artwork (as though one were walking along a tree branch from a single trunk to a random leaf). In a technical sense, systems theory studies this phenomenon in terms of “bifurcation” as a way to describe the net effect of multiple individual events. The same “branching-type” behavior can be found in descriptions of interactions with traditional media objects such as books of photographs or other art prints, choose-your-own-adventure books, architecture, and installation art, all of which are commonly explored in a nonlinear and indeterminate fashion.
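The structural sense of "branching" described above can be sketched as a small graph of nodes, where each interaction selects one of several outgoing paths. This is an illustrative toy (the node names and structure are invented, not drawn from Manovich's text), but it makes the trunk-to-leaf metaphor concrete:

```python
# A minimal, hypothetical sketch of "branching-type" interactivity:
# a media object modeled as a graph, where each choice follows one
# of several available branches from the current node.
import random

# Each node maps to the branches available from it; leaves have none.
story = {
    "trunk":    ["branch_a", "branch_b"],
    "branch_a": ["leaf_1", "leaf_2"],
    "branch_b": ["leaf_3"],
    "leaf_1": [], "leaf_2": [], "leaf_3": [],
}

def traverse(graph, node="trunk"):
    """Walk from the trunk to a random leaf, as the metaphor describes."""
    path = [node]
    while graph[node]:                      # stop when no branches remain
        node = random.choice(graph[node])   # one of many possible routes
        path.append(node)
    return path

print(traverse(story))  # e.g. ['trunk', 'branch_a', 'leaf_2']
```

Notably, nothing in this structure is specific to computation: the same graph could describe a choose-your-own-adventure book, where page numbers serve as edges.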

It could be argued that branching behavior is “in” a new media object in some structural way that it isn’t “in” traditional media; yet, just how one should most properly distinguish between the mechanical response of a book to having a page turned or a television set to having a channel changed, compared to a remote web server sending a copy of a web page, is unclear.

The example provided by “scaling” is similarly problematic. The word “scaling” has an informal sense, in which an object may be presented as larger or smaller, with more or less detail; and the word has a technical sense, which in mathematics refers to a type of linear transformation. The discussion of Microsoft Word’s “AutoSummarize” feature fits neither of these uses: one third of a novel is not a scaled-down version of the novel; it is simply incomplete.
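The distinction can be made concrete with a toy sketch (the list of values is invented for illustration): scaling in the mathematical sense transforms every element while preserving the whole, whereas the kind of reduction AutoSummarize performs simply discards most of the material.

```python
samples = [0, 3, 6, 9, 12, 15]

# Scaling in the technical sense: a linear transformation x -> k*x.
# Every element survives; only the magnitude changes.
scaled = [0.5 * x for x in samples]

# Truncation, by contrast: keep the first third and discard the rest.
# No element is transformed; most simply vanish.
truncated = samples[: len(samples) // 3]

print(scaled)     # [0.0, 1.5, 3.0, 4.5, 6.0, 7.5]
print(truncated)  # [0, 3]
```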

Although the types of variability discussed in The Language of New Media may be useful to an extent in describing the experience of interacting with a new media object, the discussion breaks down in a number of ways. Why these types of variability have the cultural value that they do is largely left unaddressed, and therefore, what meaning their application has to new media practices in terms of how new media objects are appreciated — aesthetically or in terms of convenience — remains unresolved.


Transcoding – Fifth Principle of New Media

See page 45 in The Language of New Media

The Fifth Principle of New Media describes how:

“the logic of a computer can be expected to significantly influence the traditional cultural logic of media; that is, we may expect that the computer layer will affect the cultural layer.”

This Principle of New Media is the least well-defined, in part due to the unusual technical term used to name it, and in part for how it draws very general cultural considerations into what is otherwise primarily a discourse about the mechanical features of computers.

Transcoding is a technical term in computer science that relates, as Manovich notes, to the translation of information from one format to another; it is an important feature of this technical term, however, that the translation occurs within a computer system. Transcoding is the translation of information from one digital format to another; thus, printing a digital photograph onto paper does not qualify. The use of the word “transcoding” is unfortunate because it deprives readers of linguistic intuitions that might be derived from a more familiar term; the use of the word as a metaphor is also problematic, because culture is neither a “format” nor a product of the types of formal relationships that govern computer formats.
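A minimal sketch may help fix the technical sense of the term: translating an image between two digital formats, entirely within the machine. PPM and PGM are real plain-text Netpbm formats; the two-pixel image and the simple averaging rule here are illustrative choices, not anything from Manovich's text.

```python
# Transcoding in the technical sense: translating data from one
# digital format (color PPM, "P3") to another (grayscale PGM, "P2").
# The translation never leaves the computer system.

def ppm_to_pgm(ppm: str) -> str:
    """Transcode a plain-text color PPM (P3) into a grayscale PGM (P2)."""
    tokens = ppm.split()
    assert tokens[0] == "P3", "expected a plain PPM header"
    width, height, maxval = (int(t) for t in tokens[1:4])
    rgb = [int(t) for t in tokens[4:]]
    # Collapse each R,G,B triple to one gray value (simple average here).
    gray = [sum(rgb[i:i + 3]) // 3 for i in range(0, len(rgb), 3)]
    return " ".join(["P2", str(width), str(height), str(maxval)]
                    + [str(g) for g in gray])

ppm = "P3 2 1 255  255 0 0  0 0 255"   # a 2x1 image: one red, one blue pixel
print(ppm_to_pgm(ppm))                 # P2 2 1 255 85 85
```

Printing the resulting photograph onto paper, by contrast, leaves the digital domain altogether, which is precisely why it does not qualify.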

While it might be Manovich’s intent in this case to argue that our experience with computers colors how we view cultural activity — that computers make us see cultural activity in a more “computerized” sense — it is important to understand Manovich’s treatment of this term as an analogy, rather than a statement about formal equivalency, or one that implies a strong causal relationship. Similar analogies have arisen in the past: after the invention of the mechanical clock, for example, it became popular in Western science to approach cosmology as though one were studying a clock-like mechanical device.

Although it is undoubtedly the case that computers have had some impact on culture, just how this effect is to be understood as substantially different from the technological impact of more traditional media is unclear. In terms of the linguistic consequences of media on culture, it is worth noting that following the widespread cultural acceptance of television and radio, for example, the English language gained a new colloquialism: “to tune out” what one finds uninteresting. The Sapir-Whorf Hypothesis in linguistics, which asserts that language plays a central role in what features of the world we can readily perceive, suggests that the emergence of such colloquialisms as “tuning out” might have consequences more profound than simply the availability of particular informal expressions. Even outside of a discussion about recording media, one can find in Christianity or Islam a mystical, cosmological significance attributed to the Word.

Described as “a blend of human and computer meanings, of traditional ways in which human culture modeled the world and the computer’s own means of representing it,” the cultural effect of “transcoding” is understood as affecting “all cultural categories and concepts.”

Given that computers were designed under considerations of precisely the reflexive relationship Manovich here identifies, it should come as no surprise to discover such a relationship present in new media. Manovich’s description of transcoding, however, privileges the relationship as proceeding from computers to culture, and largely ignores the impetus behind the trend in the opposite direction. The discussion of “transcoding” is problematic insofar as it is conducted within an analysis philosophically grounded as though practical computers, in their design and use, could be meaningfully understood apart from the cultural attitudes, beliefs, goals, and habits that produced them and made their presence commonplace.

In many important respects, computers are modeled on human physiology and the ways our physiology allows us to perceive the world. The RGB color model used by computers to represent images, for example, is successful at reproducing the colors we see in the world because it is modeled on how our physiology recognizes color. Similarly, much of the work that went into designing computers as formal systems derives from Gottlob Frege’s study of natural language.
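The physiological grounding of the RGB model shows up even in routine conversions. The sketch below uses the standard ITU-R BT.601 luma coefficients (a real published standard, though not one discussed in Manovich's text); the weights are unequal precisely because the human eye is most sensitive to green light:

```python
# The weights reducing RGB to perceived brightness are modeled on
# human photoreceptor sensitivity, not on any symmetry internal to
# the machine. Coefficients are the ITU-R BT.601 luma weights.

def luma(r: int, g: int, b: int) -> float:
    """Perceived brightness of an RGB color, per BT.601."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure green reads as far brighter to the eye than pure blue,
# even though both channels carry the same numeric value (255).
print(luma(0, 255, 0))   # 149.685
print(luma(0, 0, 255))   # 29.07
```

Had the model been derived from the machine alone, there would be no reason to weight the three channels differently at all.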

The distinction Manovich draws between “the computer layer” and “the cultural layer” may be part of an attempt to structure a dialectical relationship between the mechanical behavior of computers and the cultural uses for computers, wherein “new media” becomes a synthesis of “computers” and “culture.” In such a case, culture would seem to carry the connotation of something organic, while computers would carry the connotation of something artificial; such a dialectic, however, would presuppose an opposition between computers and culture that does not in fact exist.

It is generally assumed that because computers are human inventions governed by well-defined mechanical relationships, they can therefore be more fully understood than something like culture, in which we participate but which we never deliberately invented. Despite the well-defined nature of practical computers, there are a number of programmatic difficulties in attempting to formulate a comprehensive theory of computation. The way Manovich relies upon concepts drawn from computer science involves many of these difficulties.

Brian Cantwell Smith, in his essay “The Foundations of Computing,” wrote:

“What has been (indeed, by most people still is) called a ‘Theory of Computation’ is in fact a general theory of the physical world — specifically, a theory of how hard it is, and what is required, for patches of the world in one physical configuration to change into another physical configuration. It applies to all physical entities, not just to computers.

“Not only must an adequate account of computation include a theory of semantics; it must also include a theory of ontology… Computers turn out in the end to be rather like cars: objects of inestimable social and political importance, but not in and of themselves, qua themselves, the focus of enduring scientific or intellectual inquiry — not, as philosophers would say, natural kinds.

“It is not just that a theory of computation will not supply a theory of semantics… or that it will not replace a theory of semantics; or even that it will depend or rest on a theory of semantics… computers per se, as I have said, do not constitute a distinct, delineated subject matter.”

The main thrust of Smith’s argument is that the idea of an all-encompassing theory of computation may be as incoherent as an attempt to formulate an all-encompassing “theory of walking.” For Manovich then to ground his theory of new media in terminology from computer science, without carefully delineating in what possible domains his assertions are applicable, presents very fundamental difficulties to the use of The Language of New Media for making valid inferences about individual new media objects.


Changing the Definition of Cinema

See page 300 in The Language of New Media

“The shift to digital media affects not just Hollywood, but filmmaking as a whole.  As traditional film technology is universally being replaced by digital technology, the logic of the filmmaking process is being redefined.”


While this observation certainly alludes to a process which many critics have observed — that digital technology exerts an influence on how films are produced by filmmakers and understood by audiences — the formulation and exposition of this observation here relies on generalities that omit many practical considerations of great importance.

The difficulty is a product of a methodological problem with Manovich’s text — that is, he begins with a reductionist approach to analyzing new media, then reasons through the potential consequences of his premises.  The problem with such an approach is that it relies entirely on the validity of the premises.  In this case, the premises are not only flawed, but in their focus on the purported “concrete” factors that distinguish new media objects from traditional media, they gloss over many cultural continuities.

The result of this difficulty is a series of observations that are equally disconnected from the history and the present reality of filmmaking.

Take, for example, Manovich’s assertion that the result of 3D computer animation is that “live-action footage is displaced from its role as the only possible material from which a film can be constructed.”  The thrust of Manovich’s assertion here is to emphasize the “newness” and the “otherness” of digital cinema; yet he does so at the expense of accuracy.

The history of cinema provides numerous examples that demonstrate the utter falsity of this claim.  Early in the history of cinema — before the Hollywood studio system came to exert a dominant influence on the aesthetics of film — we can see filmmakers such as Man Ray conducting experiments to produce moving images on celluloid without relying upon the photographic apparatus of the camera.  Man Ray’s 1926 film Emak Bakia contains several sequences produced by placing various objects directly on a strip of film and then exposing the film to light.  Aside from Hollywood’s rich history of cel-based animation (which clearly does not rely on live-action footage), we find contemporary experimental filmmakers such as Stan Brakhage — who made numerous films by directly applying pigment to clear leader, or scratching the emulsion off of black slug — continuing the investigations begun by Man Ray.  Furthermore, it is worth considering that scientific time-lapse footage, such as of microbes or phototropism, does not qualify as live-action imagery (although it is an example of photographic motion pictures).