Parsing the Languages of the New Media
A critical examination of Lev Manovich's The Language of New Media
http://media.frametheweb.com

Addressing The Myth of Random Access
Fri, 23 Oct 2009, by David Witzling
http://media.frametheweb.com/2009/10/23/addressing-the-myth-of-random-access/

After Manovich enumerates his Five Principles of New Media, he proceeds to address a number of “popularly held notions about the difference between new media and old media.”  He seeks to discredit these notions as insufficient to distinguish new media from traditional media insofar as they are “not unique to new media, but can be found in older media technologies as well.”

The third such notion that Manovich addresses is the ability of new media to support the “random access” of information.  Manovich formulates this popular notion as follows:

“New media allow for random access.  In contrast to film or videotape, which store data sequentially, computer storage devices make it possible to access any data element equally fast.”

The background of this claim involves an important technical concept in contemporary computer design.  When people talk about how much “memory” their computer has, they frequently refer to Random Access Memory (often called RAM), as opposed to hard disk space.  Hard disk space is where computer programs and user files are kept for long-term storage; RAM is like a scratch pad that a computer uses while manipulating data retrieved from a hard disk.  Although today, hard disks are quite fast, this distinction represents an important development in the history of computing, as data was once stored on magnetic tape.  Historically, for a computer to access a given file, it needed to fast-forward or rewind a large amount of tape in order to retrieve the data requested by a user.  As this could be a time-consuming process, solid state RAM technology was developed in part so that once the relevant data was located on the tape, that data could then be manipulated with less delay.  Cost was among the primary reasons that solid state RAM was not used for all storage.
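
The contrast between tape-style and RAM-style retrieval can be sketched in a few lines of code. This is an illustration of the two access patterns only, not of any real storage hardware; the names and data are invented:

```python
# A "tape": data stored in a fixed linear order.
tape = ["frame%03d" % i for i in range(1000)]

def sequential_read(tape, target):
    """Tape-style access: every preceding element must be scanned
    before the target is reached. Returns the number of steps taken."""
    steps = 0
    for item in tape:
        steps += 1
        if item == target:
            return steps
    return None

def random_read(tape, index):
    """RAM-style access: jump directly to any element in one step."""
    return tape[index]

# Reaching the 900th frame sequentially costs 900 steps;
# indexing it directly costs one.
print(sequential_read(tape, "frame899"))  # prints 900
print(random_read(tape, 899))             # prints frame899
```

The cost of sequential access grows with the position of the target, while indexed access costs the same wherever the target sits; this is the distinction the term "random access" names.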

Thus, distinctions between new media and old media based on the concept of “random access” would draw an analogy between film and the magnetic tape that computers once commonly used to store data.  To view a particular frame in a given film, one must feed the film through a projector and first view every preceding frame — even though one may not be interested in all those other frames.  Under this analogy, new media represent the invention of RAM, and therefore the ability to easily retrieve an arbitrary piece of information without having to review a mountain of unimportant data first.

This analogy is meant to draw attention to an important functional distinction between new media and old media.  The perceived relevance of this distinction is that new media facilitate a unique form of non-linear interaction which was impossible to achieve with previous media technologies such as film.  As Manovich here observes, “once a film is digitized and loaded into a computer’s memory, any frame can be accessed with equal ease.”  Thus, new media are purported to provide unique ways of fragmenting and re-ordering time and space.

Although Manovich (perhaps correctly) denies that “random access” is sufficient to create a unique identity for new media, his argument here fails to directly address why.  One of Manovich’s central objectives in The Language of New Media is to ground the emerging conventions of new media in an existing body of literature on film criticism; to this end, his argument against the relevance of “random access” relies on various precedents in the history of cinema — such as the Phenakistiscope and the Zoopraxiscope — that demonstrate a rudimentary form of “random access” in non-digital motion picture technology.  However, the ability to scan through time “mapped onto two-dimensional space” found in esoteric technologies such as the Phenakistiscope is not really present in cinema as we understand it today; the critique Manovich presents here is thus more an observation of historical coincidence than an observation about how the development of cultural and artistic traditions contributed to certain ways of perceiving and ordering experience.

If we abandon Manovich’s assumption that the history of cinema provides the most appropriate way to interpret the emergence of new media aesthetics, we can then see that other, less esoteric examples of “random access” in pre-existing media forms not only make themselves apparent, but also provide more compelling evidence to substantiate claims against the purported importance of “random access.”

Most obviously, the printed book is a traditional media form that readily supports “random access.”  The ability to access arbitrary information in a printed book is what gives reference texts such as dictionaries and encyclopedias their utility.  Moreover, the liturgical function of the Bible relies on the ability to retrieve specific passages — and those passages only — on different occasions.  If we consider that printed books can contain material other than text, we are obliged to acknowledge that books of art reproductions and magazines are frequently browsed in highly non-linear ways.  The “choose-your-own-adventure” book is just one example of the ways in which the non-linear, “random access” features of print can be exploited for aesthetic ends.

In “old media” such as painting and photography, analyses which oppose the purported linearity of traditional media forms to the non-linearity of new media break down.  It makes little sense to speak of an individual painting or photograph as sequential, since any part of the image can be viewed at any time without first having to review large amounts of irrelevant information.  The cultural values that give rise to different modes of expression, or to the widespread acceptance of certain modes of expression, are more nuanced than Manovich presents them.  While recent history may offer researchers an entertaining menagerie of technological oddities and curious names, broader cultural trends in many cases better illustrate the social and cultural dynamics influencing the development of different media.

Changing the Definition of Cinema
Sun, 18 Oct 2009, by David Witzling
http://media.frametheweb.com/2009/10/18/changing-the-definition-of-cinema/

“The shift to digital media affects not just Hollywood, but filmmaking as a whole.  As traditional film technology is universally being replaced by digital technology, the logic of the filmmaking process is being redefined.”


While this observation certainly alludes to a process which many critics have observed — that digital technology exerts an influence on how films are produced by filmmakers and understood by audiences — the formulation and exposition of this observation here relies on generalities that omit many practical considerations of great importance.

The difficulty is a product of a methodological problem with Manovich’s text — that is, he begins with a reductionist approach to analyzing new media, then reasons through the potential consequences of his premises.  The problem with such an approach is that it relies entirely on the validity of the premises.  In this case, the premises are not only flawed, but in their focus on the purported “concrete” factors that distinguish new media objects from traditional media, they gloss over many cultural continuities.

The result of this difficulty is a series of observations that are equally disconnected from the history and the present reality of filmmaking.

Take, for example, Manovich’s assertion that the result of 3D computer animation is that “live-action footage is displaced from its role as the only possible material from which a film can be constructed.”  The thrust of Manovich’s assertion here is to emphasize the “newness” and the “otherness” of digital cinema; yet he does so at the expense of accuracy.

The history of cinema provides numerous examples that demonstrate the utter falsity of this claim.  Early in the history of cinema — before the Hollywood studio system came to exert a dominant influence on the aesthetics of film — we can see filmmakers such as Man Ray conducting experiments to produce moving images on celluloid without relying upon the photographic apparatus of the camera.  Man Ray’s 1926 film Emak Bakia contains several sequences produced by placing various objects directly on a strip of film and then exposing the film to light.  Aside from Hollywood’s rich history of cel-based animation (which clearly does not rely on live-action footage), we find contemporary experimental filmmakers such as Stan Brakhage — who made numerous films by directly applying pigment to clear leader, or by scratching the emulsion off of black slug — continuing the investigations begun by Man Ray.  Furthermore, it is worth considering that scientific time-lapse footage of subjects such as microbes or phototropism does not qualify as live-action imagery, although it is an example of photographic motion pictures.

Variability – Fourth Principle of New Media
Sat, 03 May 2008, by David Witzling
http://media.frametheweb.com/2008/05/03/variability-fourth-principle-of-new-media/

In describing the Fourth Principle of New Media, Manovich observes that:

“A new media object is not something fixed once and for all, but something that can exist in different, potentially infinite versions.”

This observation would seem to relate more to the experience of somebody interacting with a new media object than to an artist creating a new media object; the implications for the artist are, however, relatively straightforward. A graphic designer working with a piece of graphic design software might be given some text and images, and might then try out a number of possible fonts for the text and visual arrangements of images. The text, during such a process, is not fixed, but highly variable in its appearance. Before the advent of computerized graphic design, such a design process was much more difficult.

It would seem that a large part of why the new media attract so much critical attention relates to the dynamic nature of online content. For example, in both design and distribution, visual text is no longer a static enterprise confined to the monolithic bound book, but has become a new sort of fluid event on computer screens: electronic text can easily be resized or rearranged. Yet the identification of this variability as a central feature of new media reveals at once a contemporary cultural bias towards that which is perceived as new, as well as the continuation of a historical trend that informs how, for example, the fluidity of electronic text ought to be perceived.

That the last quarter of the 20th Century brought with it some change in cultural attitudes towards mass media seems clear; that electronic computers continue to play some part in this change also seems clear.  Something, then, is new; but to conclude that whatever properties are found in the new media are also new, or therefore fundamental to the perception of newness, is a deeply problematic approach.  The problem might stem in part from the cultural value Modernism placed on novelty, but the perceived novelty of dynamic text (be it online syndicated or database-driven content, the market for branded plain-language neologisms such as “google,” or the proliferation of commonplace semantic conventions with plain-language vocabularies such as HTML, CSS, or BBCode) is not strictly a recent cultural phenomenon.  In thinking about why this cultural perception exists, it is worth remembering that the history of modern typography began with Gutenberg’s invention of movable type.

Among the early effects of Gutenberg’s movable type was a decrease in the cost of obtaining printed material, and an increase in the accessibility of printed material. Much of what we see in the effects of dynamic online content is in many respects similar: computers make it more convenient to access and manipulate media objects. To assert, then, that computers have introduced fundamentally new types of manipulations might reveal useful observations in a certain context, but the overall impact of computers in practical respects relates more directly to matters of convenience.

The discussion of new media’s variability, if it suffers from being too specific in its cultural scope, is perhaps too general in its technical analysis. In asserting that “instead of identical copies, a new media object typically gives rise to many different versions,” Manovich neglects one of the fundamental reasons for the utility of digital computers: be it in copying digital video from a camera to a computer, or in copying text from one computer to another, a contributing factor to the widespread success of digital computers has been their ability to make exact copies of things in a way that is impossible with many traditional media. A reproduction of a chemical photograph changes the image being reproduced because the reproduction introduces an additional amount of grain into the image; duplicating a digital picture file neither requires such a change in the product, nor do the economics of mass production and distribution imply greater costs for this increase in the accuracy of replicability.
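
The point about exact duplication is easy to demonstrate. The following is a minimal sketch using only the Python standard library; the data is an arbitrary stand-in for any digital media object:

```python
import hashlib

# An arbitrary digital "object": 256 bytes of data standing in for
# a picture file, a video stream, or a text document.
original = bytes(range(256))

# One hundred "generations" of copying. An analog chain of this depth
# would accumulate grain and noise at every step; the digital chain
# does not.
copy = original
for _ in range(100):
    copy = bytes(copy)

# A cryptographic digest confirms the copy is bit-for-bit identical.
assert hashlib.sha256(copy).digest() == hashlib.sha256(original).digest()
print("100th-generation copy is identical to the original")
```

Equality here is exact rather than approximate, which is precisely the property that chemical photography lacks.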

It could be argued that the replicability of new media objects encourages their modification: digital computing makes “customization” of a product’s use and behavior convenient for the “audience” of end-users, because products can be reproduced accurately enough to contain a great many reliable “moving parts,” along with a great degree of synchronous interoperability with other devices that likewise contain many such parts.  This convenience as a cultural value, however, would be an anthropological observation not directly addressed in the text.  The variability of new media objects is an observation Manovich makes about the medium rather than about culture, and one he derives from his observations about the new media’s Numerical Representation and Modularity.

Inference and Historical Analysis
Tue, 29 Apr 2008, by David Witzling
http://media.frametheweb.com/2008/04/29/inference-and-historical-analysis/

In discussing the historical convergence of computers and the media arts, Lev Manovich asserts that:

“the key year for the history of media and computing is 1936. British mathematician Alan Turing wrote a seminal paper entitled ‘On Computable Numbers.’ In it he provided a theoretical description of a general purpose computer”

Manovich observes that the diagram of the machine Turing describes in his paper “looks suspiciously like a film projector,” and then asks provocatively: “Is this a coincidence?”

Absent any documentation to the effect that Turing’s design was directly influenced by the appearance of a film projector, any assertion that such a connection exists would best be treated as conjecture, and the appearance of a connection ought to be treated precisely as coincidence; there certainly is little to be found by way of functional similarity. The hypothetical connection between the diagram of Turing’s machine and the design of a film projector has more to do with a programmatic attempt throughout The Language of New Media to interpret the history of new media in terms of an existing body of literature on film criticism.

While we might be reasonably certain that Turing was aware of cinema, as a mathematician he was probably far more familiar with the mechanics of an adding machine.  Moreover, the 1936 paper cited here by Manovich has more to do with esoteric problems of number theory than with the material properties of practical computers.  The machine Turing outlined in his 1936 paper was not intended as a schematic, but rather as something along the lines of Albert Einstein’s Gedankenexperimente, or thought experiments.

Turing’s machine requires an infinite strip of tape upon which symbols are printed and from which symbols are read; that the machine in this way has access to an infinite amount of memory is at once essential to its conception and also a reminder that it is impossible to physically construct such a device. The machine was meant to help visualize how the act of performing arithmetic calculations transforms information about infinite sets of numbers (such as the set of whole numbers).
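
Turing's construction can be made concrete in a few lines of code. The sketch below is illustrative only: the rule table and symbols are invented for this example, and a Python dictionary stands in for the unbounded strip of tape (any cell not yet written simply reads as blank):

```python
def run_turing_machine(rules, tape, state="start", steps=100):
    """Execute a rule table over a dict-based 'infinite' tape.

    rules maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is "R" or "L". The dict grows on demand, which is how a
    finite program imitates Turing's infinite strip.
    """
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")           # unwritten cells are blank
        write, move, state = rules[(state, symbol)]
        cells[head] = write                      # print a symbol
        head += 1 if move == "R" else -1         # move the head
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A toy rule table that inverts a string of binary digits.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "1011"))  # prints 0100
```

Everything the machine does reduces to reading one symbol, writing one symbol, and moving one cell; the conceptual force of the 1936 paper lies in showing that such a table of rules suffices for any computation, not in proposing a buildable device.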

Wherever Turing enters the text, it is worth noting that the word “computer” in Turing’s day did not refer to machines at all, but rather to people employed for their arithmetic abilities.

Discrete and Continuous Modes of Representation
Tue, 29 Apr 2008, by David Witzling
http://media.frametheweb.com/2008/04/29/discrete-and-continuous-modes-of-representation/

“The most likely reason modern media has discrete levels is because it emerged during the Industrial Revolution… Not surprisingly, modern media follows the logic of the factory.”

The argument here suggests that the way new media objects implement computer code is a product of the industrial mindset, with the implication that the values of industrial division of labor, specialization, and standardization led to the modern computer. This suggestion involves a complex set of interrelations between the thought processes introduced by industrialization, the structure of computers, and how these thought processes interact with the structure of computers when people create new media objects.

New media objects are conceived of as collections of discrete, indivisible units, such as pixels; and this conception presupposes a contradistinction to traditional media — such as sculpture or chemical photography — where surface properties vary with continuous and arbitrary degrees of detail.

The use of “discrete” here connotes precision, while “continuous” connotes imprecision: however accurately one attempts to measure the height of a bronze sculpture, for example, changes in temperature will cause the metal to expand or contract slightly on different days, contributing to an inherent imprecision in one’s measurement; a digital picture file, however, will always have the same number of pixels no matter on what day one decides to make a tally.

As it is a central feature of industrial mass production that one be able to manufacture large numbers of precisely identical objects, there are a number of superficial reasons why computers might seem to be the product of an industrial mindset: industrial fabrication techniques facilitated computers coming into widespread use, the individual components of computer hardware are in many respects both standardized in their construction and specialized in their function, and the binary code used by computers very much resembles an idealization of industrial order and production.

These congruences aside, however, the aforementioned argument as presented in The Language of New Media involves a number of substantial problems. Most obviously, the written alphabet is a system of discrete symbols: letters came into use long before industrialization, are just as indivisible as pixels in a digital image, and type set in a monospaced font falls into a grid not unlike the arrangement of pixels on a computer screen. Moreover, letters can be assigned numerical meanings: Hebrew is one example of an alphabet that does this.

There are also historical problems with attributing the discrete operations performed by computers to an industrial mindset.  The history of computing machines reaches back to antiquity, and its early history can be found in such relics as the Antikythera mechanism.  It could be argued that it was “the logic of the factory” that spawned the invention and design of digital computing machines; but it was, rather, a theological motivation that compelled Gottfried Leibniz in the late 1600s to formalize the system of binary code used by today’s computers, and Leibniz furthermore envisaged machines that would perform calculations using his binary system.  Although industrialization may have substantially helped such computing machines become a material reality, their conception lies very much apart from the industrial mindset.
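
The binary notation Leibniz formalized can be illustrated in a few lines. The conversion routine below is a modern sketch, not Leibniz's own procedure; it shows only the core idea that any whole number can be written with the two symbols 0 and 1:

```python
def to_binary(n):
    """Convert a non-negative integer to its binary digit string
    by repeatedly dividing by two and collecting the remainders."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder is the next binary digit
        n //= 2
    return "".join(reversed(digits))

for n in (1, 2, 7, 1679):
    print(n, "->", to_binary(n))  # e.g. 7 -> 111
```

Nothing in this procedure depends on industrial standardization; it is pure arithmetic of the kind Leibniz could, and did, carry out by hand.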

While consumer use of computerized media might in many respects seem to follow “the logic of the factory” — especially as numerous commercial websites profit from user-generated content, which transforms the consumer into a type of specialized producer — the formal and material qualities of modern computerized media follow from a quite different logic.
