Parsing the Languages of the New Media » Principles of New Media
http://media.frametheweb.com
A critical examination of Lev Manovich’s The Language of New Media.

Transcoding – Fifth Principle of New Media
David Witzling | Sun, 04 May 2008
http://media.frametheweb.com/2008/05/04/transcoding-fifth-principle-of-new-media/

The Fifth Principle of New Media describes how:

“the logic of a computer can be expected to significantly influence the traditional cultural logic of media; that is, we may expect that the computer layer will affect the cultural layer.”

This Principle of New Media is the least well-defined, in part because of the unusual technical term used to name it, and in part because it draws very general cultural considerations into what is otherwise primarily a discourse about the mechanical features of computers.

Transcoding is a technical term in computer science that refers, as Manovich notes, to the translation of information from one format to another; an important feature of the term, however, is that the translation occurs within a computer system. Transcoding is the translation of information from one digital format to another; printing a digital photograph onto paper therefore does not qualify. The use of the word “transcoding” is unfortunate because it deprives readers of the linguistic intuitions that a more familiar term might supply; the use of the word as a metaphor is also problematic, because culture is neither a “format” nor a product of the kinds of formal relationships that govern computer formats.
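The narrow, computer-science sense of the term can be sketched in a few lines: one body of information is translated between two digital formats, entirely within the machine. (The choice of text encodings below is an arbitrary illustration, not anything Manovich specifies.)

```python
# A minimal sketch of transcoding in the computer-science sense:
# the same information, translated between two digital formats,
# entirely within the computer.
text = "new media"

utf8_bytes = text.encode("utf-8")       # the information in one format
utf16_bytes = text.encode("utf-16-le")  # the same information, transcoded

# The byte sequences differ, but the information survives intact.
assert utf8_bytes != utf16_bytes
assert utf16_bytes.decode("utf-16-le") == text
```

Printing the string onto paper, by contrast, would leave the digital domain entirely, which is exactly why it falls outside the technical term.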

While it might be Manovich’s intent in this case to argue that our experience with computers colors how we view cultural activity — that computers make us see cultural activity in a more “computerized” sense — it is important to understand Manovich’s treatment of this term as an analogy, rather than a statement about formal equivalency, or one that implies a strong causal relationship. Similar analogies have arisen in the past: after the invention of the mechanical clock, for example, it became popular in Western science to approach cosmology as though one were studying a clock-like mechanical device.

Although it is undoubtedly the case that computers have had some impact on culture, just how this effect is to be understood as substantially different from the technological impact of more traditional media is unclear. In terms of the linguistic consequences of media on culture, it is worth noting that following the widespread cultural acceptance of television and radio, for example, the English language gained a new colloquialism: “to tune out” what one finds uninteresting. The Sapir-Whorf Hypothesis in linguistics, which asserts that language plays a central role in what features of the world we can readily perceive, suggests that the emergence of such colloquialisms as “tuning out” might have consequences more profound than simply the availability of particular informal expressions. Even outside of a discussion about recording media, one can find in Christianity or Islam a mystical, cosmological significance attributed to the word.

Described as “a blend of human and computer meanings, of traditional ways in which human culture modeled the world and the computer’s own means of representing it,” the cultural effect of “transcoding” is understood as affecting “all cultural categories and concepts.”

Given that computers were designed under considerations of precisely the reflexive relationship Manovich here identifies, it should come as no surprise to discover such a relationship present in new media. Manovich’s description of transcoding, however, privileges the relationship as proceeding from computers to culture, and largely ignores the impetus behind the trend in the opposite direction. The discussion of “transcoding” is problematic insofar as it proceeds as though practical computers, in their design and use, could be meaningfully understood apart from the cultural attitudes, beliefs, goals, and habits that produced computers and made their presence commonplace.

In many important respects, computers are modeled on human physiology and the ways our physiology allows us to perceive the world. The RGB color model used by computers to represent images, for example, is successful at reproducing the colors we see in the world because it is modeled on how our physiology recognizes color. Similarly, much of the work that went into designing computers as formal systems derives from Gottlob Frege’s study of natural language.
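The point about the RGB model can be made concrete. A pixel is stored as three channel intensities, and common brightness formulas weight the channels unequally precisely because human vision is unequally sensitive to them. (The pixel value below is an arbitrary example; the weights are the standard ITU-R BT.601 luma coefficients.)

```python
# An RGB pixel: three intensities, loosely modeled on the three cone
# types of human color vision. (Arbitrary example value.)
r, g, b = 255, 200, 50  # red, green, blue channels, each 0-255

# The BT.601 luma weights mirror the eye's unequal sensitivity to the
# three channels: green contributes most, blue least.
luma = 0.299 * r + 0.587 * g + 0.114 * b
assert 0 <= luma <= 255
```

That the weights sum to 1.0 and are skewed toward green is a fact about human eyes, not about arithmetic: the format follows the physiology.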

The distinction Manovich draws between “the computer layer” and “the cultural layer” may be part of an attempt to structure a dialectic relationship between the mechanical behavior of computers and the cultural uses for computers, wherein “new media” becomes a synthesis of “computers” and “culture.” In such a case, culture would seem to carry the connotation of something organic, while computers would carry the connotation of something artificial; such a dialectic, however, would presuppose an opposing relationship between computers and culture that really does not exist as presupposed.

It is generally assumed that because computers are human inventions governed by well-defined mechanical relationships, they can therefore be more fully understood than something like culture, in which we participate but which no one deliberately invented. Despite the well-defined nature of practical computers, there are a number of programmatic difficulties in attempting to formulate a comprehensive theory of computation, and the way Manovich relies upon concepts drawn from computer science inherits many of these difficulties.

Brian Cantwell Smith, in his essay “The Foundations of Computing,” wrote:

“What has been (indeed, by most people still is) called a ‘Theory of Computation’ is in fact a general theory of the physical world — specifically, a theory of how hard it is, and what is required, for patches of the world in one physical configuration to change into another physical configuration. It applies to all physical entities, not just to computers.

“Not only must an adequate account of computation include a theory of semantics; it must also include a theory of ontology… Computers turn out in the end to be rather like cars: objects of inestimable social and political importance, but not in and of themselves, qua themselves, the focus of enduring scientific or intellectual inquiry — not, as philosophers would say, natural kinds.

“It is not just that a theory of computation will not supply a theory of semantics… or that it will not replace a theory of semantics; or even that it will depend or rest on a theory of semantics… computers per se, as I have said, do not constitute a distinct, delineated subject matter.”

The main thrust of Smith’s argument is that the idea of an all-encompassing theory of computation may be as incoherent as an attempt to formulate an all-encompassing “theory of walking.” For Manovich then to ground his theory of new media in terminology from computer science, without carefully delineating in what possible domains his assertions are applicable, presents very fundamental difficulties to the use of The Language of New Media for making valid inferences about individual new media objects.

Variability – Fourth Principle of New Media
David Witzling | Sat, 03 May 2008
http://media.frametheweb.com/2008/05/03/variability-fourth-principle-of-new-media/

In describing the Fourth Principle of New Media, Manovich observes that:

“A new media object is not something fixed once and for all, but something that can exist in different, potentially infinite versions.”

This observation would seem to relate more to the experience of somebody interacting with a new media object than to an artist creating a new media object; the implications for the artist are, however, relatively straightforward. A graphic designer working with a piece of graphic design software might be given some text and images, and might then try out a number of possible fonts for the text and visual arrangements of images. The text, during such a process, is not fixed, but highly variable in its appearance. Before the advent of computerized graphic design, such a design process was much more difficult.
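The designer’s situation described above can be sketched as a template with variable parameters: one source object, many potential versions. (The font names and sizes below are arbitrary placeholders.)

```python
# One "new media object" as a parameterized template rather than a
# fixed artifact; each combination of parameters yields a version.
# (Font names and sizes are arbitrary placeholders.)
template = '<p style="font-family:{font}; font-size:{size}px">{text}</p>'
versions = [
    template.format(font=font, size=size, text="Lorem ipsum")
    for font in ("Garamond", "Helvetica")
    for size in (12, 16)
]
assert len(versions) == 4  # one source, four distinct renderings
```

The text itself is never “fixed once and for all”; what is stored is a rule for producing appearances, which is exactly the variability Manovich identifies.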

It would seem that a large part of why the new media attract so much critical attention relates to the dynamic nature of online content. For example, in both design and distribution, visual text is no longer a static enterprise confined to the monolithic bound book, but has become a new sort of fluid event on computer screens: electronic text can easily be resized or rearranged. Yet the identification of this variability as a central feature of new media reveals at once a contemporary cultural bias towards that which is perceived as new, as well as the continuation of a historical trend that informs how, for example, the fluidity of electronic text ought to be perceived.

That the last quarter of the 20th century brought with it some change in cultural attitudes towards mass media seems clear; that electronic computers continue to play some part in this change also seems clear. Something, then, is new; but to conclude that whatever properties are found in the new media are themselves new, or therefore fundamental to the perception of newness, is a deeply problematic approach. The problem might stem in part from the cultural value Modernism placed on novelty, but the perceived novelty of dynamic text (be it in online syndicated or database-driven content, the market for branded plain-language neologisms such as “google,” or the proliferation of commonplace semantic conventions with plain-language vocabularies such as HTML, CSS, or BBCode) is not strictly a recent cultural phenomenon. In thinking about why this cultural perception exists, it might be worthwhile to consider that the history of modern typography began with Gutenberg’s invention of movable type.

Among the early effects of Gutenberg’s movable type was a decrease in the cost of obtaining printed material, and an increase in the accessibility of printed material. Much of what we see in the effects of dynamic online content is in many respects similar: computers make it more convenient to access and manipulate media objects. To assert, then, that computers have introduced fundamentally new types of manipulations might reveal useful observations in a certain context, but the overall impact of computers in practical respects relates more directly to matters of convenience.

The discussion of new media’s variability, if it suffers from being too specific in its cultural scope, is perhaps too general in its technical analysis. In asserting that “instead of identical copies, a new media object typically gives rise to many different versions,” Manovich neglects one of the fundamental reasons for the utility of digital computers: be it in copying digital video from a camera to a computer, or in copying text from one computer to another, a contributing factor to the widespread success of digital computers has been their ability to make exact copies of things in a way that is impossible with many traditional media. A reproduction of a chemical photograph changes the image being reproduced because the reproduction introduces an additional amount of grain into the image; duplicating a digital picture file neither requires such a change in the product, nor do the economics of mass production and distribution imply greater costs for this increase in the accuracy of replicability.
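The claim about exact duplication is straightforwardly verifiable. Copying a digital file produces a bit-for-bit identical duplicate, which a cryptographic hash can confirm; a sketch, using random bytes as a stand-in for image data:

```python
import hashlib
import os
import shutil
import tempfile

# Duplicating a digital file yields a bit-for-bit identical copy,
# verifiable by comparing cryptographic hashes -- unlike reproducing
# a chemical photograph, which adds grain at every generation.
def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "photo.dat")
    dst = os.path.join(d, "copy.dat")
    with open(src, "wb") as f:
        f.write(os.urandom(4096))  # stand-in for image data
    shutil.copyfile(src, dst)
    copies_identical = sha256_of(src) == sha256_of(dst)

assert copies_identical  # the duplicate is exact, at no marginal loss
```

This is the capacity Manovich’s emphasis on “many different versions” passes over: perfect replication, not variation, is what traditional media could not do.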

It could be argued here that the replicability of new media objects encourages their modification: because products can be reproduced accurately enough to contain a great many reliable “moving parts,” and to interoperate synchronously with other devices that likewise involve many “moving parts,” such modifications as the “customization” of a product’s use and behavior become more convenient for the “audience” of end-users. This convenience as a cultural value, however, would be an anthropological observation not directly addressed in the text. The variability of new media objects is an observation Manovich makes about the medium rather than about culture, and one he derives from his observations about the new media’s Numerical Representation and Modularity.

Automation – Third Principle of New Media
David Witzling | Sat, 03 May 2008
http://media.frametheweb.com/2008/05/02/automation-third-principle-of-new-media/

The Third Principle of New Media is introduced as follows:

“The numerical coding of media (principle 1) and the modular structure of a media object (principle 2) allow for the automation of many operations involved in media creation, manipulation and access. Thus human intentionality can be removed from the creative process, at least in part.”

Although it is certainly true that many aspects of the new media are facilitated by automated processes on computers, to identify such processes as central features or concerns of new media practices does little to clarify what aesthetic issues come into play when artists make use of the new media. To ground the discussion in what is really a material observation about the behavior of computers obscures more meaningful observations about how artists working with new media behave. It is not unlike discussing a painting in terms of how the paint dries.

The difficulty with Manovich’s approach here can be discerned in the consideration of how and why one might distinguish a new media art object made in part with automated processes on a computer from a painting made with pigments that are manufactured in automated factories; The Language of New Media contains no way to determine at what point the mediation of automated processes becomes sufficient to distinguish the new media from traditional media.

Just as it might seem odd to generalize about painters on the basis of how their pigments were manufactured, it seems odd to generalize about new media artists in terms of the automated processes they employ. While a particular artist might for some reason choose to take such processes as a thematic concern in an artwork, or adjust his or her style to the peculiarities of certain such processes, to generalize that such processes are of central concern to understanding how all other artists use a given medium would seem to generalize too much.

From the perspective of the artist, there is a certain convenience to be found in the ability of computers to automate certain types of tasks; yet in the context of new media, to identify this automation as central to the medium in a sense reduces the new media artist to a button-pusher: a consumer of automated processes rather than a creator of artworks. The effect of the assertion is similar to Truman Capote’s remark about what is perhaps Jack Kerouac’s most famous novel: “that’s not writing, that’s typing.” To diminish human intentionality in an analysis of new media is to diminish the fact that media come into use because they suit certain purposes.

It might well be argued, furthermore, that the way automation affects the behavior of the new media artist is to increase the role of human intentionality: because of the convenience with which many choices of sophisticated manipulations can be presented to a new media artist, there are more possibilities for the artist to deliberately reject. Automated processes on computers are also designed with a great deal of effort and intentionality, and there is a good deal of skill involved in learning how to make use of them – either as an artist or as a consumer.

Modularity – Second Principle of New Media
David Witzling | Wed, 30 Apr 2008
http://media.frametheweb.com/2008/04/30/modularity-second-principle-of-new-media/

The Second Principle of New Media describes what Lev Manovich identifies as “the fractal structure of new media.” This Principle holds that the resulting objects of new media practices have “the same structure on different scales.” Elaborating on this premise, Manovich observes:

“Media elements, be they images, sounds, shapes or behaviors, are represented as collections of discrete samples … These elements are assembled into larger-scale objects but continue to maintain their separate identities.”

Although there is an element of truth to this observation, the description is inaccurate in important ways: “fractal” typically refers to a type of self-similarity that manifests itself on different scales. While a digital image might be composed of discrete pixels just as a web page might be composed of several discrete JPEG images, an individual pixel resembles neither a web page nor an image file.

The main problem here isn’t with the description of new media objects as modular, but with what inferences might be drawn from Manovich’s particular description of modularity.

A more accurate description might substitute “interoperable” for “fractal.” Even so, while the parts of many industrial products — such as automobiles — are modular and designed for interoperability in a sense similar to that proposed by Manovich, so are parts of language in certain respects.
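The sense in which assembled elements “maintain their separate identities” can be sketched with a simple composite structure (the field names below are hypothetical, chosen only for illustration):

```python
# Elements assembled into a larger-scale object while remaining
# independently addressable. (Field names are hypothetical.)
image_a = {"type": "jpeg", "name": "photo1.jpg"}
image_b = {"type": "jpeg", "name": "photo2.jpg"}
page = {"type": "webpage", "elements": [image_a, image_b]}

# Editing the element through the page is editing the element itself:
# the page holds references to its parts, not fused copies of them.
page["elements"][0]["name"] = "photo1-cropped.jpg"
assert image_a["name"] == "photo1-cropped.jpg"
```

Note that nothing here is self-similar across scales: the page does not resemble its images, which is why “interoperable” or “modular” describes the structure better than “fractal.”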

Moreover, to describe digital images as modular insofar as they are composed of pixels is to omit an important distinction between how computers typically store visual information and how that visual information is displayed. The JPEG file format, which Manovich mentions in his discussion of new media’s modularity, does not explicitly store information about individual pixels. Rather, the JPEG format uses mathematical models to abstract visual information, then uses these models to generate pixels for display.
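The distinction can be illustrated with the transform at the heart of JPEG. What is stored are coefficients of cosine basis functions, not raw pixel values; a simplified one-dimensional DCT-II over a single block (JPEG actually uses a two-dimensional 8×8 transform followed by quantization) shows the idea:

```python
import math

# Simplified 1-D DCT-II over one 8-sample block. (JPEG proper uses a
# 2-D 8x8 transform plus quantization; this is an illustrative
# reduction, not the actual codec.)
def dct(block):
    n = len(block)
    return [
        sum(x * math.cos(math.pi * (i + 0.5) * k / n)
            for i, x in enumerate(block))
        for k in range(n)
    ]

samples = [52, 55, 61, 66, 70, 61, 64, 73]  # one row of intensities
coeffs = dct(samples)

# The zeroth coefficient is the block's total (unnormalized) brightness;
# the signal concentrates in the low-frequency coefficients, which is
# what makes discarding the rest -- lossy compression -- viable.
assert abs(coeffs[0] - sum(samples)) < 1e-9
```

The pixels a viewer sees are generated from these coefficients at display time, which is why “an image is made of pixels” conflates storage with presentation.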

Pixels are representations of information, and computer programmers structure the information such that the representation can appear to users as an image. Any information that can be stored on a computer can be represented as pixels on a monitor. The way information is structured in a JPEG image is, however, quite distinct from the way information is structured in a block of text.

The information content of the above text can be interoperably represented as pixels with either of the following two images:

[16-bit image generated from ASCII]
[1-bit image generated from ASCII]

Numerical Representation – First Principle of New Media
David Witzling | Mon, 28 Apr 2008
http://media.frametheweb.com/2008/04/28/numerical-representation-first-principle-of-new-media/

The First Principle of New Media identified in The Language of New Media is that the resulting objects of new media practices “are composed of digital code; they are numerical representations.”

When we look at a picture of the Mona Lisa on a computer screen, we do not consciously perceive digital code or numerical representations. An implicit phenomenological distinction is thus being drawn between two descriptive categories: the operation of the computer (how information behaves within a computer as a formal system), and the conscious perceptions of a person interacting with a new media object through a computer interface (we see “new media objects” rather than digital code or numerical representations).

While these two descriptive categories refer to the same physical behavior (what, in a given instance, a person might be doing with a computer), they also require different vocabularies to adequately describe precisely what physical processes are involved and how these processes affect us. In the case of a person doing something with a computer, these vocabularies differ in terms of “how” something might be happening, agree on “that” which might be happening, and converge on “why” certain events result.

The First Principle of New Media elects to analyze the resulting objects of new media practices in terms of how information behaves within a computer as a mechanistic formal system. The assumption here is that analyzing new media in such a way will lead to a more fundamental or profound account of new media, compared to a study of what people do with new media; the desire behind such an assumption is often a belief that a more fundamental account of a phenomenon might be understood as objectively grounding subsequent inferences about observations of relevant phenomena — in this case, that phenomenon being new media objects.

Contemporary computers are formal machines designed to provide a syntactical specification for a set of formal symbols, the rule-governed means to manipulate these symbols, and the physical means to express these symbols using arbitrarily distinct formal systems.

The information stored on a computer is stored in a symbolic form, and numbers are just one possible way to represent these symbols. A loose analogy can be drawn with natural languages: when we have something in mind that we would like to express, we may write it down on paper so that it is expressed as chemical pigments and light waves, or we may speak it aloud so that it is expressed as air molecules and sound waves. If we say something aloud, it would not be correct to assume that the spoken expression is more or less fundamental than its written equivalent; if we want to understand natural language communication, it would not be correct to privilege the spoken word over the written word, or to focus strictly on how words exist in minds before they are communicated to others.

When we talk about information stored in a computer, we frequently talk about binary code being composed of 1s and 0s; the fact that we use the Arabic numerals “1” and “0” to represent this code is simply a matter of convention. TRUE and FALSE or ON and OFF work just as well as 1 and 0; the convention we use has little effect on how the stored information is structured, so long as the convention allows exactly two states in binary opposition.
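That the labels are conventional is easy to demonstrate: any two distinguishable states carry the same structure, and the choice below of Python booleans, strings, or digits is itself just another convention.

```python
# The same binary structure under three naming conventions.
as_bools = [True, False, True, True]
as_switches = ["ON" if b else "OFF" for b in as_bools]
as_digits = "".join("1" if b else "0" for b in as_bools)

# However the two states are labeled, the stored structure -- and
# hence the integer it can be read as -- is identical.
value = int(as_digits, 2)
assert value == 11 and as_switches[1] == "OFF"
```

Only the requirement of exactly two opposed states does any work; the numerals are decoration.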

It is certainly possible to treat all the 1s and 0s involved in storing a software application on a computer (along with all the plain-language text unique to that software application, such as online documentation) as a single integer, represented by a very large base-2 number. However, the same 1s and 0s can also be viewed equivalently as instructing the computer hardware to move bits of information from one place to another; or, those 1s and 0s can be viewed as truth-value assertions in a long expression of Boolean logic.
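The same point in miniature: one stored byte sequence admits several equally valid readings, as a single integer, as text, or as a list of truth values. (The two bytes below are an arbitrary illustrative choice.)

```python
# One byte sequence, three equally valid interpretations.
raw = bytes([0x48, 0x69])  # two arbitrary stored bytes

as_int = int.from_bytes(raw, "big")  # one base-2 integer
as_text = raw.decode("ascii")        # a pair of characters
as_truth = [bool((raw[0] >> i) & 1)  # the first byte as truth values
            for i in range(7, -1, -1)]

assert as_int == 18537 and as_text == "Hi"
```

Nothing about the stored bits privileges the numerical reading over the textual or logical one, which is the crux of the argument that follows.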

In his essay, “The Practical Logic of Computer Work,” Philip E. Agre asserts that:

“The design of a computer begins with formalization — an intensive and highly skilled type of work on language. A computer is, in an important sense, a machine that we build so that we can talk about it; it is successful if its operation can be narrated using a certain vocabulary. As the machine is set to running, we too easily overlook the complicated reflexive relationship between the designer and user on which the narratibility of the machine’s operation depends. The corpus of language that was transformed in producing the machine, like any discourse or text, is necessarily embedded in an institutional and historical context, and the machine itself must therefore be understood as being, in some sense, a text.”

To describe a computer program as a long expression in Boolean logic shouldn’t be counter-intuitive: computer programs are, after all, written in artificial “programming languages.”

This means that computers are just as much logic machines as they are arithmetic machines. That the information computers store can be represented numerically is very much incidental to how the stored information is structured, and is neither a fundamental nor an axiomatic observation about how computers behave.

Why we so often talk about computers as though they are arithmetic machines (or fancy calculators) is in large part a matter of what sorts of patterns we are culturally taught to be sensitive to. A modern city-dweller well-acquainted with the cycles of the business day and traffic laws might easily see patterns in a contemporary cityscape which a farmer from 1800 might have to struggle to perceive. We are taught a good deal of arithmetic in public schools, but very little formal logic; we are furthermore taught science in the tradition of a Platonic-Pythagorean conception of nature, which holds abstract constructs to be the most fundamental description of the world. Since we know science gave us computers, why wouldn’t arithmetic therefore appear to be essential to the most fundamental description of what computers do?

Given, however, that computers are just as much logic machines as arithmetic machines, it is not at all self-evident why the numerical features of computerized information storage should be privileged in an analysis of how practical computers behave.

Furthermore, it is not unique to new media objects that they originate in a numerical form: musical notation, too, can be viewed mathematically, as can the design of architecture (or the motion of planets for that matter). In fact, the information contents of practical computers are hardly unique in their ability to be described mathematically, and their ability to be described mathematically is hardly fundamental to their operation. It is therefore just as problematic to describe computers as arithmetic machines as it is to describe the arithmetic features of new media objects as characteristic of new media.

It is perhaps more relevant to discuss contemporary computers in terms of the relative convenience with which they allow us to perform sophisticated manipulations of symbolic information.

Five Principles of New Media
David Witzling | Mon, 28 Apr 2008
http://media.frametheweb.com/2008/04/28/five-principles-of-new-media/

The analysis offered in The Language of New Media is built around Five Principles of New Media. In introducing these Five Principles, Lev Manovich proposes that:

“the last three principles are dependent on the first two. This is not dissimilar to axiomatic logic, in which certain axioms are taken as starting points and further theorems are proved on their basis.”

The use of the word “axiomatic” here implies that the first two principles are self-evident observations, and the last three principles are consequences that follow directly from the interaction of the first two; so “principle” is here used to refer to two distinct categories of logical assertions: assumptions and conclusions.

After establishing his analytical methodology as such, Manovich states in the following sentence: “Not every new media object obeys these principles.” If it is the case that not every new media object obeys these principles, then the relationship of the principles to new media is incompatible with, and therefore quite dissimilar to, axiomatic logic.  The approach Manovich proposes is perhaps more appropriately described as reductionist.

It is the very essence of axiomatic logic that it can be used to uniquely and definitively distinguish and identify logical forms. Just as one never finds a rooster that is not a chicken, nor a triangle with more or fewer than three sides, one ought not reach conclusions using axiomatic logic that conflict with one’s axioms.

Furthermore, the formulations of the Five Principles themselves are deeply problematic, often involving contradictory implications, and at times relying upon the deductive conclusions of incompatible philosophies for evidence.
