                                    NETFUTURE
    
                       Technology and Human Responsibility
    
    --------------------------------------------------------------------------
    Issue #44      Copyright 1997 Bridge Communications          April 2, 1997
    --------------------------------------------------------------------------
                Editor:  Stephen L. Talbott (stevet@netfuture.org)
    
                         On the Web: http://netfuture.org
         You may redistribute this newsletter for noncommercial purposes.
    
    CONTENTS:
    *** Editor's Note
    *** Quotes and Provocations
          How Much Enduring Art Is There in Computer Graphics?
          Year 2000 Apocalypse?
          Allowing the World to Change
          Education: Waiting for the Outcry
    *** Surfing Ancient, Homeric Fields (Stephen L. Talbott)
          The Net and secondary orality
    *** About this newsletter
    

    *** Editor's Note

    I'm in need of a Web-worthy PC (and peripherals) for my work on NETFUTURE. If your organization has such a beast and could contribute it to a nonprofit, tax-exempt institution, please get in touch with me at stevet@netfuture.org.



    *** Quotes and Provocations

    How Much Enduring Art Is There in Computer Graphics?

    In the February, 1997 issue of Communications of the ACM, Bruce Sterling responded to a question about the relation between art and computer graphics. He pointed out the ephemerality of the medium (will today's digital work of art be accessible via the disk drives, programs, and standards of twenty-five years from now?), and then went on to ask:
    And what is going on, exactly, with computer graphics' unhealthy graverobbing tendencies? What is this eerie insistence on appropriation and mutation, cutting and pasting, swiping and wiping? How come so much computer art is scanned up, and scammed up and done on the cheap? You'd think that absolute control over every pixel, and a palette of zillions of colors, and form-generation programs of unheard-of sophistication -- that all this would allow artists to create imagery of absolute novelty, images never seen before, amazing images, world-shattering images. Well, where is the stuff? Teapots, chessboards, rotating logos, chromed everything, glass bubbles and sci-fi monsters. Why is it that computer artists seem to have so little to say?
    Good question, pointing us back to the truism that having technologies for communication and expression is not the same thing as having something worthwhile to say. At least, one hopes it is still a truism; it's amazing how quickly the obsession with technical glitz can make us forget simple truths. (Thanks to John Thienes for passing along the article.)

    Year 2000 Apocalypse?

    It's becoming clear (even to formerly blasé types like me) that strange and awful possibilities are constellating themselves round the "year 2000 problem." The more closely experts look at large, time-sensitive software systems, the more they are reporting back (as Bank-Boston's chief technology officer did in the March 8 Economist) that "what we found was terrifying." That particular bank's information systems, according to the Economist, reveal a problem "far more complex than anyone had imagined."

    It turns out that the number of ways programmers can conceive, represent, and manipulate dates -- both explicitly and implicitly -- is unlimited. (You are probably familiar with Julian dates, for example, where the day of the year is represented as a number from 1 to 366. But some programmers have found it convenient to use 1 to 1461, the number of days in a four-year leap cycle.) These dates may in turn be represented in programs by names betraying nothing of their time-relatedness. How do you find the relevant names and data structures among millions of lines of code, in order to fix them?
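
    To make the search problem concrete, here is a minimal sketch in Python of the second representation just described. All the names are hypothetical, and it assumes the four-year cycle begins with the leap year:

        # A minimal sketch (hypothetical names throughout) of a "day number"
        # that runs from 1 to 1461 across a four-year leap cycle, assuming
        # the cycle begins with the leap year.

        CYCLE_YEAR_LENGTHS = [366, 365, 365, 365]   # 1461 days in all

        def decode_cycle_day(dayno):
            """Return (year_within_cycle, day_of_year) for a 1-1461 day number."""
            if not 1 <= dayno <= 1461:
                raise ValueError("day number outside the leap cycle")
            for year, length in enumerate(CYCLE_YEAR_LENGTHS):
                if dayno <= length:
                    return year, dayno
                dayno -= length

        print(decode_cycle_day(400))   # (1, 34): day 34 of the cycle's second year

    Notice that nothing here says "date"; an auditor scanning millions of lines for two-digit years could sail right past the 1461.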

    When Bank-Boston did fix certain programs, it was plagued by system failures as soon as it linked its computers to those of newly acquired BayBank. The problems had to be solved a second time. Current estimates in the banking industry are that, overall, the fixes will cost $1 per line of code. Bank-Boston has some 50 million lines of code -- roughly a $50 million repair bill for that one institution.

    A widely cited estimate of the long-term, global cost of the year 2000 problem comes to $1.6 trillion. The trend in such estimates is upward, not downward, and surely few corporations or government agencies have reason to announce prematurely any gathering sense of panic they may feel about their own emerging prognoses. What we have so far is an increasing number of carefully phrased statements by official spokesmen to the effect that "if such-and-such a huge task cannot be managed successfully, this or that company or agency or industry faces grave risks." Or else just the deadpan announcement of fact, such as this one, offered by Jack K. Horner of the Los Alamos National Laboratory:

    Various well-calibrated software estimation models (SLAM, REVIC, PRICE-S) predict that fixing the Y2K problem in systems of about 500,000 lines of code or larger will take more time than is available between now and the year 2000, regardless of how many programmers are thrown at the job. Most of the US's military command-and-control systems contain more than 500,000 lines of code. (Risks-Forum Digest, 18.96)
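    For readers who want to see how such a prediction arises, here is a rough illustration. It does not use the proprietary models Horner cites; instead it applies the published formulas of Boehm's basic COCOMO model (embedded mode), which make the same qualitative point -- calendar schedule grows with system size and cannot be bought down with head count:

        # An illustrative calculation using Boehm's basic COCOMO model
        # (embedded mode), not the SLAM/REVIC/PRICE-S models cited above.
        # Effort is in person-months; schedule is in calendar months and
        # is largely insensitive to how many programmers are assigned.

        def basic_cocomo_embedded(kloc):
            effort = 3.6 * kloc ** 1.20        # person-months
            schedule = 2.5 * effort ** 0.32    # calendar months
            return effort, schedule

        effort, months = basic_cocomo_embedded(500)   # a 500,000-line system
        print(round(effort), round(months))           # roughly 6200 and 41

    Forty-one calendar months, starting in April 1997, runs well past January 2000 -- which is the shape of Horner's point.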
    What all this still omits is the larger, social dimension. At some point, perhaps very soon, some inescapable system failures -- or last-resort work-arounds with unacceptably high cost -- will become matters of public record. With public confidence shaken and the press beginning its predictable feeding frenzy, there is no telling where events might lead. The issues here are not merely technical ones, and a public whose primary education in technological assessment has so far consisted of little more than a diet of Internet hype will not likely prove wise and considered in its responses.

    Which brings me to a newsletter I was recently shown. It's by the financial adviser and professional doomsayer, Gary North. I had forgotten that one could produce a newsletter with such shameful disregard for the intelligence of one's readers. But among the various observations and claims in these 24 pages of sensationalism, North embeds enough warnings from well-placed officials to make reasonable people start worrying.

    All this leads North to posit scenarios whereby, for example, Allstate looks to be in trouble in 1999, holders of cash-value policies start demanding their money, other policyholders stop sending in their premiums, bankruptcies occur, people start selling off their mortgages, stocks and bonds, the markets collapse....

    But North's primary scenarios involve failures at the Social Security Administration and Internal Revenue Service. The challenges for these bureaucracies are undeniably huge, the history of failure massive, and the time short. North interviewed Shelley Davis, former historian of the IRS:

    I asked her point-blank if the IRS would be flying blind if the revision of its code turns out to be as big a failure as the last 11 years' worth of revisions. She said that "flying blind" describes it perfectly .... Then she made an amazing statement: the figure of 11 years is an underestimate. She said that the IRS has been trying to update its computers for 30 years. Each time, the update has failed. She said that by renaming each successive attempt, the IRS has concealed a problem that has been going on for 30 years.
    North claims that system failures affecting Social Security checks are virtually certain, leading again to the bank and market collapse scenario. The IRS in turn depends upon the Social Security computers for data about taxpayers. As the problems ripple from Social Security through the IRS, citizens will stop providing correct information on their tax forms. The government will collapse.
    
    
    
    Well, the point is that North's alarmism is as much a part of the total picture as the purely technical work to be done. Would it take more than one or two high-profile failures to push events along one or another out-of-control trajectory? There don't seem to be many left who are willing to deny the chaotic possibilities outright -- in which case it's hard to justify the term "alarmism" above. Given the scale of the potential disasters, however remote their likelihood, what can one be if not alarmed?
    

    How did we get here? That is the question upon which I hope to offer some commentary in the future. There will undoubtedly be much finger-pointing throughout society, but the interesting thing to me is how hard (and unprofitable) that exercise turns out to be if one wants to identify real guilt. We need, rather, to look at the overall relation between technology and society, colored as it is by attitudes in which we all participate. Clearly there is something amiss in the casual way we have been marrying social structure to programming technique, and we need to understand just what this is.

    Unfortunately, the social atmosphere in coming days may not be very conducive to clear-headed analysis.

    Allowing the World to Change

    The Wall Street Journal (March 24, via Edupage) ran a story about the introduction of a single currency (the "euro") by the European Monetary Union. You can imagine the challenge in finding and correcting all mentions of currency in existing software -- a problem made worse by the fact that some nations (Italy and Spain) do not use decimals in prices, so that software in those countries is not equipped to handle them. There is also conflicting usage of decimals and fractions in the quoting of security prices. Supposedly, the euro problem rivals the year 2000 problem in complexity and ubiquity.
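
    The broken assumption is easy to exhibit. In the Python sketch below, the conversion rate is hypothetical (the official rates had not yet been fixed): an integer lira amount, for which no decimal handling was ever needed, becomes a euro amount that demands two decimal places and an explicit rounding rule.

        # A minimal sketch of the assumption the euro invalidates: lira
        # prices are whole numbers, so an integer field sufficed; euro
        # prices need two decimals and a rounding convention.  The rate
        # below is hypothetical.

        from decimal import Decimal, ROUND_HALF_UP

        LIRE_PER_EURO = Decimal("1900")      # hypothetical conversion rate

        def lire_to_euros(lire):
            """Convert an integer lira amount to a two-decimal euro amount."""
            euros = Decimal(lire) / LIRE_PER_EURO
            return euros.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

        print(lire_to_euros(15000))   # 7.89 -- fractions where none existed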

    All of which reminds me again of that observation by Frederick Brooks, Jr., about complex expert systems becoming "crashingly difficult to maintain as the world changes." (See NF #16.) The year 2000 problem might be seen as the result of shortsightedness or a series of mistakes. But the euro problem exists because the world has changed unpredictably, invalidating the assumptions upon which the software was built.

    You can't prevent people from changing their world. Or, rather, you had better not, if you don't want to abort the entire human enterprise. But as the global technical system is fitted ever more snugly to the transactions of today's world, a crucial question is to what degree it will become a straitjacket, hindering our ability freely to create tomorrow's world.

    At a time when concepts ranging from "money" to "nation-state" are necessarily fluid and subject to creative change, we may experience powerful if subtle pressure to allow the technical system rather than human wisdom to dictate the terms of that change. Actually, though, the pressure of hundreds of billions of dollars in software modification costs upon the mind of a policymaker probably won't be all that subtle next time around.

    Education: Waiting for the Outcry

    The following story, from NETFUTURE reader David Kulp, doesn't carry much surprise value for most readers of this newsletter. But it is a story being repeated daily in a thousand different cities. Why aren't the cries of outrage deafening us by now?
    Regarding the on-going saga of computers in the schools, I thought I'd share my recent discovery. As a PhD student with no children, I am disconnected from the grade school developments, but tonight I dropped by my advisor's house to review some papers. My advisor has two children about 10 and 11 attending the local elementary school. On the table was an "assignment" for Travis. Here I saw a typed sheet with fill-in-the-blank HTML. The project was to create your own "homepage" -- the most important elements being choosing an RGB background color and picking a suitably gargantuan image to place in your document. The project was for sixth graders.

    Now I wondered, did anyone consider the worthiness of such an endeavor? It truly saddened me to think that blind technology mongering had found its way so painlessly into the classroom. What is a child learning from such an exercise? Is it to instill an enthusiasm for science and technology or to teach HTML? Or maybe surreptitiously improve writing skills? In how many ways might the hours devoted to this type of project be better spent?

    Surely there is enough hype in the media and elsewhere to suitably "excite" children about science and technology (if that is considered a worthy goal at all). And HTML is a scripting technicality that no future adults need concern themselves with. As for writing skills, the recommended sentences were mostly of the form: "To see ..., click *here*."

    And so what was the take on this from the family? Travis was thrilled to see himself "on the net," and mother was proud to see her son gain the valuable skills needed in the technology-burdened next century. The only concern from Mom: make sure that no photo of young Travis is displayed with a caption indicating his name, lest the child be stalked or stolen.

    Regards,
    -david kulp (dkulp@cse.ucsc.edu)

    And Lowell Monke continues his reporting from the Des Moines public school system:
    Right now we are installing the "enterprise system", a multi-million dollar accounting and record-keeping system across the district. Every school will be hooked up to it. Since it really lies outside any connection with instruction, it was approved last spring, separately from what I have been involved with. But it is now having a noticeable impact on instruction, as I found out last week.

    My lab got upgraded at Christmas time: seventeen new Macs and two new IBMs. Part of the federal grant we got for the upgrade was to go for networking all of them with Ethernet, tying them to the building network and the Internet and allowing my students to learn web page design, Perl, Java, and so on. Last week, when I started pushing a bit to get the work done, I found out that the network technicians in the district had all been assigned to work on the enterprise system, and that the department was not accepting any new work in the district until after that project was done -- which will be in July if there are no more delays. This leaves me with a lame lab -- the Macs can function, very slowly, through the existing AppleTalk network, but the IBMs can't even get to a printer. How many other teachers have been affected by this I don't know. But it appears that we are already entering the slow-grind-to-a-halt phase of this technological expansion. We already don't have enough technicians to get the equipment to do what we justified purchasing it for, and we haven't spent a dime of the millions the state has allocated for new computer purchases. And as I said before, there are no plans that I know of to hire more technicians. (I'm sure the fur would really fly if they did, in light of the 104 instructional staff positions that have just been eliminated.)

    Of course, to a teacher like myself it seems ridiculous to be told that an institution dedicated to educating kids sets a higher priority on getting its record-keeping system in place than on making its instructional equipment operable. (To their credit, the network people have helped us plan the network layout and approved our going outside the district to hire someone to come in and install the network -- which may be the way we get around the politically hot problem of hiring more technicians: we'll just contract the work out.)

    Lowell (lm7846s@acad.drake.edu)

    SLT



    *** Surfing Ancient, Homeric Fields
    
    From Stephen L. Talbott (stevet@netfuture.org)
    

    (The following notes should be construed as no more than a prospectus for a paper that I, or someone, needs to write. A fuller survey of the literature might change this or that detail of the argument, but I'm convinced that the general thrust of the piece is correct -- and long overdue.)

    
    
    
    The Net -- that unfortunate catch-basin for unrestrained speculation and projected nonsense -- is, according to a persistent story in both popular and academic discussions, returning us to ancient, Homeric fields, where we are destined to engage anew in heroic, tribal contests, while nobly and collaboratively composing our own epic masterpieces worthy of the great bard himself.

    The evidence presented for this romantic expectation typically includes references to flame wars, the immediacy of electronic communication, interactivity, the overcoming of time and space, and the Net's fragmenting ("tribalizing") influences -- and almost always calls upon the notion of "secondary orality." Communication on the Net, we're told, has been shown to possess many qualities characteristic of earlier, oral cultures; ergo, we must be tending toward oral culture ourselves. As one commentator has put it:

    The implication of these oral qualities in CMC [computer-mediated communication] forums is that, ultimately, new discourse communities are created, with vast political, cultural, and social implications, recreating the immediacy of pre-literate cultures, but adding on space- and time-independence. (December)
    A second scholar, writing in a rather more lively vein, says that
    by means of our computers, telephones, televisions, VCRs, CD players, and tape recorders, Hypertext breaks into our cozy study, grabs us by the scruff of the next [sic], and plunges us full-bore into the advent(ure) of secondary orality. Surprisingly enough, hypertext embodies and enacts those distant and exotic aspects of primary orality by plunging us deeply into what some call the Matrix, what others call the Net, and what still others call Cyberspace. Orality isn't just a quaint antiquarian area of study anymore -- it is an apt description of the world into which we're hurtling ever deeper every day. (Fowler)
    But, of course, it required John Perry Barlow to set it to a salable rock beat:
    Soon most information will be generated collaboratively by the cyber-tribal hunter-gatherers of cyberspace. (Barlow: p. 90).
    While there is a bare grain of truth to the idea of secondary orality, the overwhelming weight of the evidence tends in a direction nearly opposite to the various suggestions we have just heard. Computers and the Net have accentuated certain underlying tendencies of literate culture -- especially the peculiarly one-sided and unhealthy tendencies -- and in doing so have moved us even further from those ancient societies now being idealized.

    It is not a bad thing to be moving further away from the past -- and toward our own future -- but the growing imbalances in our current, machine-mediated culture are worrisome. Nowhere are these imbalances more visible than in the very rhetoric that claims we are escaping our former one-sidedness in a return to the past.

    What Did Ong Say?

    Walter J. Ong fathered the discussion of "secondary orality," a phrase he used most definitively in his 1982 book, Orality and Literacy: The Technologizing of the Word. What is not so often mentioned, however, is that Ong saw a double potential in "the electronic transformation of verbal expression": on the one hand, it intensifies certain qualities of print cultures; but, on the other hand, it brings "consciousness to a new age of secondary orality" (p. 135). Ong was here talking about all electronic media -- in particular, telephone, radio, television, and tape recordings. His few passing mentions of the computer in Orality and Literacy assign it only to one side of the ledger. That is, he mentions how it strengthens certain tendencies of print culture, not how it leads toward a new orality:
    Writing ... initiated what print and computers only continue, the reduction of dynamic sound to quiescent space, the separation of the word from the living present, where alone spoken words can exist. (Ong: p. 82)
    And again:
    The sequential processing and spatializing of the word, initiated by writing and raised to a new order of intensity by print, is further intensified by the computer, which maximizes commitment of the word to space and to (electronic) local motion and optimizes analytic sequentiality by making it virtually instantaneous. (p. 136)
    That is what Ong has to say about the computer as it relates to orality and literacy. Of course, he was writing before the widespread use of email, not to mention chat rooms and MUDs. The contemporary computer will occupy us below. But one thing is clear: on the strength of his fragmentary indications, we cannot saddle Ong with responsibility for all the confusions about orality in which Net punditry is awash today; if anything, what he did say appears flatly to contradict the prevailing notions.

    In what follows I summarize several objections to the argument that electronic text somehow carries powerful echoes from our oral past. In doing so, I draw upon the received distinction between "primary oral" and "literate" cultures, despite my discomfort with this simplistic dualism and with the associated notions of material causation.

    Confusion Between Local and General Trends in Orality

    My first complaint is that the literature on secondary orality commonly mixes up two questions:

        Do particular exchanges on the Net display qualities reminiscent of oral communication?

        Is our society as a whole therefore becoming a more oral society?

    To answer yes to the first question is not necessarily to say yes to the second. In fact, the opposite connection appears more likely. To the extent that computer-mediated communication replaces office and workgroup meetings, telephone conversations, lunch engagements, face-to-face town-hall politics, or encounters with librarians, sales clerks, and bank tellers -- and to the degree in general that time at the computer substitutes for time with family, friends, and passers-by -- to that degree the computer is encouraging us to be less of an oral society. Obviously, everything depends upon what our online communication replaces, and it seems clear enough that the number one computer application -- email -- substitutes for a huge range of formerly oral encounters. Just consider the consequences of telecommuting and the distributed workplace.

    Any residual, oral characteristics we find in all this electronic communication can hardly outweigh the immediate fact that we've been moving toward the abstraction of the detached, soundless word; once-oral exchanges have been giving way to textual ones. To argue for an overall trend toward orality is rather like claiming that our venture into public spaces in our automobiles has made us into a more strongly communal society, on the ground that there are expressions of "secondary community" on the roads and highways. Perhaps there are such expressions, but they are not evidence for the supposed trend.

    This confusion shows up in many variants, depending on which aspect of orality is being discussed. Richard Lanham, invoking Ong's secondary orality, writes that

    the interactive audience of oral rhetoric obviously has returned in full force .... The electronic audience is radically interactive. (Lanham: p. 76)
    If by "electronic" he means "television," his point is scarcely comprehensible. If he means, "computer-mediated," then we need to ask what sort of activity our online interactions stand in for. It's not at all clear that the computer is about to replace television or book-reading; but it is all too clear that we can now execute in relative passivity many tasks that formerly required us to get up and engage people. This does not look like the return of interactivity "in full force."

    Confusion Between Primary Orality and Literate Orality

    My second complaint has to do with another confusion prevalent in the literature. These two questions are not clearly distinguished:

        Are parts of our culture becoming more oral?

        Is any such new orality carrying us back toward primary orality, rather than simply giving us more of the literate sort?

    One can answer yes to the first question without assenting to the second. It has never been the claim of any sane person that literate cultures lack orality. Even before the advent of electronic media, we spoke and listened in myriad different contexts. A huge amount of the business and play of every known society has been conducted orally. What those who have studied the matter tell us is not that highly literate societies lack orality, but rather that their orality is a "literate orality," differing greatly from what we find in primary oral cultures.

    To say, then, that parts of our culture are becoming more oral -- even if it were true -- would not automatically lead to the conclusion that this orality is carrying us even vaguely in the direction of primary orality. One must first show that this new orality is closer to primary orality than to the literate sort -- a challenging task when you consider that the supposed new orality is distinguished by its textuality!

    Some of my further remarks will help to bring out differences between primary and literate orality.

    Primary Orality and the Preservation of Culture

    A third complaint: the explicators of secondary orality simply ignore what has been cited as the most central characteristic of oral cultures. The literature on orality and literacy rests above all on Eric A. Havelock's 1963 work, Preface to Plato, and in that work Havelock is almost fanatical in his emphasis upon one thing: the formulas and recited compositions of an oral culture are what hold the society together. They are tools of education, preserving the laws and customs, the political and moral relations, the religious practices, even a general knowledge of technical skills ranging from those of the warrior to those of the artisan to those of the homemaker.

    Havelock (using different terminology than I would choose) writes that "oral verse was the instrument of cultural indoctrination, the ultimate purpose of which was the preservation of group identity" (p. 100). The Muses, he says, "are not the daughters of inspiration or invention, but basically of memorization. Their central role is not to create but to preserve" (p. 100).

    This seems a long way from the Net, where (in Bruce Sterling's words) "computers swallow whatever they touch, and everything they swallow is forced to become as unstable as they are." Noting that there are no formal archives on the Net, and that in five years there will probably be no way to access a current Web page, Sterling goes on:

    In the 1990s, we produce computers that are high-tech sarcophagi with the working lifespan of hamsters. The English-language computer industry has the production values, and the promotional values, and even the investment structure of the couture industry. (Sterling: p. 79)
    The fact is that computer-mediated communication proves far more corrosive of culture and tradition than preserving of it -- a revolutionary virtue often celebrated on the Net. This directly contradicts the most fundamental principles said to be at work in primary oral cultures.

    The Sounding Word

    In the fourth place, those who discern secondary orality on every hand typically ignore the fundamental role played in oral cultures by what Ong calls the "sounded word as power and action" (Ong: p. 31). Sound penetrates us and resounds within us, meshing us into a living environment. "The spoken word," Ong reminds us, "is always an event, a movement in time, completely lacking in the thing-like repose of the written or printed word" (p. 75).
    Sound cannot be sounding without the use of power. A hunter can see a buffalo, smell, taste, and touch a buffalo when the buffalo is completely inert, even dead, but if he hears a buffalo, he had better watch out: something is going on. (p. 32)
    Havelock explains how the Muses achieve their preserving role in considerable part through the music of the spoken word. It is the word married to rhythm, beat, melody, and harmony -- and through all these to bodily movement -- that lives on from one generation to the next.

    I don't suppose you could find a state of affairs more antithetical to computer-mediated communication, where both "speaker" and "hearer" sit rigidly at their terminals, sacrificing expressive, musical gesture for staccato typing, while emitting and receiving text that probably penetrates the body and senses less fully than any previous manifestation of the word, if only because our concentration and attention spans suffer under the distractions of the electronic age.

    Typical Character of Primary Orality

    A fifth objection, closely related to the previous two: those who speak about secondary orality also ignore the typical, or paradigmatic, character of communication in an oral society. "The artist," according to Havelock, "cannot yet voice some specific and personal creed of his own. The power to do this is post-Platonic" (Havelock: p. 95). And then he quotes the classicist, Adam Parry:
    The formulaic character of Homer's language means that everything in the world is regularly presented as all men ... commonly perceive it. The style of Homer emphasizes constantly the accepted attitude toward each thing in the world and this makes for a great unity of experience. (p. 95)
    Again, this is as far from the Net's display of strident individualism and subjectivity as you could possibly get. We embrace the Net as an outlet for personal expression, as a medium for our own creative uses, even as a testing ground for invented personas. In this we share almost nothing with older, oral cultures.

    Before the Modern Subject

    Sixth, if individual expression was not a pronounced feature of oral societies -- if Homer "can have no personal axe to grind, no vision wholly private to himself" (Havelock: p. 89) -- it was because the individual did not yet exist. The "discovery" of the individual mind, or self, or psyche, as Havelock points out, occurred (in Greek civilization) during the period leading up to Socrates -- which is to say, during the period of developing literacy (pp. 197-99).

    This point is so decisive that it merits the elaboration Havelock offers:

    When confronted with an Achilles, we can say, here is a man of strong character, definite personality, great energy and forceful decision, but it would be equally true to say, here is a man to whom it has not occurred, and to whom it cannot occur, that he has a personality apart from the pattern of his acts. His acts are responses to his situation, and are governed by remembered examples of previous acts by previous strong men. The Greek tongue therefore, as long as it is the speech of men who have remained in the Greek sense "musical" and have surrendered themselves to the spell of the tradition, cannot frame words to express the conviction that "I" am one thing and the tradition is another; that "I" can stand apart from the tradition and examine it; that "I" can and should break the spell of its hypnotic force; and that "I" should divert some at least of my mental powers away from memorisation and direct them instead into channels of critical inquiry and analysis. The Greek ego in order to achieve that kind of cultural experience which after Plato became possible and then normal must stop identifying itself successively with a whole series of polymorphic vivid narrative situations; must stop re-enacting the whole scale of the emotions, of challenge, and of love, and hate and fear and despair and joy, in which the characters of epic become involved. It must stop splitting itself up into an endless series of moods. It must separate itself out and by an effort of sheer will must rally itself to the point where it can say "I am I, an autonomous little universe of my own, able to speak, think and act in independence of what I happen to remember." This amounts to accepting the premise that there is a "me," a "self," a "soul," a consciousness which is self-governing and which discovers the reason for action in itself rather than in imitation of the poetic experience. The doctrine of the autonomous psyche is the counterpart of the rejection of the oral culture. (pp. 199-200)
    What has since gathered into our isolated, skull-encapsulated, individual subjectivities had first to approach man from without -- from the world of nature and the gods. Man was united with this world and experienced its ensouled interior as inseparable from his own. Only with time, and only, finally, with the arrival of the Renaissance and the modern era, was his own share of this interior almost wholly cut off -- abstracted -- from the surrounding world and made into his private preserve.

    Failure to reckon with this fact, and with the different qualities of ancient and modern consciousness, underlies most of the misunderstanding about secondary orality (1). For example, the passion and immediacy of Achilles' wrath is often likened to the passion and immediacy that grips the flame war combatant. But this is to lose sight of the essential differences: ensconced within the barriers of his isolated and subjective self, the flamer hurls his projections at an essentially unknown antagonist. Achilles, on the other hand, was not yet possessed of a self to do the projecting or an interior from which to project. His anger was grounded in the immediately encompassing reality that embraced him and his antagonists, and in which they moved as characters in a drama that carried them along together.

    Achilles was vividly aware of his true circumstances, and was intensely there -- wholly present with his antagonist -- even if his awareness was not highly individualized but more like the immersive awareness of a shared dream. The flamer, by contrast, is typically not present with his antagonist -- is not even relating to his antagonist for the most part, but is only relating to himself. His isolation and inability to break through his protective, solipsistic shell is the most dramatic fact about him. His battles transpire at the opposite end of the spectrum from the heroic contests on the fields of Ilium.

    I will briefly indicate how this objection bears upon one particular line of thought -- namely, the one proposed by George Landow:

    Obviously, some parts of the reading experience [in a hypertext environment] seem very different from reading a printed novel or a short story, and reading hypertext fiction provides some of the experience of a new orality that both McLuhan and Ong have predicted. Although the reader of hypertext fiction shares some experiences, one supposes, with the audience of listeners who heard oral poetry, this active reader-author inevitably has more in common with the bard, who constructed meaning and narrative from fragments provided by someone else, by another author or by many other authors. (Landow: p. 117)
    But the "others" from whom the bard drew were not "other" as we are to each other. We today do not exist within a shared matrix of meaning where individuals have not yet distinguished themselves as selves. To patch together "narrative fragments" for them was simply to express the coherence of their world and their own interwoven existences. It was as true to say that they were the story being told as that they were telling the story.

    We, on the other hand -- having already won our selfhood -- must remain uncommonly awake while doing this patching, putting our individual stamp upon the narrative we weave, in full consciousness of the boundary between self and other, or else be tossed about by our own and society's pathologies. Before the birth of the individual, meaning comes from the world; that is where it originates. After the individual, meaning becomes what we must increasingly participate in making. To forsake this conscious participation is not to return to the world-making or myth-making of the ancients, but rather to give expression to the sickness of the self (2).

    The Abstract Image

    Seventh, the graphical image, whether on the screen of a computer, television, movie, or video game, is often said to be immediate, immersive, visceral, nonconceptual, engaging, wholistic, and concrete -- this in comparison to the more detached, cerebral, linear, and abstract nature of the printed text.

    This may be more or less true, but needs to be set beside two other facts:

    First, compared to the real world represented by these images, the images themselves are highly abstract and remote. No matter how vivid the technology, it is a far different matter to stand on a mountaintop (and how does one get there?) surveying the view than to look at the "same" view upon a screen. So a point needs to be made paralleling my first objection above: to the extent we replace the world with images of the world, we move dramatically toward abstraction.

    Second, the image itself has grown terribly abstract since, for example, the days of Homer. Particularly since the Renaissance discovery of the formal laws of linear perspective, the representational image has been assimilated to the sheerest mathematical abstractions of point, line, projection, and section. The manipulable coordinate systems of computer graphics carry this abstraction to its logical extreme. It has been argued that the world we see today is flat and two-dimensional -- which is why, to us, two-dimensional photographs look realistic, whereas to many of those not yet assimilated to western culture, a conventional photograph is incomprehensible (Barfield, 1977: pp. 65-68; Talbott, 1995: pp. 263-281).
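
    The mathematics involved is astonishingly spare. Here is a minimal sketch in Python (the focal length is an arbitrary choice) of the projection at the heart of both Renaissance perspective and the modern graphics pipeline:

        # A minimal sketch of linear perspective as a graphics pipeline
        # computes it: a point in space is reduced to two screen
        # coordinates by pure ratio.

        def project(x, y, z, focal_length=1.0):
            """Project a 3-D point onto the picture plane z = focal_length."""
            if z <= 0:
                raise ValueError("the point must lie in front of the eye")
            return (focal_length * x / z, focal_length * y / z)

        # Two points on the same sight line collapse to one image point --
        # depth, the third dimension, is simply discarded.
        print(project(1.0, 2.0, 4.0))   # (0.25, 0.5)
        print(project(2.0, 4.0, 8.0))   # (0.25, 0.5)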

    Leon Battista Alberti published the first treatise on linear perspective in 1435. Already at that time he "exhorted his artist-readers to learn to see in terms of ... grid coordinates in order that they develop an intuitive sense of proportion" (Edgerton: p. 119. Emphasis in original). The Italian historian, Giovanni Cavalcanti, wrote in 1438:

    And thus the eye is the ruler and compass of distant regions and of longitudes and abstract lines. Everything is comprehended under the geometric doctrine, and with the aid of the arithmetic art, we see that there is a rule for ... measuring with the eye. (Quoted in Edgerton: p. 115.)
    The window and grid of the perspective artist were necessary to prevent the mind's old habits from subverting the eye's measuring capabilities. The grid successfully squelches the mind's tendency to read meaning rather than gauge spatial relations. In this sense the grid we have unconsciously plastered over our eyes complements those other tools with which, as technologists, we measure the world rather than experience it.

    Just as we must ask whether any new forms of orality are typical of literate societies or throwbacks to oral societies, so we must ask whether new forms of imagery belong more to the dominant, literate trajectory of abstraction, or instead belong to a contrary movement. Everything about computer-generated graphics and the world-replacing uses of images today suggests the deepening hold of abstraction.

    
    
    
    There are many differing potentials in computer-mediated communication, but the dominant tendencies, I would argue, give expression to the pathologies of a literate age whose one-sided preference for abstraction -- now being carried to an extreme by the computer -- leads finally to the subversion of the virtues of literacy. This is a far different matter from a return to the epic heroism of our youth.
    
    
    NOTES
    
    1. Some authorities have discerned fundamentally different qualities of consciousness in ancient or "primitive" societies compared to our own. Other authorities dismiss such claims as sheer fantasy, while decrying the value judgments implicit in terms like "primitive." Both parties to the argument typically fail to get cleanly "outside the skin" of our modern sensibilities so as to appreciate the ancients for what they were instead of saddling them with the burden of nineteenth- and twentieth-century assumptions. The only contemporary writer I know who avoids the pitfalls on both sides of this debate is Owen Barfield. He has spent the greater part of this century (beginning with his History in English Words in 1926 and Poetic Diction in 1928) characterizing the "evolution of consciousness" with a broad sweep and a rigor of detail that is unmatched. He pursues the method of historical semantics, recognizing that "the full meanings of words are flashing, iridescent shapes like flames -- ever-flickering vestiges of the slowly evolving consciousness beneath them."
    
    
    2. For an elaboration of these thoughts, see Barfield, 1965.
    

    BIBLIOGRAPHY

    Barfield, Owen. "The Harp and the Camera." In The Rediscovery of Meaning and Other Essays. Middletown, Conn.: Wesleyan University Press, 1977.

    Barfield, Owen. Saving the Appearances. New York: Harcourt, Brace and World, 1965.

    Barlow, John Perry. "The Economy of Ideas: A Framework for Rethinking Patents and Copyrights in the Digital Age." Wired vol. 2, no. 3 (March, 1994).

    December, John. "Characteristics of Oral Culture in Discourse on the Net." Paper presented at the twelfth annual Penn State Conference on Rhetoric and Composition, University Park, Pennsylvania, July 8, 1993. Available at http://www.rpi.edu/Internet/Guides/decemj/papers/orality-literacy.txt.

    Edgerton, Samuel. The Renaissance Rediscovery of Linear Perspective. New York: Basic Books, 1975.

    Fowler, Robert M. "How the Secondary Orality of the Electronic Age Can Awaken Us to the Primary Orality of Antiquity." Interpersonal Computing and Technology vol. 2, no. 3 (July, 1994). Originally presented at the Annual Meeting of the Eastern Great Lakes Biblical Society, April 14-15, 1994.

    Havelock, Eric A. Preface to Plato. Cambridge, Mass.: Harvard University Press, 1963.

    Landow, George P. Hypertext: The Convergence of Contemporary Critical Theory and Technology. Baltimore: Johns Hopkins University Press, 1992.

    Lanham, Richard A. The Electronic Word: Democracy, Technology, and the Arts. Chicago: University of Chicago Press, 1993.

    Ong, Walter J. Orality and Literacy: The Technologizing of the Word. New York: Routledge, 1982.

    Sterling, Bruce. "The Digital Revolution in Retrospect." Communications of the ACM vol. 40, no. 2 (February, 1997).



    *** About this newsletter

    Copyright 1997 by The Nature Institute. You may redistribute this newsletter for noncommercial purposes. You may also redistribute individual articles in their entirety, provided the NetFuture url and this paragraph are attached.

    NetFuture is supported by freely given reader contributions, and could not survive without them. For details and special offers, see http://netfuture.org/support.html .

    Current and past issues of NetFuture are available on the Web:

    http://netfuture.org/

    To subscribe or unsubscribe to NetFuture:

    http://netfuture.org/subscribe.html

    Steve Talbott :: NetFuture #44 :: April 2, 1997

