                                    NETFUTURE
    
                       Technology and Human Responsibility
    
    --------------------------------------------------------------------------
    Issue #53       Copyright 1997 Bridge Communications         July 16, 1997
    --------------------------------------------------------------------------
                Editor:  Stephen L. Talbott (stevet@netfuture.org)
    
                         On the Web: http://netfuture.org
         You may redistribute this newsletter for noncommercial purposes.
    
    
    CONTENTS:
    *** Quotes and Provocations
          Does Your Computer Have You in Its Grip?
          Intelligent Agents and the Economics of Meaninglessness
          The Net As Womb
          The Phantom Pursuit of Computer Literacy
    
    *** Toward the Great Singularity (Part 3) (Stephen L. Talbott)
          Are the mystics of the Net yearning for the past?
    
    *** About this newsletter
    

    *** Quotes and Provocations

    Does Your Computer Have You in Its Grip?

    If you're studying the human-computer interface and are looking for a research topic, I have a suggestion for you. Latch onto a few computers and fiddle with the operating system internals so that you can collect data about the timing of certain user actions in relation to various internal states of the computer. For example, you could have the system record the internal clock time of each keystroke or mouse click that follows a substantial pause (say, five seconds).

    You might expect that the collected data would be randomly distributed; that is, initiatory keystrokes recorded close to an even second or an even minute, according to the internal clock, would be no more numerous than those recorded close, say, to twenty-three seconds past the minute. (This assumes, of course, that the clock time is not displayed on the user's screen.)

    My prediction: with some people you will find clusterings that deviate too far from a random pattern to be easily explained. In other words, you will find that human actions we assume to be freely initiated can in fact be "entrained" by the computer in ways not currently easy to understand.

    The key requirement of the experiment is to record human actions whose initiation is independent of any identifiable system prompting, and to correlate those actions with various invisible transitions in the internal logical states of the computer, looking for more-than-random correlations. The experiment should not be run in a laboratory or any other special setting, but rather should track users in the course of their normal jobs. Many individuals with different character traits should be checked. Also, the experimental subjects should not know that the timing of their actions is at issue.

    A lot of possible correlations may need to be explored, and getting good data may require some ingenuity. Needless to say, if my prediction should turn out to be correct, there would be profound implications.
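    The tallying side of the proposed experiment can be sketched in a few lines. The sketch below is illustrative only: the function names, the simulated event log, and the choice of a chi-square test against a uniform spread of sub-minute offsets are my assumptions, not anything specified in the text; only the five-second pause threshold comes from the proposal above.

    ```python
    import random

    def initiatory_events(timestamps, pause=5.0):
        """Keep only events preceded by at least `pause` seconds of inactivity
        (the "initiatory" keystrokes the experiment is interested in)."""
        return [t for prev, t in zip(timestamps, timestamps[1:]) if t - prev >= pause]

    def chi_square_uniform(offsets, bins=60):
        """Chi-square statistic for sub-minute offsets against a uniform spread."""
        counts = [0] * bins
        for off in offsets:
            counts[int(off) % bins] += 1
        expected = len(offsets) / bins
        return sum((c - expected) ** 2 / expected for c in counts)

    # Hypothetical log: event times in seconds since some epoch, here simulated
    # with exponentially distributed gaps (mean 5 s) in place of real user data.
    random.seed(1)
    t, log = 0.0, []
    for _ in range(5000):
        t += random.expovariate(0.2)
        log.append(t)

    events = initiatory_events(log)
    offsets = [e % 60 for e in events]   # position within the clock minute
    stat = chi_square_uniform(offsets)
    # With 60 bins (59 degrees of freedom), a statistic far above roughly 78
    # (the 5% critical value) would suggest the clustering predicted above.
    print(len(events), round(stat, 1))
    ```

    On truly random data the statistic hovers near the degrees of freedom; the prediction, in these terms, is that for some users it would not.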

    Intelligent Agents and the Economics of Meaninglessness

    The Economist recently pointed out the bind that developers of intelligent shopping agents are in. When comparison shopping becomes widespread, prices will be driven down to the uniform levels typical of commodities, and once prices are the same everywhere, there is no need for an intelligent agent; any store will do. In other words, when agents do not work, they are useless; when they do work, they might render themselves unnecessary. (June 14, 1997)
    I would add that intelligent shopping agents fit very well into what you might call "the economics of meaninglessness," because the consumer's purely numerical interest in the lowest possible cost and the producer's interest in maximum profit are allowed to eclipse everything meaningful about the transaction. The question, "What am I paying for?" in all its qualitative dimensions, tends to fall out of the picture. It doesn't necessarily do so, but that is the easiest way for things to "drift."

    We speak of the new service economy, but, actually, every product you and I buy has always come embedded within a matrix of services (or disservices) to us, to the larger society, and to the world in which we live. Personally, I pay up to twice the "friction-free" cost even for a commodity like potatoes, because I care about how healthily they are grown, how the farmer's cultural methods affect the soil, and what kind of organization is selling them, among other things.

    Fortunately, the need to locate economic transactions within a meaningful context is beginning to make itself felt on the Net. An earlier issue of the Economist, citing the proliferation of mediating services, had this to say:

    The meteoric rise of such go-between services is ironic, because just a few years ago most people thought the Internet would spell the end of middlemen. "Disintermediation" was the mantra; with all the information anyone could want just a mouse-click away, who needed brokers? The Internet "will extend the electronic marketplace and make it the ultimate go-between, the universal middleman," wrote Bill Gates, Microsoft's boss. Software, he predicted, would search the networks, locating information and collating it to help you make a buying decision, even haggling on your behalf with the seller's software. "Often the only humans involved in a transaction will be the actual buyer and seller .... This will carry us into a new world of low-friction, low-overhead capitalism, in which market information will be plentiful and transaction costs low. It will be a shopper's dream." (May 10, 1997)
    To which the article's writer replied, "Not yet. Perhaps not ever."

    The paradigm for contextless economics, where (in Gates' words) the only humans involved are the actual buyer and seller, is probably the deal between the prostitute and her john -- two isolated people who never make genuine contact, and whose transaction is kept shielded from the larger web of social relationships and responsibilities, as if it had no bearing on them. But, of course, the bearing is all the more acute precisely because of the destructive pretense that it is absent.

    The Net As Womb

    "We have come to believe that convenience is necessity."

    Noelle Oxenhandler, writing in The New Yorker (June 16), takes that unpromising, if accurate, observation and weaves around it one of the most remarkable articles I have seen in a long time. She shows how "modern life has made waiting a desperate act," depriving us of much of the essential content of our lives. This, of course, is a common observation, but she demonstrates its truth with a profundity that will not leave you untouched.

    Along the way, she notes that the fall from grace is a fall from simultaneity; it is "exile from the garden where the fruit is at hand and to desire is to eat." While she does not have much to say about the Net, she provides, I think, an important metaphor for understanding high technology today:

    Suffering the curse [of the fall from grace], is it any wonder we take such delight in the presto of our own inventions? With each movement toward the instantaneous, we put a toe back in Paradise, that place where to wish is to have. With each movement toward simultaneity, we are back, however fleetingly, in the womb, where we don't even need to wish in order to be warm, held, fed.
    I can't begin to convey the rich texture of Oxenhandler's many insights, but this one excerpt offers a taste:
    Some of my most pleasurable moments have come when I allowed myself to sink into the feeling that something was taking place without -- or in the aftermath of -- my conscious intervention: to sit in the sun, reading a book, while under a damp towel and the shade of my chair a ball of dough rose in a bowl or six glass jars of hot milk thickened into yogurt; to lie on my back in the grass watching the clouds, my wet mop beside me, while inside the house the water shapes dried on clean, lemony wood floors. The precious "while" that follows when you have done your part, and surrendered the work of your hands to powers as great as sun, air, and time.

    In the same way, I used to love the feeling of dropping a letter into the box. For several days, the letter-in-transit would hover around the edges of my consciousness. This delay was an intrinsic part of the pleasure of letter-writing. It had a special tense all its own: when the "must do" turns into the "just done." And, as the letter hovered, I also savored a kind of prescience in relation to my friend. During those two or three days, I knew, at least in some small measure, what must befall her: a letter in the box! As Iris Murdoch has written, "The sending of a letter constitutes a magical grasp upon the future." But now the old magic has given way to the new. And though a fax or an E-mail may lie in wait for its recipient, it nonetheless gets from here to there in a matter of moments, and its waiting has none of the sealed mystery about it that attends a letter in its envelope....

    What we lose ... is the richness of the intermediate zone -- the mysterious lushness of the latent, the almost, the not-yet. This is why, for some women, pregnancy is a uniquely redemptive experience, the one time in their lives when they feel released from the tyranny of Do It Now. This is the giddiness, the sense of permission, that pregnancy bestows, when just to be -- to eat, breathe, and sleep -- is to do.

    (Thanks to John Thienes for bringing this article to my attention.)

    The Phantom Pursuit of Computer Literacy

    The New York Times ran a piece on computer literacy (July 7) that began by quoting Todd Oppenheimer to the effect that "cutting other school subjects to make room for computers may be educational malpractice." The article continued:
    That is beginning to seem like an understatement. What, after all, does computer literacy consist of? Unlike ordinary literacy, it is not even required to obtain other forms of knowledge. One can be mathematically literate, literarily literate, musically literate and historically literate without knowing a single command in a computer language.

    Of course, real computer literacy requires training in logic, in clear thinking, in the values of planning, in economy and efficiency, in the ability to manage information and correct errors [for none of which, I might add, is a computer required]. But, right now, the term computer-literate usually just refers to the ability to use a few commercial products and perhaps to touch-type smoothly.

    The article cites Joseph Weizenbaum on the point that computer languages are not like natural languages; there is no particular advantage in learning them while young.

    Look for the press's coverage of the technology-in-education backlash to reach a crescendo with the beginning of school in the fall.

    (Thanks to Susan Barnes for passing this item along.)

    SLT



    *** Toward the Great Singularity (Part 3)
    
    From Stephen L. Talbott (stevet@netfuture.org)
    

    Quite apart from the general millennium madness, we've heard in recent years of the end of modernity, the end of history, the end of science, the end of the old world order, and, from the prophets of the Great Singularity, the end of the human era. We also hear from many sides that "we are not alone." UFOs, crop circles, alien abductions, angels of light, channeled gurus -- these and other signs of visitation have impressed startling numbers of our contemporaries.

    Nor has more strait-laced society escaped the general mood. Try to find a popular book on frontier science that does not strive, in its tone and word choice, toward a kind of faux-mystical profundity -- this despite the typically dead, abstract nature of the actual content.

    All these symptoms suggest a kind of expectancy, a consciousness of significant thresholds, that has gripped us in recent years. The idea of a critical threshold lying on the fine line between order and chaos provides much of the grist for complexity theorists; the new catastrophism of the astronomers, with its wary eye toward errant comets and asteroids, invites us to think open-endedly about our future (or results from such thinking); and, via a revived geological catastrophism and the theory of punctuated equilibria, there is even a tendency to reconceive the past in the same open-ended and "expectant" spirit. A kind of secularized eschatological mood prevails.

    This mood itself is something profoundly new, and therefore fulfills its own sense of expectancy after a fashion. But I think we need to look more deeply. In the face of broad, cultural shifts, it is always good to consider, not only the surface, kaleidoscopic changes in our working ideas, but also the evolving weave and texture of human experience -- that is, the changing qualities of our thinking and perceiving, the shifting boundary between self and world, and the progress of our developing self-awareness. Much of every historical movement occurs beneath the conscious play of ideas.

    The fact is that, due to this underlying evolution of consciousness, some ideas become nearly inevitable for us, while others become nearly impossible. For example, the historian Herbert Butterfield says this about the birth of modern science:

    It is fairly clear in the sixteenth century, and it is certain in the seventeenth, that through changes in the habitual use of words, certain things in the natural philosophy of Aristotle had now acquired a coarsened meaning or were actually misunderstood. It may not be easy to say why such a thing should have happened, but men unconsciously betray the fact that a certain Aristotelian thesis simply has no meaning for them any longer -- they just cannot think of the stars and heavenly bodies as things without weight even when the book tells them to do so. Francis Bacon seems unable to say anything except that it is obvious that these heavenly bodies have weight, like any other kind of matter which we meet in our experience. (1)
    Butterfield goes on to mention "an intellectual transition which involves somewhere or other a change in men's feelings for matter." He also remarks that, whereas the intellect of man had once yearned "to discover the evidence of divine caprice in the world," it was now (in the seventeenth century) the mind's aspiration "to demonstrate divine order and self-consistency."

    Likewise, Erich Heller sees the history of humanity as "a repository of scuttled objective truths, and a museum of irrefutable facts -- refuted not by empirical discoveries, but by man's mysterious decisions to experience differently from time to time." (2)

    The best place I know to go for a detailed and sensitive characterization of the evolution of consciousness -- as opposed to the history of ideas -- is the work of Owen Barfield. (3) I cannot go into that work here, beyond a bare mention of the following. When Barfield looks at our assumptions about thinking, here, too, he notes historical changes operating beneath the level of reason and logic. We today cannot help regarding our thoughts as "things" in our brain; our long-established habits of mind and the unexamined meanings of our words almost force this upon us, and our assumptions are now woven inextricably through our experience of ourselves and the world. But the situation several hundred years ago was the reverse of this: it was then impossible to picture thoughts as the product of a stimulated brain, whereas it was quite natural to picture the brain as the product of thought. (4)

    Until we recognize how the possibilities of our thinking are culture-bound -- until we gain some understanding of the evolution of consciousness operating beneath the surface glitter of clashing ideas -- we cannot be wholly free in our thinking. I am convinced, for example, that the cognitive scientist's triumphal confidence in the computational paradigm as a solution to the long-standing mind-body conundrum results much less from new, revolutionary understandings than from the nearly universal inability today to experience the earlier terms of the problem. It is much easier to explain away, or "implement," a mind whose essential knowing activity has faded from direct experience than it is when that activity presents itself with Cartesian insistence. But cognitive scientists, with little apparent interest in the historical dimensions of their subject matter, do not much inquire into such things.

    That, in any case, is a story for another time. What I do want to suggest now, however -- and it cannot be more than a bare suggestion -- is that a several-hundred-year "hiatus," during which the thinking activity as such has largely been lost from view, is slowly coming to an end. And the first consequence is that we can no longer imagine thinking to be "bottled up" in the brain in quite the customary way. Thinking, vividly experienced, is self-transparently recognized to be placeless -- that is, to be "out there," in the world, fully as much as "in here," in my subjectivity, because it is in neither "place." (Do the laws of nature exist as ruling concepts in our consciousness or as powers in the world? Both, because the effective powers are the ruling ideas.) At the same time, we will unavoidably begin to realize that the brain is to thinking as the eye is to light; the brain mediates, but does not create, our thoughts.

    To enter into an activity of thinking that has long been more or less unconscious -- to embrace our role as knowers instead of disowning it as somehow alien to the known (!) universe -- this is indeed to wake up. So it turns out that those who look forward to the Great Singularity have some justification. We really do stand at a threshold, and there really is a potential for a historical awakening.

    But note what the proponents of the Singularity have done with the situation: the awakening they hail is not our awakening, but rather that of our machines; and the thinking that transcends the individual brain becomes, in their account, program logic, which is, of course, independent of any particular physical implementation. They reduce the universality of thinking to the empty universality of abstraction, and they reduce the muscular activity of thinking to the algorithmic structure of thinking's symbolic end-products.

    In other words, the Singularity celebrants represent the past rather than the future. Every threshold offers us a choice, and their choice is not to step across the divide. After all, the drive to reduce the world to abstraction (highly effective abstraction, to be sure) is nothing more than the ruling essence of the very "hiatus" from which we need to emerge. This hiatus, which has given us scientific materialism, began in the seventeenth century with the distinction between primary and secondary qualities -- the beginning of a powerful urge toward mathematization and abstraction. It has led us, finally, to a world in which we ourselves -- as opposed to our machines -- seem to have no place.

    And what will it be like if we are truly willing to step across the threshold? Who could say with any confidence? But I do believe that our central challenge is to take hold of our thinking (with genuine thanks for the discipline it has received at the hands of science) and make it an instrument for penetrating the whole world -- the world of the knower as well as of the known, and the world of qualities as well as the much thinner and more abstract world of quantities. What such a science of wholeness might look like is sketched wonderfully well by Henri Bortoft in his new book, The Wholeness of Nature: Goethe's Way toward a Science of Conscious Participation in Nature (Hudson, N.Y.: Lindisfarne, 1996).

    Meanwhile, we should not be surprised at the impulse, all too evident in Singularity enthusiasts, to carry the limitations of the past to their extreme. It may be that the first and most visible responses to every major historical transition are little more than the uncomprehending gestures of the past. To one degree or another the incomprehension doubtless shrouds us all.

    But I think we have at least one very clear choice: to look for essential change in the progress of our machines, or to look for it in ourselves.

    All this has been more to propose a line of criticism than to justify it in any detail. I hope before long to offer further commentary under the heading, "The Trouble with AI."
    
    1. Herbert Butterfield, The Origins of Modern Science: 1300-1800 (New York: Free Press, 1957), pp. 130-31.

    2. Erich Heller, The Disinherited Mind (New York: Barnes and Noble, 1971), p. 270.

    3. See especially Owen Barfield, Saving the Appearances (New York: Harcourt, Brace and World, 1965). For an introduction to Barfield's work, you might try The Rediscovery of Meaning and Other Essays (Middletown, Conn.: Wesleyan University Press, 1977).

    4. See Owen Barfield, Speaker's Meaning (Middletown, Conn.: Wesleyan University Press, 1967).
    



    *** About this newsletter

    Copyright 1997 by The Nature Institute. You may redistribute this newsletter for noncommercial purposes. You may also redistribute individual articles in their entirety, provided the NetFuture url and this paragraph are attached.

    NetFuture is supported by freely given reader contributions, and could not survive without them. For details and special offers, see http://netfuture.org/support.html .

    Current and past issues of NetFuture are available on the Web:

    http://netfuture.org/

    To subscribe or unsubscribe to NetFuture:

    http://netfuture.org/subscribe.html.

