                                    NETFUTURE
                       Technology and Human Responsibility
    Issue #39      Copyright 1997 O'Reilly & Associates       January 29, 1997
                Editor:  Stephen L. Talbott (stevet@netfuture.org)
                         On the Web: http://netfuture.org
         You may redistribute this newsletter for noncommercial purposes.
    *** Quotes and Provocations
          From Universal Access To Universal Paranoia
          The Walmart Syndrome
          Who Is Embedding Whom?
          What Kind of Company Do You Work For?
    *** Dorothy Denning on Cryptography Export Controls
          Software Is Not Speech
    *** A Note on the Next Fifty Years (Stephen L. Talbott)
          Traffic Light Luddism?
    *** About this newsletter

    *** Quotes and Provocations

    From Universal Access To Universal Paranoia

    Did you notice the Great Divide over which we recently passed? Well, I didn't either--not the exact moment of crossing. But I do know this: my email box now seems flooded with missives about the bad guys and all the dangers out there. We've got to figure out how to protect ourselves against spammers, how to prevent the keywords we offer to search engines from showing up in some company's marketing profiles of us, how to keep our electronically recorded signatures out of the hands of undesirables, how to fight off the censors, how to maintain online site security, how to forestall the collapse of open standards before the onslaught of commercial behemoths....

    Not so long ago the champions of the Net had other fish to fry. Everything was a matter of freedom, openness, and the miracle of universal access. The magic of the Net was that nobody could keep us out, censorship would automatically be routed around, we were entering a new era of freedom and creative anarchy, and despotic regimes of every scale and variety were about to fall.

    How did we get from there to here? Well, if you think about it, "there" was already "here." The old stories and the new ones mutually imply each other, and their combined moral is the one I offered in my series on privacy: to the extent we are willing to present ourselves to each other as bodies of interacting data--and the assimilation of our business and recreation to the Net strongly pushes us in this direction--to that extent we become subject to all the impersonal openness of data and at the same time to all the murky countermeasures designed to protect us against the unwanted consequences of this openness.

    What we are finding is that it is impossible to map the patterns and institutions of existing society to the patterns of interacting data. This is usually stated as, "The laws of physically based communities cannot be transferred to cyberspace." But to celebrate the revolutionary implications of this fact is to disregard the fact's core: neither can our humanity be transferred in any full sense to cyberspace, or to the fields of interacting data.

    Our only hope is almost the opposite of the one the revolutionaries latched onto. It is the hope that we will so greatly strengthen our non-data interactions and offline institutions that their additional muscularity and resilience can anchor the centrifugal and dissipative tendencies of our online, more or less data-like projections of ourselves.

    The Walmart Syndrome

    Just when I thought I couldn't take another "local community fights Walmart" story, a new wrinkle occurs: activists are now expressing alarm about Walmart's plans for a massive invasion of the online retailing scene, where the company's almost unlimited capital gives it an "unfair" and dominating advantage.

    But doesn't it really come down to this: if the local (or online) community really doesn't want the Walmart, then it needn't worry. The store, if built, will go out of business for lack of customers.

    No, I don't have any problem with activism directed against particular business concerns. In fact, I'm often rather sympathetic to it. But as long as the activism remains focused on one-shot decisions by corporate "bad guys" and does not reckon with the larger context of continuing, pervasive community choice, nothing much will ever be accomplished, regardless of which way the immediate decision goes.

    We can't consign our responsibility in such matters to a compartment labeled "current activist campaigns." Our primary activism is our daily activity.

    Who Is Embedding Whom?

    No, Bill Gates' new home is not the only market for embedded computers. These small slivers of guiding intelligence already surround you, from your electric razor to your range and oven. But there's much more to come as every conceivable device is assimilated to the logic of the slivers. Embedded computers, according to David Kline (Wired, October, 1996), will stitch together "a truly universal Internet" in which "the common artifacts of daily life--a car, a TV, a CD player, a phone, a piece of office equipment, a natural gas meter, a PC--are all connected via cheap automated software in a global network."

    Kline himself seems enthusiastic about all this:

    Here, at last, is a vision of the Internet for the masses, in their hundreds of millions and ultimately in their billions. In it, the Net is no longer just a publishing or an entertainment or a personal communications medium, but rather a fundamental and indispensable engine driving all social and economic life. It's an industrial medium that enables automated monitoring and reporting on factory-floor production; a home security and emergency response medium far more reliable than today's phone-based 911 system; a medical medium through which patient treatment plans are automatically routed to relevant providers; a consumer appliance and office equipment medium that checks the status of devices and initiates electronic repairs; a utility management medium in which power usage is read and managed remotely. You name the application, the Net will be essential to it.

    Here, at last, is an Internet finally set free from its PC-centric straitjacket--a cyberspace transformed from just another platform into an omnipresent glue that binds the whole of society, with all its trillions of daily social and economic interactions, into a truly connected civilization.

    NETFUTURE readers will be all too familiar with the feel of this rhetoric--the conviction that the next technological breakthrough is what true global transformation waits upon, and that proper physical connections are all that societies lack if they would be bound into a harmonious unity.

    Perhaps it is true, as Kline suggests, that the Internet will be set free from its straitjacket (whatever that might mean), but what of us? How will we navigate freely through our lives once we are beset by this "omnipresent glue"?

    It is amazing, but not uncommon, that a pundit should write of the Internet's freedom while never saying a word about ours. That the globalized system logic to which we adjust those "trillions of daily social and economic interactions" might constrain our future choices and restrict human expression apparently just never occurs to him.

    Could it be that Kline has yet to interact with an automated telephone answering system?

    What Kind of Company Do You Work For?

    There are two kinds of company: those that pursue worthwhile ends, applying financial discipline to help them meet their goals, and those whose end is profit, to be achieved by supplying whatever products and services do the job.

    However difficult it is to discern shades of gray as one sort of company changes into the other, the difference in principle between the two alternatives is nearly one of black and white. It is the difference between people managing the company and the company managing people. I like the analogy of the jogger who begins running for his health, but by degrees becomes obsessed with his "numbers" until finally he dies of a heart attack. Here a complete reversal occurs in the relation between health and numbers: at first the numbers are a tool for regulating the pursuit of health, but then health is sacrificed to the numbers.

    The radical yet subtle distinction between the two cases is not one that many businesses have tried to understand.

    Which kind of company do you work for? Here's one indicator. If your company is forever racing to get a product out the door just ahead of its competitors, then it is almost certainly the second sort of company. Look at it this way: if the worthwhile end your company is aiming for will in any case be achieved by a competitor a few months down the road, then the "margin of good" to which all your corporate resources are devoted is pitifully small.

    And why is the other guy your "competitor" anyway, if in fact he's trying to fill the same niche you are? If your company's primary concern were the good, and not the numbers, you would view any other company seeking the same goal as an ally.

    If, on the other hand, your competitor is pushing a less worthy solution to a problem for which you've found the socially beneficial answer, why your urgency to reach the market first? You should pull back and connect with your market--those who will choose your product as a matter of principle. This may be a small market, but it will be the only one that matters to your goal, since no enduring social good arises from consumers who are unaware of, and do not care about, the value of what they are buying.

    Moreover, be honest: when your product finally does hit the street and you're looking for the next thing, how much effort do your marketing people put into identifying what would be healthy for society, and how much into simply recognizing where the hell the market's going?

    "But what you're saying would make life impossible for my company. We couldn't survive." Just so. But as for you, there are other things you can do upon the face of the earth. The things truly worth aiming for are never things that someone else might beat you to.

    In almost all fields one finds at least a few conviction-driven companies. They tend to be small, because conviction-driven living--and working and purchasing--are not a prominent part of our society. But that, of course, is up to people like you and me--and depends upon whether we prefer cultivating a healthy society or instead playing our part in driving society toward a collective seizure of the heart.



    *** Dorothy Denning on Cryptography Export Controls

    Often the assumptions underlying an argument are freighted with much more significance than is the immediate bone of contention. This, I suspect, is true of the debate over export controls on cryptographic programs and devices. With government spooks arrayed on one side and libertarians and commercial interests straining at their binary bits in opposition, the truly ominous development may be hidden in the natural ease with which the libertarians have secured their legal case using the seemingly uncrackable key, "freedom of speech." Who in the United States would dare try to pick that sacred lock?

    Well, NETFUTURE reader Dorothy Denning for one. Denning has thirty years of programming experience behind her and is one of the most respected deans of the Net. She happens also to be a world-class expert on cryptography. In a guest editorial for USA Today (December 26, 1996) she went straight to the point:

    Free speech is one of our most fundamental and cherished rights. But let us be wary of granting to machines, and the languages used for controlling them, the status of human beings.

    A computer science professor at Georgetown University and author of the standard text, Cryptography and Data Security, Denning expanded on her views at the RSA Conference on January 28.

    Denning concludes her paper this way:

    I am concerned about the long-term implications of attempting to treat software generally as fully protected speech. Software has the potential of being highly destructive. Witness the Morris worm, computer viruses, or today's concerns about attacks on information infrastructure. Future viruses might someday bring down the power grid or direct the production of weapons of mass destruction. Do we really want to consider distribution of such software as free speech? Surely no one would say "logic bombs," or viruses should have the same protection as political or religious speech, even if an author claimed to be making a political statement. Yet treating software as fully protected speech could lead us down that path.

    Export control regulations express judgments that exporting certain technological artifacts [is] harmful to the national well being and that the regulations make an important difference. It is reasonable and legitimate to question whether these regulations are serving the country. However, let us address that issue directly and squarely. Let us not muddle the issue by sweeping functional artifacts into the First Amendment. Free speech is one of our most fundamental and cherished rights. We should be cautious in applying it to the distribution of computer programs.

    I would add--as Denning has already suggested--that the attempt to smuggle software inside the sacred halo of free speech has implications far beyond the export control issue. To the extent that operational artifacts constructed of mathematics and logic are regarded as human speech, we have forgotten what is most valuable and worth protecting in ourselves. One might already have suspected that zealous halo-polishing in a nation where words are so extraordinarily cheap signifies our uneasy awareness that what really counts--what makes the right of free speech so vitally important--is somehow slipping through our fingers.

    It's one thing to say "Here I stand" when a martyr's bonfire might consume your position at any moment, or when, as in Tiananmen Square, a tank might roll over it. But it's quite another when "courageously breaking taboos" is the quickest way to sell books or launch a mass movement. Free speech is being born in the first case; in the second it looks suspiciously like dying of trivialization.

    Speech is worth defending because it is the bearer of meaning--which is to say, because it expresses the interior, qualitative content of human consciousness. Recognizing this, we will not only defend the right of utterance where appropriate, but also concern ourselves with the beauty or ugliness, depth or superficiality, fidelity or obscurantism, of the meaning. It was, after all, the world of meaning--of human significances--in which we grasped the legal right of free speech in the first place. And the right can be sustained only as long as we continue to sense a certain life-or-death gravity in our words--not something, incidentally, that the Net has so far encouraged.

    When concern about the legal right becomes such an obliterating obsession that we can advantageously link even the export of encryption devices to it, then we have clearly lost our sense for where the importance of free speech lies, and the right itself is therefore at risk. Inflate a right so that it can be used for everything, and it will, in the end, prove useful for nothing.

     *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *
    I have cited Denning's paper by kind permission of the author.

    To trace many questions of language and meaning to their roots, one can look at the ongoing work in artificial intelligence. AI researchers have from the beginning misconceived the mathematically and logically instructed behavior of artifacts as inherently (rather than derivatively) meaningful. I have dealt with logic, meaning, and artificial intelligence in the chapter, "Can We Transcend Computation?" in The Future Does Not Compute. It is available online.



    *** A Note on the Next Fifty Years
    From Stephen L. Talbott (stevet@ora.com)

    In honor of the first fifty years of computing, the editors of Communications of the ACM have put together an impressive February issue celebrating a range of hopes for "The Next 50 Years". Some 43 representatives of computer technology and culture were invited to submit their hopes in the form of brief essays. I was privileged to be one of these, and have reproduced my essay below (by permission).

    Before long, the table of contents (at least) should be available at the ACM site. However, if you want the full, undiluted dose of opinion--wild, woolly, and otherwise--about our electronic future, pick up the February issue (vol. 40, no. 2) for yourself.

                              ASLEEP AT THE KEYBOARD
    I hope that during the next fifty years we will awaken to the challenges and risks of the intelligent machinery to which we increasingly defer. Just as, fifty years ago, only "eccentrics" worried about their contributions to the local landfill or the materials they spread over their lawns, so today it seems quixotic to suggest that we must each accept personal responsibility for the quality of our daily interactions with computing devices. But it is critically important that we do so.

    The global mesh of tightly woven, mechanical logic constrains us ever more closely, a fact we do not even notice unless we are uncommonly awake. From the way we are ushered through metropolitan traffic by automated light signals (very different from, say, walking through a crowd in the park), to the way we transact our financial business at ATMs (not at all like approaching friends or neighbors for a small loan), to the way we compose and scroll through email messages keystroke by keystroke (quite unlike the expressive give and take of two persons facing each other)--in all such ways we accustom ourselves to mechanized interactions.

    It is not that traffic lights are bad. As little as they may be heeded, I still would not want to try crossing Manhattan without them. But we need to ask how we are conditioned by the machines with which we coexist so easily, and how we can counterbalance our one-sided relations to them.

    I mention traffic lights because they probably seem a trivial illustration of intelligent machinery. Yet even so, strange things happen as the lights orchestrate our complex, collective movements through the city. Perfectly decent people--you and me!--spew more or less audible streams of vitriol toward anonymous folk who innocently interfere with our efficient passage through the mechanized system.

    This does not so readily happen in the city park, even though a heavy crowd might pose similar inconveniences. In the park we have to do more directly with persons rather than with a system, so our reactions differ drastically--despite the inefficiencies of movement. In fact, the absence of mechanically mediated efficiency as the overruling logic of the park no doubt accounts for some of the difference in our responses.

    If it requires a certain waking up to realize that behind the various mechanisms of the traffic jam there stands a society of persons--and even more waking up to hold onto the fact--the demands upon us are many times greater as we yield ourselves to the mechanized efficiencies of a globally networked world. These efficiencies are now producing an orgy of celebration and excited anticipation--which is already a bad sign, because only those who do not realize the difficulty of remaining awake while being cradled and rocked by technology could feel such unqualified anticipation.

    Efficiency, perceived as an end in itself, is always anti-human. The only human question about efficiency is, "Efficiency toward what end?" If the end is, say, to eliminate a human population, uncommon efficiency does not look like a positive achievement. More to the current point: if the end is to substitute an automated exchange for the meeting of persons, then efficiency is at the very least an extremely high-risk achievement. Space permits only one further example.

    Banks now compete with each other to promise loan approvals in the shortest time--one hour is often advertised. They can do this because the necessary data about you and me is available online, and because software can analyze that data in the wink of an eye. It is no longer necessary for us to meet with a loan officer.

    What is lost? Almost everything that counts--most importantly, the opportunity for that officer, after sizing me up face to face, to make a judgment that might not yet be justified by the data of my past. Perhaps he recognizes in me a future trying to be born--a future that my current plight is conspiring to help make possible. A future that his gesture of trust, his willingness to put himself at risk, might encourage. A future that is not the past, and therefore is not available to the software.

    What ought banking to be about, if not assisting at the birth of new human possibilities? Even if you argue that such opportunities are few, surely making them occur as often as possible is what our lives are all about!

    Some people, presented with this example, say that the software is better because it prevents prejudice. This response tells us just how extreme the temptation to fall asleep has become today. We have adapted to the mechanisms of our interaction with almost complete forgetfulness of the human society those mechanisms were originally supposed to serve. For, after all, this response is like saying, "People suffer abuse in families, so we should abolish the family."

    To exchange our human potentials for the impartiality and infallibility of the machine is to give up on society's problems. I do not doubt at all that many of our most unyielding social dilemmas owe their stubborn persistence to the general feeling that they must be attacked first of all with programs--government, computer, or otherwise--and not with more wakeful behavior on our part. The existence of the programs enables us to see only mechanisms where we might have recognized personal responsibility.

    So let us hope that the next fifty years will produce the kind of wide-awake society that consciously masters its machines and insists on human exchange even through all the mechanisms--whether traffic lights, teller machines, or the brave new devices of the coming decades.


    *** About this newsletter

    Copyright 1997 by The Nature Institute. You may redistribute this newsletter for noncommercial purposes. You may also redistribute individual articles in their entirety, provided the NetFuture url and this paragraph are attached.

    NetFuture is supported by freely given reader contributions, and could not survive without them. For details and special offers, see http://netfuture.org/support.html .

    Current and past issues of NetFuture are available on the Web: http://netfuture.org


    To subscribe or unsubscribe to NetFuture:

    Steve Talbott :: NetFuture #39 :: January 29, 1997

