                                    NETFUTURE
                       Technology and Human Responsibility
    Issue #59       Copyright 1997 Bridge Communications      November 4, 1997
                Editor:  Stephen L. Talbott (stevet@netfuture.org)
                         On the Web: http://netfuture.org
         You may redistribute this newsletter for noncommercial purposes.
    *** Editor's Note
          NETFUTURE needs a new home
    *** Quotes and Provocations
          Computer-prompted Senate?
          Teaching Children To Stalk the Wild Information
          From Couch Potato to Desk Potahto
          Does Technology Set Us Free?
    *** Distributing Big Brother's Intelligence (Stephen L. Talbott)
          It was easier when we knew who the enemy was
    *** About this newsletter

    *** Editor's Note

    Tortoise-like, NETFUTURE has passed the 4000-subscriber mark. Insignificant, of course, by mainstream media standards, but the steady progress (mainly by word-of-mouth) is nevertheless satisfying. If I were not committed to preserving subscriber privacy, I would share with you a snapshot of the readership. NETFUTURE folks are key figures in academia (especially media and technology studies), policy-making, education, journalism, and of course, the high-tech industry. Perhaps most gratifying has been the number of computer hardware and software engineers who have written to me about their efforts to transcend the narrow terms of the standard computer-science education and bring a more human-centered perspective to their vocation.

    An immediate need: O'Reilly & Associates is discontinuing its list server. I have to find a new home for NETFUTURE as soon as possible -- preferably a stable one with a long life expectancy, a reliable list server, and first-class technical support. (Since there is only one posting every week or two, the load imposed by NETFUTURE is not heavy.)

    If your organization might be able to help, please contact me (stevet@netfuture.org). Thanks.



    *** Quotes and Provocations

    Computer-prompted Senate?

    American readers doubtless noted the recent flap concerning personal computers on the Senate floor. Senator Dianne Feinstein, defending the prohibition rule, argued that "when you're speaking on the Senate floor, you should be speaking from a lifetime of experience." Senator Robert Torricelli envisions the electronic notebook leading to "staff instructions on voting and the scripting of all remarks."

    The online crowd's insulted outrage was, of course, predictable. Brock Meeks, drawing upon the argument-settling powers of the word "information" (see NF #58), was able to dispose of the issue with a single, facetious sentence:

    God forbid our senators should tap into a wealth of information to make informed decisions. (Upside, November, 1997, p. 81)
    As if the problem were that our Senators and their overwhelmed staffs are information-deprived. And as if, in crucial decision-making, there were no place for the conversational engagement of unlike minds, navigating under their own power and consciously working through issues in the context of declared values, priorities, and meanings -- none of which, unfortunately, can be retrieved from a database.

    What I find most distressing in many discussions of this sort is the yes-or-no mentality underlying them. There's a wooden assumption that a single, correct answer exists in some pure, vacuum-packed "information space." But in human and social affairs, such an answer never exists. Everything depends upon the boundless context out of which the "answer" -- which is really the creation of a pathway into the future -- arises.

    Of course the Senators can reasonably choose to bring their laptops into the deliberative chamber. One hopes that such a decision would be made in full awareness of the trade-offs, and with a profound resolve to master the technologies about which they must legislate.

    And of course they can reasonably choose to ban computers. That decision, after all, could be motivated by unease concerning the superficiality of television-mediated politics, and by a healthy awareness of the potential for a yet further degradation of their essential business. Suppose they were to say, rather shamefacedly, "We don't yet have the power or means to resist the potential downward pull of new media." Would we scorn them? I, for one, would find my regard for them suddenly raised.

    Don't forget, either, why they lack the power and the means. It comes down to our susceptibility to the cosmetized gimmickry on our video screens during election season.

    Teaching Children To Stalk the Wild Information

    If you want a good measure of our culture's blind faith in technology as the solution to non-technological issues, just start counting the number of educators and policy makers who in effect are saying (without evident shame), "We must have found the educational answer -- after all, we're shelling out billions of dollars for the computers that will implement it. We're just not sure what it is yet."

    At a time when less than fourteen percent of teachers believe that the Internet improves student academic performance, one policy maker acknowledges:

    The reality is we haven't the faintest idea what really works in a classroom.
    That was David Shaw, chairman of President Clinton's Committee of Advisers on Science and Technology. The committee's recommendation? Keep wiring schools, and meanwhile put some of those billions into figuring out what works.

    My logical quibble will doubtless strike many as sheer Luddism, but how does one gain such confidence in having the answer ... without having the answer?

    Shaw's confidence has filtered down to at least some of the troops. Victoria Deardorff, an Oakland third grade teacher, had her students conduct a Yahoo search for "manatee." After some classroom confusion about results that included a Manatee Restaurant and the sheriff's office in Manatee County, Ms. Deardorff said,

    I think this can be a great tool for them. There's so much information out there. [Ah, the magic of information again.] But I'm just learning it myself.
    We can rejoice that Ms. Deardorff's third graders have finally come upon a treasure trove of information, and we can be confident that they will quickly bring her up to speed on the technology. Meanwhile, we can hope that President Clinton's committee, having discovered such a sure-fire, investment-grade answer for the nation's educational challenge, will eventually figure out what it is -- and let teachers in on it.

    (Quotations and news from New York Times, October 25, 1997, p. A1.)

    From Couch Potato to Desk Potahto

    For years George Gilder has been proclaiming an imminent cultural renaissance. We are, he is sure, passing from a television culture to a wired culture, and this means, happily, that the television moguls can no longer force us to revel in smut. The television industry
    ignores the fact that people are not inherently couch potatoes; given a chance, they talk back and interact. People have little in common with one another except their prurient interests and morbid fears and anxieties. Aiming its fare at this lowest-common denominator target, television gets worse year after year.
    Gilder says that even with fifty or five hundred channels, television remains a lowest-common-denominator medium, and we have no choice but to lend it our denominators. On the other hand, "you'd never go into a bookstore with only five hundred titles. In the book or the magazine industries, 99.7 percent of the stuff is by definition not for you, and that's what the Internet is like." Bookstores and the Internet are what he calls "first-choice" media -- you get exactly what you want. And when you can get exactly what you want, you will suddenly be born again.

    It's true that, given a million channels on the Internet, we'll find pages we want or need -- all the more as society transfers needed things to the Net. There are major transformations under way in this regard. But Gilder's effort to discern a cultural renaissance in this fact is misguided.

    His mistake is simple: he writes as if we had been chained to the television, with its reduced menu of choices, and now are being liberated. But the truth is that we had an unlimited range of choice all along. We did get what we wanted. We did not have to watch television; we preferred to do so -- strongly enough to abandon many worthwhile activities in favor of the tube. As a result, television re-shaped politics, entertainment, and the culture as a whole.

    So, yes, we will be able to take advantage of the valuable things on the Net, just as we could take advantage of valuable books during the television era. But this still leaves open the question: how will the overwhelmingly dominant uses of the Net (which are, incidentally, becoming more and more television-like) re-shape society?

    There is no reason to expect that the lowest-common-denominator aspects of our lives (whether expressed through television or the Net) will carry a different weight in today's overall cultural context than they did in the primary television era. Technology does not deliver us from ourselves -- a truth that Gilder is making a career of wishing away.

    Does Technology Set Us Free?

    In the New York Times Magazine's special issue on technology (September 28, 1997), staff writer John Tierney argues that, while technology cannot change human nature, it biases us toward becoming better people in a better world.

    One of his contentions is that computers, like all gadgets, will get simpler to use. Hypnotized by the evolution of particular features, he loses sight of the increasing complexity of context made possible by this evolution. So, in welcoming the automobile's conversion from an unreliable, high-maintenance machine to the relatively care-free device of today, he fails to ask whether the accordingly lengthened daily commute is in fact easier and less stressful now than in the bad old days.

    (The logic of this kind of oversight, which I have called the Great Technological Deceit, has proven irresistible to the cheerleaders of technology. See "Is Technological Improvement What We Want?" in NF #38.)

    As part of his argument for a blessed evolution of technology beyond its early limitations, Tierney remarks that

    the phone didn't become wholly civilized until people were freed to ignore it by the invention of the answering machine -- a contraption that was despised before coming to be regarded as a necessity.
    He should have added that the answering machine remains one of the most despised pieces of technology. And he might then have asked, How well are we mastering technology when the machines we despise become universal necessities?

    Tierney covers a lot of other ground in much the same, unreflective manner. For example, he assures us that technology is solving our environmental problems, decentralizing governments and defanging dictators, saving us from the couch potato syndrome (there's the obligatory quotation from George Gilder), shortening the work week, presenting us with an unprecedented agricultural abundance, and giving us more control over our lives, bodies, and genes.

    The problem running through it all is his unrelieved focus upon externals. Yet the decisive risk of technology has never been external. It has from the first been recognized as the risk of losing our souls.

    Tierney comes tantalizingly close to acknowledging the real issue, only to drive past it:

    Although new technology is often described as a Faustian bargain, historically it has involved a trade-off not between materialism and spirituality -- lugging water from the well was not a spiritually uplifting exercise for most people, no matter how much it might appeal to the Unabomber -- but between individual freedom and social virtue.
    Unfortunately, Tierney says nothing further about this trade-off between individual freedom and social virtue, except to deny that it applies in the Age of Information.
    The Internet may look like a dangerously anarchic world, but it's actually fairly similar to the ancient environment in which humans evolved to become the most cooperative, virtuous creatures on earth.
    All this has something to do with the way our Pleistocene brains are "naturally inclined" toward exchanging information on the Net, and leads Tierney to his deepest attempt to analyze online communication:
    A surprising number [of Net users] seem to be acting out of pure goodwill.
    We can be thankful for the goodwill, but is there nothing more to say about the complex social impacts of electronic, networked communication?

    As to Tierney's preference for freedom over "spirituality," two things need saying. The first is that, if lugging water from the well is not a spiritually uplifting exercise as such, neither is drawing water from the tap or, for that matter, doing the work that pays for the water system, appliances, sewage disposal, pollution control, and all the rest. Tierney has failed to see that what we do, conceived in outward terms, is never the critical thing, but rather how we do it and what it means to us.

    Helena Norberg-Hodge, who has spent many years in the Himalayan mountain state of Ladakh, writes,

    Tourists see people carrying loads on their backs and walking long distances over high mountain passes and say, "How terrible; what a life of drudgery." They forget that they have traveled thousands of miles and spent thousands of dollars for the pleasure of walking through the same mountains with heavy backpacks. They also forget how much their bodies suffer from the lack of use at home. During working hours they get no exercise, so they spend their free time trying to make up for it. Some will even drive to a health club -- across a polluted city in rush hour -- to sit in a basement, pedaling a bicycle that does not go anywhere. And they actually pay for the privilege. (Ancient Futures: Learning from Ladakh, Sierra Club, 1992, p. 96)
    The mountain tourists, like Tierney when he imagines lugging water, have unwittingly become alienated from their own activities -- an alienation in which the role of technology is surely suspect.

    The second thing is this: the freedom Tierney hails must itself be an inner, spiritual quality if it is to have any enduring virtue. Aleksandr Solzhenitsyn pointed to this quality when he wrote of the Gulag:

    From the moment you go to prison you must put your cozy past firmly behind you. At the very threshold, you must say to yourself: "My life is over, a little early to be sure, but there's nothing to be done about it. I shall never return to freedom. I am condemned to die -- now or a little later. But later on, in truth, it will be even harder, and so the sooner the better. I no longer have any property whatsoever. For me those I love have died, and for them I have died. From today on, my body is useless and alien to me. Only my spirit and my conscience remain precious and important to me."

    Confronted by such a prisoner, the interrogation will tremble.

    It is not that freedom is impossible without terrible loss. But the loss does strip away everything incidental, enabling us to recognize freedom's essence and the interior source of its power. Despite external circumstances, Solzhenitsyn, not his interrogator, was the truly free individual. No other power than this freedom can defeat tyranny. I would agree with Tierney that freedom is the decisive gift of technology. However, it is not a ready-made gift; it is the reward for our resistance to the invitations of the machine. It is the consequence of our struggle to raise ourselves above the machine. And the more this gift comes within our grasp, the more impossible it becomes to say that technology makes us better. Why? Because to the extent we become free, we determine ourselves from within, and therefore cannot be determined by technology from without.

    The case for pessimism about technology is not the mirror image of Tierney's optimism. It is not a matter of saying that the material circumstances of our lives have really worsened. No, the case for pessimism lies in the degree to which technology has blinded us to what it would mean for things to get better. We cannot in freedom surmount the challenge of our machines so long as we fail to recognize it.



    *** Distributing Big Brother's Intelligence
    From Stephen L. Talbott (stevet@netfuture.org)

    I continue to put more of my book, The Future Does Not Compute: Transcending the Machines in Our Midst, online. The following excerpt is taken from chapter 25, "What This Book Was About." You will find the (rather long) complete chapter on my web site.



    Langdon Winner observes that "dreams of instant liberation from centralized control have accompanied virtually every important new technological system introduced during the past century and a half." He quotes Joseph K. Hart, a professor of education writing in 1924:

    Centralization has claimed everything for a century: the results are apparent on every hand. But the reign of steam approaches its end: a new stage in the industrial revolution comes on. Electric power, breaking away from its servitude to steam, is becoming independent. Electricity is a decentralizing form of power: it runs out over distributing lines and subdivides to all the minutiae of life and need. Working with it, men may feel the thrill of control and freedom once again.
    What Hart failed to notice, according to Winner, was that electricity is "centrally generated and centrally controlled by utilities, firms destined to have enormous social power" far exceeding what was seen in the days of steam (Winner, 1986: 95-96).

    Ellul makes a similar point when he notes how computers have made an obvious decentralization possible in banking (just consider all the ATMs), "but it goes hand in hand with a national centralization of accounting" (Ellul, 1990: 111).

    I do not dispute the important truth in these remarks. But I believe we can also look beyond them to a more horrifying truth.

    The Externalization of Instrumental Reason

    The totalitarian spirit, many have assumed, always rules from a distinct, physically identifiable locus of authority, and propagates its powers outwardly from that locus along discrete, recognizable channels. That is, it always requires a despotic center and a hierarchical chain of command.

    But there is another possibility. Every totalitarianism attempts to coerce and subjugate the human spirit, thereby restricting the range of free expression so important for the development of both individual and community. Who or what does the coercing is hardly the main point. Whether it is a national dictator, oligarchy, parliament, robber baron, international agency, mafia, tyrannical parent, or no one at all -- it doesn't really matter. And what the computational society begins to show us is how totalitarianism can be a despotism enforced by no one at all.

    To understand this we need to recognize that the computer, due to its reflected, frozen intelligence, is both universal and one-sided. It is universal because the logic of intelligence is by nature universal, linking one thing unambiguously to another and thereby forming a coherent grid of relations extending over the entire surface of every domain it addresses. But at the same time whatever is "off-grid" is ignored. The computer pretends with extraordinary flexibility to do "everything," and therefore what is not covered by its peculiar sort of "everything" drops unnoticed from the picture. /1/

    Moreover, the intelligence we're speaking of is an embedded intelligence, operating in the machinery of our daily existence. Where this intelligence links one thing to another according to a universal "hard logic," so also does the physically constraining machinery. And yet, the whole basis of the computer's power derives from the fact that no one -- no one present -- is in control. The logic and the machinery are, at the level of their own operation, both self-sufficient and without any necessary center. If the sequence of mathematical statements in a strictly logical demonstration possesses no center (and it does not), neither does the elaborated, computational mechanism onto which the sequence is impressed.

    Ellul's reference to "a national centralization of accounting" is not inconsistent with this decentering. An accounting database and its associated software must exist somewhere, and we can easily imagine that someone controls access to it. But this is less and less true today. Databases can be distributed, and in any case, direct or indirect access to them may be spread widely throughout all the different, interrelated activities and functions of a business. It is then truer to say that the system as a whole determines access than that any particular person or group does. In fact, anyone who arbitrarily intervenes (even if it is the person "in charge") is likely to face a lawsuit for having disrupted an entire business and thousands of lives.

    Technologies of embedded intelligence inevitably tend toward interdependence, universalization, and rationalization. No clique of conspiring power-brokers is responsible for the current pressures to bring the telephone, television, cable, and computing industries into "harmony." No monopolistic or centralized power, in any conventional sense, decrees that suppliers connect to the computer networks and databases of the retail chains -- a requirement nevertheless now threatening the existence of small, technically unsophisticated supply operations. (Most of them will doubtless adapt.) Nor does anyone in particular create the pressure for digitization, by which the makers of cameras and photocopiers are waking up to find they are computer manufacturers. And as the amount of software in consumer products doubles each year (currently up to two kilobytes in an electric shaver -- Gibbs, 1994), no one will require that the various appliances speak common, standardized languages.

    Again, it is no accident that the introduction of robots -- difficult in existing factories -- leads to a more radical redesign of manufacturing than one might first have thought:

    A robot requires a whole series of related equipment, that is, another conception of the business. Its place is really in a new factory with automation and computerization. This new factory relates machines to machines without interference. (Ellul, 1990: 316)
    In this relation of machine to machine there is much to be gained. But the gain is always on an instrumental level, where we manipulate the physical stuff of the world. By brilliantly focusing a kind of externalized intelligence at that level we may, if we are not careful, eventually lose our humanity.

    Allow me a brief tangential maneuver.

    Man and insect

    Zoologist Hermann Poppelbaum comments that the human hand, so often called the perfect tool, is in an important sense not that at all. Look to the animals if you want limbs that are perfect tools:
    The human hand is not a tool in the way the extremities of animals are. As a tool it lacks perfection. In the animal kingdom we find tools for running, climbing, swimming, flying etc. but, if man wants to use his hand to perform similar feats, he has to invent his tools. For this he turns to an invisible treasure chest of capacities he bears within himself of which the remarkable shape of the human hand itself appears to be a manifestation. A being with lesser capacities than man's would be helpless with such a hand-configuration. (Poppelbaum, 1993: 127-28)
    Endowed with nothing very "special" physically, we contain the archetypes of all tools within ourselves. The tools we make may be rigid in their own right, but they remain flexible in our hands. We are able to rule them, bending them to our purposes. The question today, however, is whether -- through the mechanization of intelligence -- we are entrusting that rule itself to the realm over which we should be ruling.

    Insects offer a disturbing analogy. It is Poppelbaum again who remarks on the absence of clear physical links to explain the order of an ant heap or beehive. The amazingly intricate unity seems to arise from nowhere. "Even the queen bee cannot be regarded as the visible guardian and guarantor of the totality, for if she dies, the hive, instead of disintegrating, creates a new queen." (Poppelbaum, 1961: 167)

    As a picture, this suggests something of our danger. Where is the "totalitarian center" of the hive? There is none, and yet the logic of the whole remains coherent and uncompromising. It is an external logic in the sense that it is not wakeful, not self-aware, not consciously assenting; it moves the individual as if from without. A recent book title, whether intentionally or not, captures the idea: Hive Mind.

    It is into just such an automatic and external logic that we appear willing to convert our "invisible treasure chest of capacities." But if our inner mastery over tools is itself allowed to harden into a mere tool, then we should not be surprised when the coarsened reflection of this mastery begins reacting upon us from the world. An important slogan of the day, as we saw earlier, bows to the truth: "what we have made, makes us."

    The slogan is not hard to appreciate. Living in homes whose convenience and mechanical sophistication outstrip the most elaborate factory of a century ago, most of us -- if left to our own devices within a natural environment once considered unusually hospitable -- would be at risk of rapid death. In this sense, our system of domestic conveniences amounts to a life-support system for a badly incapacitated organism. It is as if a pervasive, externalized logic, as it progressively encases our society, bestows upon us something like the extraordinary, specialized competence of the social insects, along with their matching rigidities. What ought to be our distinctive, human flexibility is sacrificed.

    From another angle: where you or I would once have sought help quite naturally from our neighbors, thereby entering the most troublesome -- but also the highest -- dimensions of human relationship, we now apply for a bank loan or for insurance. Not even a personal interview is necessary. It is a "transaction," captured by transaction processing software and based solely upon standard, online data. Everything that once followed from the qualities of a personal encounter -- everything that could make for an exceptional case -- has now disappeared from the picture. The applicant is wholly sketched when the data of his past have been subjected to automatic logic. Any hopeful glimmer, filtering toward the sympathetic eye of a supportive fellow human from a future only now struggling toward birth, is lost in the darkness between bits of data. Nor, in attempting to transcend the current system, can either the insurance employee or the applicant easily step outside it and respond neighbor-to-neighbor. The entire procedure has all the remarkable efficiency and necessity of the hive.

    So the paradoxes of power and powerlessness, of centralization and decentralization, are not really paradoxical after all. We can, if we wish, seek instrumental power in place of the freedom to achieve a distinctively human future. We can, if we wish, abdicate our present responsibility to act, deferring to an automatic intelligence dispersed throughout the hardware surrounding us. It scarcely matters whether that intelligence issues from a "center" or not. What matters is how it binds us.

    Whatever binds us may always seem as if it came from a center. Whether to call the automatic logic of the whole a center may be academic. The real question for the future is no longer whether power issues from many, dispersed groups of people or from few, centralized groups. It is, rather, whether power issues from within the fully awake individual or acts upon him from the dark, obscure places where he still sleeps. If it is exercised wakefully, then it is not really power we're talking about, but freedom and responsibility. But if it is not exercised wakefully, then centralization and decentralization will increasingly become the same phenomenon: constraining mechanism that controls us as if from without.


    1. This is related to what Joseph Weizenbaum called the "imperialism of instrumental reason." The chapter called "Against the Imperialism of Instrumental Reason" in his Computer Power and Human Reason deals wonderfully with a number of the themes discussed here.


    Ellul, Jacques (1990). The Technological Bluff. Translated by Geoffrey W. Bromiley. Grand Rapids, Mich.: Eerdmans.

    Gibbs, W. Wayt (1994). "Software's Chronic Crisis." Scientific American 271, no. 3 (September).

    Poppelbaum, Hermann (1961). A New Zoology. Dornach, Switzerland: Philosophic-Anthroposophic Press.

    Poppelbaum, Hermann (1993). The Battle for a New Consciousness. Translated by Thomas Forman and Theodore Van Vliet. Spring Valley, N.Y.: Mercury Press.

    Winner, Langdon (1986). The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: University of Chicago Press.


    *** About this newsletter

    Copyright 1997 by The Nature Institute. You may redistribute this newsletter for noncommercial purposes. You may also redistribute individual articles in their entirety, provided the NetFuture url and this paragraph are attached.

    NetFuture is supported by freely given reader contributions, and could not survive without them. For details and special offers, see http://netfuture.org/support.html .

    Current and past issues of NetFuture are available on the Web:

        http://netfuture.org

    To subscribe or unsubscribe to NetFuture:

    Steve Talbott :: NetFuture #59 :: November 4, 1997

