                                    NETFUTURE
    
                       Technology and Human Responsibility
    
    --------------------------------------------------------------------------
    Issue #48       Copyright 1997 Bridge Communications          May 14, 1997
    --------------------------------------------------------------------------
                Editor:  Stephen L. Talbott (stevet@netfuture.org)
    
                         On the Web: http://netfuture.org
         You may redistribute this newsletter for noncommercial purposes.
    
    CONTENTS:
    
    *** Editor's Note
    *** Quotes and Provocations
          When Speeding Up Is of Doubtful Value
          Wow! (again)
          Chess and Symbolism
    *** Is Technological Improvement What We Want?  (Part 3) (Stephen L. Talbott)
          On building a global prison for ourselves
    
        Departments
    *** Letter from Des Moines (Lowell Monke)
          An unfunded mandate to teach children to read
    *** Announcements and Resources
          Conference: Ethics of Electronic Information
          Loka Institute and Citizen's Technology Panel
          Technological Determinism; Henry Perkinson; the Human Heart
    
    *** About this newsletter
    

    *** Editor's Note

    I had no takers on my inquiry several issues back, so I'll repeat it just this once: If your organization has a web-worthy PC or Mac (preferably with relevant peripherals) that it could donate to a nonprofit, tax-exempt organization, please get in touch. It looks like such equipment is going to be important for NETFUTURE's future.

    SLT



    *** Quotes and Provocations

    When Speeding Up Is of Doubtful Value

    The following story would have made a fitting epigraph to my earlier piece on "technological aimlessness" (NF #45):

    At 40,000 feet the pilot of a 747 jetliner comes on the intercom and announces to the passengers, "I've got bad news and good news.

    "The bad news is that our navigational equipment has failed, we're lost, and we don't know where we're headed.

    "But the good news is that we've picked up a strong tailwind and are making record time."

    (Thanks to Joseph Weizenbaum for passing this along.)

    Wow! (again)

    From the New York Times (May 10) via Edupage:

    Microsoft chief executive Bill Gates told the more than 100 attendees of a "CEO Summit" convened in Seattle that they should plan to fulfill their "wildest dreams" because computing power will continue to increase rapidly in the years ahead.

    Well, I sat down with my wife last night and said, "Did you hear? CPU cycling time is going to reach a point soon where we can fulfill our wildest dreams!" She was strangely unmoved. We talked awhile, and finally she convinced me that poor Bill must not have a very exciting dream life. But there's more:

    Urging them to focus their thoughts less on how technology will change in two years and concentrate more on how it will change in ten, Gates encouraged them to build company information systems that would be as fast and responsive as "digital nervous systems."

    Actually, don't we already see digital nervous systems today? They're in all those television- and video-game-trained kids who figure in the current medical epidemic of hyperactivity.

    Chess and Symbolism

    First there was Dolly, then there was Deep Blue.

    What strikes me is the exaggerated degree of public excitement provided, in both cases, by non-events. The wrinkle in cloning technology that produced Dolly hardly represented a dramatic new direction in genetic engineering. The designer-gene folks were already fiddling with the DNA of human embryos, and had in fact cloned such embryos in the laboratory (discarding them before they had advanced very far). And, on a different front, most people in the United States regularly consume food products manipulated in one way or another by genetic engineering, without a second thought.

    Yes, there are huge issues here, but they are pervasive issues and didn't need Dolly in order to become critically important.

    As for Deep Blue's "historic victory," it is odd to hear cries of angst from those who happily subordinate their artistic gifts to the constraints of computer graphics, or who conduct 75 percent of their human conversation through email, or who manage their businesses primarily from a spreadsheet. One hopes that, at least on an unconscious level, the angst is a recognition of our own state -- a dim awareness of what we have been voluntarily sacrificing of ourselves.

    In any case, emotion-packed symbolic events seem to be what we and our media thrive upon. It is too bad, however, that the potential educational value of such events is lost in the attempt to milk them for maximal impact and commercial value. The symbol never gets connected in any profound way to the deeply embedded issues it symbolizes -- which is to say that it ceases to be a symbol, and becomes an idol instead.

    A startling example of this trivialization of the symbol occurred recently when Raichle Molitar (distributors of Fischer's Revolution Skis) announced a Che Guevara look-alike contest. Apparently worried about the cold-blooded ruthlessness to which the historical Guevara was susceptible, a company spokesman assured the world: "We felt that the Che image -- just the icon and not the man's doings -- represented what we wanted: revolution, extreme change."

    But what makes an icon an icon is nothing other than the deeper reality seen through it -- what it stands for. Without the reality, the icon is merely a blank surface, a symbol of our own emptiness.

    Both genetic engineering and computer engineering are now conceived as offspring of the informational age. Yet it is significant that this era -- the "you are there" era of television, and the global communications era of the computer -- should pre-eminently have produced the non-event, the image that images nothing, the powerful non-symbol that does no more than send a temporary, uncomprehending jolt through a passive public. We should look for the connection between our passion for information and this hollowing out of the meaningful symbol, but instead we cry out on behalf of ourselves and our children, "Let us have more information."

    The chess match could serve as a powerful symbol, but we seem to be getting it wrong. My local paper (Albany Times Union) reported that "the science fiction nightmare came true Sunday." But if there is a nightmare, it is the PC sitting on the desk in front of you and me. Actually, though, it is even closer than that. The "titanic struggle between human and machine" (Times Union again) occurs first of all within us. It is between the part of us that is content to compute -- having created the computer as a partial expression of our own nature -- and the part of us where we are fully present to ourselves and to each other.

    Presence of mind -- knowing what we think and feel and will, and why, and taking responsibility for it in a social context -- this capacity will either grow in us, or else it will lose out. In either case the "threat" of Deep Blue as a frontal antagonist will not matter very much. What matters is our deference to information and computing power on the one hand, and the corresponding loss of meaning and human presence of mind on the other.

    The chess match in Manhattan more aptly symbolizes our self-abnegating worship of the machine than our victimization by it.

    SLT



    *** Is Technological Improvement What We Want? (Part 3)
    
    From Stephen L. Talbott (stevet@netfuture.org)
    

    In part 1 of this series (NF #38) I argued that the opportunity to make software more friendly is also an opportunity to make it unfriendly at a more decisive level. For example, improved voice recognition software in telephone answering systems, while overcoming some of the present klutziness, will also enable companies to turn more of their callers' important business over to software agents. Our failure to reckon with this is at the heart of what I called the Great Technological Deceit.

    In part 2 (NF #40) I considered the evolution from low-level to high-level programming languages, and suggested that here, too, we can see the marriage of greater technical reach and deepened threat.

    Now I would like to look broadly at certain characteristics of intelligent systems in general.

    
    

    ON BUILDING A GLOBAL PRISON FOR OURSELVES

    
    A program is an elaborate logical structure that we typically map to some structured aspect of our lives. The software engineer today can perform this mapping with remarkable subtlety and sophistication. What is not so often noticed is the extent to which, once the program is up and running, an active mapping in the reverse direction occurs: we end up adapting part of our lives to the program's logical structure.
    

    This is worrisome. The program just is what it is, and remains content to run ceaselessly in the same logical grooves. The essence of life, on the other hand, is continually to transform itself from within, outgrowing and shedding old skins, and occasionally undergoing radical metamorphosis. Even the rigid shell of the acorn must crack and give way to the incarnating vision of an oak stirring within the tender seedling.

    Intelligent Systems Demand Universal Rationalization

    No computer program absolutely compels our submission to its structuring of our activity. Nor does it prevent us from subsequently modifying the program logic. But, as the Year 2000 Problem reminds us, even seemingly minor aspects of software, once well-entrenched, can be hard to change later. Meanwhile, digital logic extends its grid-like filaments ever more finely through society.
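
    The Year 2000 Problem itself shows how small the entrenched detail can be. Here is a minimal sketch, in Python and purely for illustration (the actual afflicted systems were mostly written in older languages such as COBOL), of the two-digit-year convention at its root:

        # The space-saving convention of storing years as two digits,
        # replicated through countless file formats and programs.

        def years_elapsed(start_yy, end_yy):
            # Legacy arithmetic on two-digit year fields.
            return end_yy - start_yy

        # A policy written in 1997 ('97') and falling due in 2002 ('02'):
        print(years_elapsed(97, 2))   # prints -95, not 5

    The fix is trivial in any one program; the entrenchment lies in finding the convention everywhere it has burrowed in.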

    Of course, other technologies also present us with effects that are difficult to reverse. (Just think of the network of roads and highways.) But intelligent systems are distinctive in their ravenous urge toward aggrandizement. They always exhibit certain absolute and universal pretensions. They "hunger" for mutual articulation of their parts at the most global level possible. It's the very nature of formal reason and logic to demand global consistency -- it's that or else let the different subsystems fall into a grating disharmony. In the end, your toaster, stereo, computer, camera, and automobile will all be articulated into a single, intercommunicating system.

    This need for universal rationalization is why, as Jacques Ellul points out, it's harder to introduce robots to an existing factory than to organize a new factory around them. (1) In the same vein, historians Martin Campbell-Kelly and William Aspray remark that, whereas older office appliances (typewriters, cash registers, filing systems) could be "dropped into" an existing office without changing the underlying organization, the matter was altogether different with the punched-card machine and the computer: "the business's entire information system had to be reconfigured around the machinery." (2) Likewise, small wholesalers are being required, sometimes at prohibitive expense, to tie into the computer networks and databases of the retail chains.

    Thus, the web is spun ever more finely, and extends its filaments outward. In its own terms, this process knows no natural stopping place.

    It's hard to open a paper or magazine today without finding testimony to the demand for universal, single-system rationalization of the social processes we have adapted to digital logic. For example, the April 26 Economist carries a story about Proton, Belgium's ambitious experiment to establish an electronic-purse smart card. The author of the story writes,

    Proton ... faces complaints about parochialism. Its chip can hold just one currency. Mondex and Visa Cash, a rival run by a huge bank consortium, can store value in up to five different currencies ... Then there is inter-operability. If it is to become a global force, Proton must link its international members through an integrated payments system that allows, say, a Belgian customer to spend his e-cash in Switzerland .... No matter how impressive Proton's technology, ceding independence may be its only hope of plugging into a global network.

    Walter Wriston, the former chairman and CEO of Citicorp/Citibank, opines that

    "Open" has become the pretty girl at the party. Nobody quite knows what open means, but the world is demanding that systems talk to each other. (3)

    We can, I think, rightly believe that the weaving of planetary society into a global unity is an inescapable necessity for our day. But the unity of a logical system and the unity of an organism are very different things, and it is not at all clear that the social organism can remain healthy once it is bound securely by the iron logic of technology.

    Complex Technological Systems Are Brittle

    Every rationalized system erected upon logic becomes brittle just to the extent its drive toward interdependence and universality is fulfilled. Think of a web of lies, where the overriding aim is to maintain logical consistency as proof against detection of wrongdoing. The more elaborate the web, the greater the likelihood of a fatal inconsistency, and the greater the difficulty in patching over rents in the fabric.

    A similar limitation has afflicted expert systems, which, despite their early promise, eventually succumbed to their own brittleness. As the dean of software engineering, Frederick Brooks, Jr., put it, a point came where the developers of expert systems suffered a "rude awakening":

    Somewhere in the neighborhood of 2,500 to 3,000 rules, the rule bases become crashingly difficult to maintain as the world changes. (4)

    We need to make a crucial distinction, however. There is no necessary brittleness in logically elaborated systems as such. A computer can take a consistent system of logical axioms and mechanically adduce valid propositions without end. The result may not be very interesting, but maintaining consistency is no great challenge.

    The brittleness arises when we try to correlate the logical system with reality. The failure of complex expert systems occurs, Brooks noted, as the world changes. In other words, the more faithfully we extract the logical structure from a real-world situation and impress this structure upon computational machinery -- that is, the more detailed and successful the mapping from world to machine -- then the more hopeless the task of adapting the machine to changes. Or, looking at it the other way around: the more we adapt ourselves to all the ramifications of a logical system, the more that system will bind and chafe us as we attempt to grow. (5)
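
    To make the mechanism concrete, here is a minimal sketch, in Python with rules invented for illustration, of the forward-chaining rule base at the heart of a classical expert system. Each rule is trivially consistent on its own; the trouble Brooks describes arrives when the world the rules encode moves on:

        # A toy forward-chaining rule base. Real expert systems elaborated
        # structures like this into thousands of rules.

        RULES = [
            # ({conditions that must all hold}, conclusion to add)
            ({"engine_cranks", "no_spark"},     "ignition_fault"),
            ({"ignition_fault", "points_worn"}, "replace_points"),
            ({"engine_cranks", "no_fuel"},      "fuel_fault"),
        ]

        def infer(facts):
            # Apply rules repeatedly until no new conclusions emerge.
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for conditions, conclusion in RULES:
                    if conditions <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        changed = True
            return facts

        print(infer({"engine_cranks", "no_spark", "points_worn"}))
        # adds: ignition_fault, replace_points

    When electronic ignition replaces breaker points, the "replace_points" rule is not simply deleted: every rule that feeds it or depends on it must be found and reworked, and the whole base kept mutually consistent. At a few thousand rules, that is the "crashingly difficult" maintenance Brooks reports.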

    This helps to clarify the common notion that computers introduce a radical new principle of flexibility into otherwise rigidly constrained, mechanical processes. Two things need saying here. First, it is not true that there is a new principle. Old-style machines, from the loom to the automobile, always tended to gain a more finely calculated and adjustable range of capabilities as their designs grew in maturity and complexity. Computers simply continue a movement in the same direction -- and accelerate it remarkably. But they remain in the same game.

    Second, the "flexibility" available in this game is merely an ever closer mapping between the machine and a particular bit of the world that is itself conceived as a fixed and determinate machine. This flexibility is, in other words, exactly what made expert systems increasingly successful up to a point -- and then hopelessly brittle as the world changed. This sort of success, built upon generalization and abstraction (which is the only sort of success open to the computer), cannot keep up with the qualitatively determined transformation that constitutes the progressive inner realization and outer manifestation of a self or community of selves. Such transformation arises from the meanings in our lives, and these meanings -- qualitatively lived and deepening realities, as opposed to static linguistic products -- are not programmable. (6)

    A Tightening Web

    That the machine-world has been tightening about us like a skin is a fact that strangely escapes our attention. Yet the individual who spends many of the productive hours of his day sitting in front of a screen, expressing himself through mouse and keyboard, occupies the terminus of a remarkable evolutionary development. For the first time in history we have accepted, without any clear limit, the mechanical mediation of our interactions with the world. It is as if the 19th-century factory had contracted more and more about the individual worker, becoming uncannily subtle and transparent, until the worker, who at first had rebelled within himself at the machine's inhumane depredations, finally yielded himself to what he could no longer even see -- and pronounced it enjoyable.

    That we have not yet awakened to the issues this raises is evidenced by the entrenched conviction that all worries about software can be addressed through improvements to the software. If we could just make the intelligent machinery around us more flexible, more perfectly adapted to our needs of the moment....

    But this, as we have seen, is the Great Technological Deceit; as long as we are in its grip, our plight can only worsen. The greatest potential for disaster exists, after all, where the machinery has grown to fit us like a skin. The closer the fit, the more radically it constrains us. No straitjacket would prove more effective than my own skin, if it did not grow organically from within.

    This is not to say that we should refuse to improve our software. But it is to say that until we recognize the huge challenge on our side as we confront this software -- a challenge that becomes more acute with every technical upgrade -- we will be losing ground in the essential struggle.

    The growing from within, the enlarging or shedding of skins, the unexpected metamorphosis -- these must be accomplished by us. The effort becomes greater in direct proportion to the degree our inner movements have been smoothly and unconsciously "entrained" by the efficient software with which we interact.

    I have spoken at various times about particular dangers of this entrainment in relation, for example, to banking software, decision support systems, crime management systems, email, and various other applications. But the fact appears to be that most people have a difficult time actually sensing any of the more profound risks posed by the increasingly ubiquitous and invisible threads of digital logic that shape our lives.

    
    1. Ellul, Jacques, The Technological Bluff, transl. by Geoffrey W. Bromiley (Grand Rapids, Mich.: Eerdmans, 1990), p. 316.

    2. Campbell-Kelly, Martin and William Aspray, Computer: A History of the Information Machine (New York: HarperCollins, 1996), pp. 152, 175-80.

    3. Wired, October 1996.

    4. Communications of the ACM, March 1996.

    5. The problem is not restricted to expert systems, but applies to every attempt to map a purely formal system (such as a computer) to the world. For some preparatory circling around these issues, see the chapter "Can We Transcend Computation?" in Talbott, Stephen L., The Future Does Not Compute: Transcending the Machines in Our Midst.

    6. Again, see the chapter referred to in the previous note.




    *** An unfunded mandate to teach children to read
    
    From Lowell Monke (lm7846s@acad.drake.edu)
    
                                                        Letter from Des Moines
                                                                   May 5, 1997
    
    On the agenda for the April 15 Des Moines School Board meeting was a final vote on the district technology plan. (I reported on financial consequences of the plan in NF #43.) A couple of interesting issues emerged from the discussion. First, a board member pointed out that the consulting firm hired by the district two years ago to advise on technology planning had said our computer support costs would be nearly five times the cost of the equipment itself over 5 years of use. (This seems high to me, but if you take into account all of the salaries of support personnel, cost of housing the support, training, and so on, then the figure might not be too far off.) What struck me is that at least this one board member recognizes the enormous expense attached to buying and supporting computer technology. That's progress.

    For the next five years we're going to be buying thousands of computers with the $2 million a year the state is throwing at us. In NF #43 I discussed how many more technicians this should mean. But the district budget, which was approved April 1, does not provide any increase in technical support costs. Being essentially broke, the district is holding the line on all "discretionary" expenditures. But if the school board member is right, then to make these tools work properly we should expect to spend about the same amount per year per computer on support as we do to buy them. Where that additional $2 million next year will come from, or $4 million the following year, or $6 million the year after that (at which time the figure may stabilize as we begin to retire as many machines as we buy) has not been determined. But we can be sure that once we get the machines in the classrooms there will be strong pressures exerted to keep them working lest they become desk ornaments and the district be accused of wasting resources.
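
    The arithmetic behind those figures is easy to check. A minimal sketch, in Python, using the dollar amounts above; the three-year retirement age is my inference from the letter's own $2/$4/$6 million progression:

        # Support costs implied by the board member's estimate: roughly
        # the purchase price again, per machine, per year (5x over 5
        # years of use). The district buys $2 million of machines a year.

        ANNUAL_PURCHASES = 2_000_000   # dollars of new machines each year
        YEARS_IN_SERVICE = 3           # inferred retirement age (assumption)

        for year in range(1, 7):
            fleet = ANNUAL_PURCHASES * min(year, YEARS_IN_SERVICE)
            print(f"Year {year}: support bill roughly ${fleet:,}")

        # Prints $2,000,000, then $4,000,000, then holds at $6,000,000
        # once machines are retired as fast as new ones arrive.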

    The second issue seems to be related to this. The district is adopting a new reading program this year for grades K - 8. It will cost almost $900,000, paid to the publishing company whose materials were selected. This came up for approval April 15 as well. The same board member asked if we could save some money by putting off buying the software that is associated with the texts. He suggested evaluating the program after a year to see if it was working sufficiently well without the software. The publisher's representative strongly objected to this (surprise!), stating that the software is an integral part of the reading program and to leave it out would cripple the whole program. The board approved buying the software.

    It seems to me that my district has fallen victim to two varieties of unfunded mandates. The first -- wonderfully ironic given how the states howl when the feds do this to them -- is the state's mandate to buy computers without putting up any money to help with support. The second is the textbook company's integration of software programs into their reading curriculum in such a way that it forces the adopting school to provide computers. (I have been told by a member of the adoption committee that this is now the norm with reading programs.) In our case it means that close to 1000 computers have to be purchased just to get one into every classroom using the reading program -- which means that the peripheral devices needed to operate one aspect of this reading program will cost more than all the text materials and resources of the program itself.

    We can't afford to buy a computer for every classroom this year even with the special state money. It's not at all clear how those poor teachers who have to go without for a year or two will be able to teach their children how to read. It may turn out that many of their students will suffer terribly and end up in a remedial program, which, because it gets special federal funding, will be able to get lots of computers to catch them up again.

    Lowell



    *** Announcements and Resources

    Conference: Ethics of Electronic Information

    The University of Memphis (Tennessee) is sponsoring a conference on "The Ethics of Electronic Information in the 21st Century," to be held September 26 - 28, 1997.

    The conference web site, with call for papers, is at http://www.memphis.edu/ethics21/index.html. The contact for general information is Tom Mendina (tmendina@memphis.edu); phone: 901-678-4310; fax: 901-678-8218. Here's a blurb from the conference organizers:

    Recently historian Neil Postman warned against the naive belief that information is "an unmixed blessing, which through its continued and uncontrolled production and dissemination offers increased freedom, creativity, and peace of mind" (Technopoly, 1992, p. 71). Indeed, information and information technology raise a host of difficult issues:

    ** Who will be authorized to have access to the plethora of information that is generated by computers in the 21st century?

    ** Will privacy, that most revered of American values, be passe, given the power of computers and the invasiveness of information bureaucracy and technology?

    ** Will the possession of information mean riches for the possessors, and will those possessors of information inevitably be the rich nations and neighborhoods of the earth?

    ** Who will own information, and who will be barred from access to information? How will copyright be administered on the Internet?

    These are only a few of the myriad of questions and concerns that occur to practitioners in a variety of professional fields.

    Loka Institute and Citizen's Technology Panel

    Many of you will remember the multi-part interview we did with Richard Sclove, director of the Loka Institute (NF #6, 7, 8, 10, 15). He spoke a great deal about bringing citizen participation to the formulation of technology policy. As a result of pioneering efforts by Sclove and the Institute, a first-ever citizens' panel convened for a few days in April to consider telecommunications issues and make policy recommendations.

    The event seems to have been quite a significant success. You can find out the details at the Loka web site or by sending email to loka@amherst.edu.

    Technological Determinism; Henry Perkinson; the Human Heart

    I have put up three new papers on my web site:

    "On Being Determinedly Literate"

    A discussion of technological determinism with specific reference to Walter J. Ong's Orality and Literacy. (Delivered at an Ong seminar sponsored by the Department of Communications and Media Studies, Fordham University.)
    "Aversion to Risks -- Or Loss of Meaning?"
    A critical response to Henry J. Perkinson's No Safety in Numbers: How the Computer Quantified Everything and Made People Risk-Aversive. I argue that the loss of meaning is a more fundamental factor than risk-aversion in understanding the computerized society. (Delivered at the inaugural annual conference of the New Jersey Communication Association.)
    "Between Discordant Eras"
    Reflections upon the nature of the human heart. When William Harvey began dissecting animals and observing the heart at the moment it ceased moving, what ancient knowledge of the human being was lost? Can we possibly retrieve any of that knowledge? Clearly it will not be easy.
    SLT



    *** About this newsletter

    Copyright 1997 by The Nature Institute. You may redistribute this newsletter for noncommercial purposes. You may also redistribute individual articles in their entirety, provided the NetFuture url and this paragraph are attached.

    NetFuture is supported by freely given reader contributions, and could not survive without them. For details and special offers, see http://netfuture.org/support.html .

    Current and past issues of NetFuture are available on the Web:

    http://netfuture.org/

    To subscribe or unsubscribe to NetFuture:

    http://netfuture.org/subscribe.html

