                                NETFUTURE
           Technology and Human Responsibility for the Future
--------------------------------------------------------------------------
Issue #38      Copyright 1997 O'Reilly & Associates      January 16, 1997
--------------------------------------------------------------------------
           Editor:  Stephen L. Talbott (stevet@netfuture.org)

                    On the Web: http://netfuture.org

    You may redistribute this newsletter for noncommercial purposes.

CONTENTS:

*** Quotes and Provocations
       Insecure Little Old Ladies
       I and My Genes
       Phone Answering Systems in Retreat?

*** Is Technological Improvement What We Want? (Stephen L. Talbott)
       Confronting the Big Lie

*** About this newsletter
*** Quotes and Provocations

Insecure Little Old Ladies

Imagine a rash of purse snatchings in which each thief, having wrenched the purse from some little old lady's arm, piously explains that he did it for her own good--to show her how insecure her purse was.

Well, no, I got that just a little bit wrong. What we're actually seeing is a continuing series of break-ins afflicting high-profile computer systems, where the culprits are indeed prone to announce, "I did it for your own good." A personal sacrifice, it seems, in the interests of improved security. After all, somebody's got to do it.
Computers seem to invite this nonsense. A small point for reflection: the purse is attached to a flesh-and-blood arm. It doesn't take a whole lot of moral imagination to sense--and sympathize with--the personal consequences of a missing purse and a wrenched arm. The computer, on the other hand, is attached to...what? Certainly nothing very concrete. The more we adapt ourselves to technology, the greater the imaginative effort demanded of us if we are to recognize the personal implications of our actions, vague and distributed as those implications tend to become.
Our adaptation to technology will surely continue. Where is the compensatory training of our imaginations occurring?
I and My Genes

Consider this report about a certain Kinsella:

   ...hands out glossy compact-disc cases to visitors. The cases are
   inscribed with their names, a picture of a DNA double helix and the
   words of the Delphic oracle: "Know thyself." Within a few years, he
   says, you will be able to hold up a compact disc and say: "That's
   me." The three billion bits of data that make up a human genome
   could be stored on a single disc. (Economist, Jan. 4, 1997)

And here I always thought my Net persona was the real me! In any case, Kinsella is not likely to print on his disc jacket this remark by geneticist J. S. Jones:
   The world's most boring book will be the complete sequence of the
   human genome: three-thousand-million letters long, with no
   discernible plot, thousands of repeats of the same sentence, page
   after page of meaningless rambling, and an occasional nugget of
   sense--usually signifying nothing in particular. [Nature, 1991,
   vol. 354, p. 323]

Come to think of it, that does sound rather like my Net persona.
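Kinsella's disc claim, incidentally, invites a back-of-envelope check. Here is a minimal sketch in Python, with every figure assumed rather than drawn from the Economist piece:

    # Back-of-envelope: does one human genome fit on one compact disc?
    # All figures below are rough assumptions, not data from the article.

    BASES = 3_000_000_000      # roughly three billion letters: A, C, G, T
    BITS_PER_LETTER = 2        # four possible letters -> 2 bits each
    CD_MEGABYTES = 650         # a typical mid-1990s CD-ROM

    genome_megabytes = BASES * BITS_PER_LETTER / 8 / 1_000_000
    print(f"Packed genome: about {genome_megabytes:.0f} MB")        # ~750 MB
    print(f"Fraction of one disc: {genome_megabytes / CD_MEGABYTES:.2f}")

At two bits per letter the genome slightly overflows a single 650 MB disc; but the repetitive text Jones complains of compresses well, so the jacket copy is fair to within a rounding error.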
I've never been prone to conspiracy theories, but it seems a little more than coincidental that some of the people who are now giving us maps of the human genome are the same ones who, several decades ago, gave us a team of monkeys typing away at the works of Shakespeare. Those monkeys must have been producing mountains of output in the meantime, and my sneaking suspicion is that what the geneticists are trying to foist upon us as the personal essence of our selves is really just a series of failed drafts by the monkeys. Haven't evolutionary geneticists been hinting at this all along by telling us that monkeys are where our genetic codes come from?
Moreover, Jones seems to offer a sly, knowing nod in our direction, for his phrase about genetic codes "signifying nothing in particular" eerily recalls this definition of life offered by Shakespeare:
   ...it is a tale
   Told by an idiot, full of sound and fury,
   Signifying nothing.

So there you have it. But every good conspiracy theory should be as all-embracing as possible, so I don't feel too bad about wondering whether the swelling document warehouses of the Primate Shakespearean Project might also have been leaking wholesale out onto the Web. You must admit that it would explain a lot. And haven't the fringe zealots like Minsky and Moravec been talking for years now about downloading their own genetic codes, while also mumbling obscure incantations over self-replicating data structures?
Phone Answering Systems in Retreat?

A report tells of Digital's decision to replace one of its automated telephone answering systems with human operators. In what presumably struck the Digital managers like a revelatory bolt from the blue, one of the company's consultants explained that
   it can get so complex to anticipate all the different requests and
   options that a caller can have. A person answering the phone can be
   much more flexible than a preprogrammed system.

The report also notes that the number of inbound calls (which Digital pays for) has now been reduced. "With the automated system...the same customer might call several times with the same problem." Why? As the consultant put it, "So many customers were irritated, they were hanging up."
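The consultant's point about complexity can be made concrete. A minimal sketch, with invented branching numbers, of how quickly a preprogrammed menu tree outgrows anyone's ability to anticipate callers:

    # Why anticipating "all the different requests and options" defeats
    # a preprogrammed menu. The branching numbers here are invented.

    OPTIONS_PER_MENU = 5   # assumed: "press 1 through 5"
    MENU_DEPTH = 4         # assumed levels before the caller gets an answer

    scripted_paths = OPTIONS_PER_MENU ** MENU_DEPTH
    print(f"Caller paths to script in advance: {scripted_paths}")   # 625

    # Every one of those endings must be programmed ahead of time; the
    # caller whose problem is path number 626 hangs up and calls again.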
Automated systems will doubtless work more successfully for some purposes than for others. But I have long wondered about the efficiency of these things even in many situations where they look good from the standpoint of the company installing them. When one full-time operator is exchanged for an automated answering system, then--ignoring capital costs, system maintenance, employee training, and all the rest--you could say that one staff position has been eliminated. But apparent efficiency for the company is not necessarily overall efficiency for society. It's easy to imagine that the hundreds of callers who previously went through the operator now "pay" in combined added call-routing time more than the equivalent of one full-time position.
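To put rough numbers on that back-of-envelope claim (every figure below is invented for illustration; I know of no study behind them):

    # Does eliminating one operator save time overall, or just shift
    # the cost onto callers? All figures are illustrative assumptions.

    CALLS_PER_DAY = 500           # assumed volume one operator handled
    EXTRA_MINUTES_PER_CALL = 2    # assumed added menu-navigation time
    WORKDAY_MINUTES = 8 * 60      # minutes in one full-time position

    caller_minutes_lost = CALLS_PER_DAY * EXTRA_MINUTES_PER_CALL
    positions_worth = caller_minutes_lost / WORKDAY_MINUTES
    print(f"Callers lose {caller_minutes_lost} minutes a day--")
    print(f"about {positions_worth:.1f} full-time positions' worth.")

With these assumed numbers the company books one salary saved while its callers collectively pay roughly two positions' worth of time.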
And, of course, it's not as if our efficient company will be wholly spared its own dose of the inefficiency it imposes on its customers. Presumably, company employees do their own share of calling to other businesses--and their own share of waiting.
There are, of course, many other factors to take into account, and I'm no economist. Maybe somebody out there can point to some meaningful studies. But, in any case, there are other, much more vexing issues about the entire range of intelligent systems, including telephone answering systems, and they are the topic for the featured article in this issue.
SLT
In the very first issue of NETFUTURE I wrote a piece called "The Fundamental Deceit of Technology." Taking telephone answering systems as my example, I suggested that technical improvement of these systems inevitably deepens the threat to their users.
I have since become more convinced than ever that the truth in this is fundamental to any social understanding of technology today. Moreover, the principles at work can be grasped and elucidated. In what follows, I have stolen a couple of paragraphs from that original article, and then gone on to begin creating a much larger context. It's only a start, however, and I expect this first installment to initiate an occasional series.
IS TECHNOLOGICAL IMPROVEMENT NECESSARILY GOOD FOR US?

Confronting the Big Lie

If there is ever a populist uprising against the co-option of human functions by mechanized intelligence, surely it will be provoked by the stupidity of telephone answering systems. Or, at least, that's what I thought the last time I found myself pushing buttons in tune with some anonymous programmer's infinite loop--a not-so-serendipitous journey, incidentally, that my pocketbook was funding.
But, hope as we might, the rebellion will not happen, for a simple reason: most of us have been convinced by what may be the most effective lie of all time, namely, the lie that the technologies we worry about are getting better. In our devotion to the lie, we are like the condemned man with a noose around his neck, who remained convinced even as he fell through the trap door that the experience would fix his long-standing neck problem. He was right, in his own fashion, and so are we.
In a sense, this is true. When I call a business in the future, the options will be more numerous, and I'll be able to negotiate those options with voice commands more complex than "yes" and "no."
But this is to ignore an obvious fact about the new capabilities: their reach will be extended. Where primitive software eventually routed you to a human operator, the friendlier version will replace the operator with a software agent who will attempt to conduct a crude conversation with you.
So the earlier frustrations will simply be repeated--but at a much more critical level. Where once you finally reached a live person, now you will reach a machine. And if you thought the number-punching phase was irritating, wait until you have to communicate the heart of your business to a computer with erratic hearing, a doubtful vocabulary of 400 words, and the compassion of a granite monolith!
In other words, the technical opportunity to become friendlier is also an opportunity to become unfriendly at a more decisive level. This is no accident. The technical improvements we apply within the restricted arena entail exactly the sort of broader reach that carries them beyond this arena. Programs that do a better job recognizing spoken words like "one" and "two" are almost certainly based upon technology that we can now apply, if only clumsily, to a much wider range of speech.
But the lie is easy to underestimate. It may already have ensnared us through the wording of the preceding paragraph. Aren't things at least getting better? We can imagine an objector arguing like this:
   You say that software improves within a limited sphere only by
   reaching beyond that sphere, where the consequences of
   misapplication are even more critical. But you forget that within
   the original sphere things really do improve, and this domain of
   satisfactory performance steadily widens at each successive step.
   By always looking at the outer edges where the software still lacks
   maturity, you form a needlessly pessimistic picture of the overall
   progress.

But, no, this is wrong-headed. The objector, who lives more or less assertively in all of us, is shuffling between two altogether different issues. Certainly the technology is getting more sophisticated--no one would deny that. But my frustration on the telephone was not, in the first instance, a frustration with the state of the technology. What bothered me was an artificial disruption of the normal potentials of human exchange. Yes, an ill-considered use of technology was the cause of my discomfort, but what I wanted, in a direct sense, was relief from the disruption, not technical advance. And if the technical advance prepares the way for a yet more critical barrier to human exchange--well, forgive me if I do not call this progress.
So what we see is a vicious and endless cycle: technical progress comes between us and certain of our expressive powers, and we complain. The complaint is met by an honest assurance that the responsible technology is getting better--which we all can see is true--so that the remaining, muted complaints are dismissed as Luddite. Never mind that the improvements at issue will move the spear point of the complaint yet a little closer to the throbbing heart of the human condition.
But, again, the problem was not experienced directly as a lack of computational power; it was felt, rather, as inconvenient delay, lost personal time, the inaccessibility of "cutting-edge" software, and the difficulty of working with awkwardly performing tools. No technical advances are ever likely to alter the fundamental shape of these problems, because the advances and our frustrations lie on separate planes.
In fact, as long as we are driven to desire the latest technology for its own sake (which is very much part of the human side of the problem), memory improvements and all the technical innovations they stimulate can only worsen our situation: the pace of change accelerates, new inventions proliferate, and every cutting-edge toy we play with is now twelve months instead of twenty-four months away from the inadequacies of its obsolescence. Clearly this shrinking time interval tells us more about our prospects for satisfaction than does the increasing density of integrated circuits.
What we forget is that the arms race between the powers of information proliferation and the powers of information management is an endlessly escalating one. The logical finesse with which we manage information is the same logical finesse that generates yet more information and outflanks the tools of management. Software agents are quite as capable of mindlessly flinging off information as of mindlessly collecting it.
Surely there is only one escape from the mindlessness: to realize that the essential contest is not between information management and information inflation, but between the obsession with information (well managed or otherwise) and the habit of quiet reflection. It is not an overload of information so much as a deficit of meaning we suffer from, not a lack of proper filters so much as the loss of mental focus--an inadequate power of sustained attention to what is important.
The technical advances of the past decades have not perceptibly improved our position. Quite the opposite: the sheer abundance of this success requires from us an even more heroic resistance to the temptation of mental scattering. We must work ever harder to prevent the attenuation of the threads of meaning beneath the accumulating weight of undigested information.
This, then, is the Great Technological Deceit. Confusing the technical and human levels of a problem, we assure ourselves that technical advances will improve the situation, whereas in fact they ensnare us ever more securely. There is a kind of rampaging technological aggrandizement at work here, and we have not yet shown, as a society, that we have a clue about managing it.
*** About this newsletter

Copyright 1997 by The Nature Institute. You may redistribute this newsletter for noncommercial purposes. You may also redistribute individual articles in their entirety, provided the NetFuture URL and this paragraph are attached.
NetFuture is supported by freely given reader contributions, and could not survive without them. For details and special offers, see http://netfuture.org/support.html .
Current and past issues of NetFuture are available on the Web:
http://netfuture.org/
To subscribe or unsubscribe to NetFuture:
   http://netfuture.org/subscribe.html

Steve Talbott :: NetFuture #38 :: January 16, 1997