NETFUTURE

Technology and Human Responsibility for the Future
--------------------------------------------------------------------------
Issue #32      Copyright 1996 O'Reilly & Associates      November 10, 1996
--------------------------------------------------------------------------
Editor: Stephen L. Talbott (email@example.com)
On the Web: http://netfuture.org
You may redistribute this newsletter for noncommercial purposes.

CONTENTS:

*** Editor's note
      Locking Ourselves in to Standards
      The Net As Centralizing Force
*** Arguing about privacy (Phil Agre)
      Privacy is not secrecy
*** Response to Agre (Stephen L. Talbott)
      A healthy private sphere requires a healthy public sphere
*** About this newsletter
Locking Ourselves in to Standards
---------------------------------

"Change everything." Do these words ring a bell in your mind? What strangely re-echoes through my mind are those complaints about "customer lock-in" from the bad old days before standards. The problem then was that one could not shift particular software or hardware purchases from one vendor to another without replacing entire systems; you had to change everything.
The problem today is...well, pretty much the same thing, except that now it operates on a grander scale. Where once we were bound to the products of a particular company--and before that to individual product lines within a company--now we are increasingly indentured to industry-wide and even global standards from which there is no apparent escape even at great expense.
Yes, there are advantages to standards and open systems. We gain an ability to conduct business ever more widely, and in some terms at least, more efficiently. But what brings us this advantage is a technical system whose underlying logic becomes steadily more universal, more finely rationalized, more precisely articulated--and therefore harder to escape once the logical cement "sets". The "everything" in "you'll have to change everything" reaches toward...everything.
So we escape our parochial bonds only to find ourselves locked in more decisively at a cosmopolitan level. Unfortunately, while the local constraints were easy to notice--if only because the competing vendors advertised side by side in the trade rags--the global ones begin to seem like necessity. "That's just the way the world is." We lose sight of the choices that have led us step by step to close off future options.
This is actually that same "fundamental deceit of technology" to which I pointed in NF #1. The issue in that earlier article was the improvement of telephone answering systems by means of more sophisticated voice recognition software. I argued that "the technical opportunity to become friendlier is also an opportunity to become unfriendly at a more decisive level."
My contention was that an ever more profound frustration of human purposes is the currently prevailing law of technological development, underlying innumerable claims of progress. This is not particularly easy to demonstrate, and is complicated by widespread confusion of the various levels at which dramatic gains as well as losses occur. Nevertheless, a lot of thought has convinced me that my original argument was sound--and that a general recognition of the issues is critical for society.
A few editions down the road I will publish here a multi-part essay I have written in an attempt to identify the working of the technological deceit across a range of technologies.
The Net As Centralizing Force
-----------------------------

Back in the Later Middle Ages, around 1994, I often found myself lectured by experts who asserted that great bureaucratic institutions, government and corporate alike, would necessarily and inevitably disintegrate in the world of the Internet. It's almost 1997 now, and as the months go by I find it ever harder to remember why this great disintegration was supposed to take place. If evidence counts for anything, we are actually living in an unprecedented era of concentration and centralization. ABC has now become the Disney Infomercial Channel, and British Telecom is buying MCI. And the more I learn about Internet economics, the more the Internet seems like a veritable engine of monopoly-creation. So argues Phil Agre.
My own inclination is not so much to dispute Agre's interesting remarks as to add to them a suggestion that the terms of the old centralization vs. decentralization debate are being left behind. There is, as I argue in chapters 8 and 25 of The Future Does Not Compute, a kind of decentralization that can be fully as tyrannical as any form of centralization. We can be ruled by Systems that lack effective centers. (The next time you are in an environment where "they" are being blamed for pervasive ills, ask yourself who exactly "they" might be, and how they have legislated these ills.)
In fact, nearly all large companies like Disney and British Telecom are themselves testimony to a weakening of the center; they are driven much more by market necessities than by conscious choice about what is worth doing. Such choice, if it were attempted, would prove practically impossible to carry out without whole organizations crumbling. Executives have little freedom in the matter, and can hardly be said to direct the huge institutions that employ them. Direction is given instead by an increasingly diffuse logic that the executives, too, must serve. The president of Shell Oil was once asked how it felt to be at the top of his company. "It's like sitting on the back of a huge brontosaurus and watching where he's going."
This is not to deny the highly visible perquisites of power. But those perquisites are one thing, and the ability to shape the fundamental logic of the modern commercial juggernaut in any meaningful sense is quite another.
Two very different issues are commonly confused here. Without a doubt, tremendous, earth-shaking power surges through particular nodes of the social network. But this power does not necessarily equate to control. (I can conduct a lightning bolt without having much say about where it goes next.)
Where the effective control does come from, and how we can make it a matter for our own conscious exercise, are the urgent questions of our day.
Arguing about privacy
---------------------

[Phil Agre, who runs the distinguished Red Rock Eater News Service (RRE), picked up and redistributed NF #29, with part 2 of the privacy series. He prefaced the newsletter with the following criticisms. SLT]
I remain distressed by the (apparently) rapidly spreading notion of radical transparency -- not just predicting a world in which all privacy has been lost due to technological innovation, but actively embracing such a world. Several people have written to me with defenses of the idea. Some of these arguments are -- I'm sorry -- wanton sophistry. Others, however, are not.

Steve Talbott's essay below is not a defense of radical transparency -- at least that's not how I interpret it. I disagree pretty thoroughly with it, but at least he is putting privacy issues in the context of real communities and the values of democracy.

I have several complaints. First, it seems to me that he confuses privacy with secrecy. These are different ideas. Secrecy is the condition in which someone refuses to disclose certain information to certain parties. Privacy is the condition under which an individual has the power to choose what information to disclose to whom. When privacy is equated with secrecy, it begins to seem as though privacy corrodes social values. It is quite possible, however, to have a society with high levels of privacy and low levels of secrecy, depending on the choices that people make.

Some of the proponents of radical transparency argue that people are inherently secretive, and that radical transparency would reverse some of the consequences of this failing. I vehemently disagree with this idea. It is plausible (if not empirically proven) that people who consume excessive amounts of xenophobia, violence, and lurid crime reporting on television may withdraw from involvement in the community. But if that is true, I cannot imagine that the underlying cultural problem will be solved by forcing those people to disclose personal information. For Steve, I gather, the problem does not arise in the first place. I am struck, however, by the unhappy implications of a confusion between privacy and secrecy.
As soon as privacy is set up as the enemy of social cohesion, consequences follow that are, in my opinion, authoritarian.

I see further problems as well. It is one thing to analyze the privacy issues that arise between individuals, or between individuals and small, powerless institutions such as corner stores. But such analyses most certainly do not carry over to the more commonly debated -- and in my personal opinion more important -- privacy issues that arise between individuals and large, powerful institutions such as data-intensive marketing organizations or the government.

Finally, I believe that Steve is wrong to say that pseudonyms cannot be traced back to individuals. Digital pseudonym schemes can readily be designed to provide virtually any type of conditional traceability one wishes. It is a simple matter -- technically if not institutionally -- to establish "identity escrow" mechanisms that allow an individual's identity to be protected unless, let us say, someone produces a court order to release it.

Once we let go of the easy equation of privacy with secrecy, it becomes much easier to explore the vast middle ground within which the values of individual autonomy that lie behind concerns for privacy can be reconciled with the values of social solidarity and social order that have often lain behind arguments against privacy.
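The "identity escrow" idea -- pseudonyms whose link to a real identity can be disclosed only under stated conditions, such as a court order -- can be sketched in miniature. The class and method names below are hypothetical illustrations, not any deployed scheme; a real design would rest on cryptographic credentials rather than a trusted lookup table held by one party.

```python
# A toy sketch of "identity escrow" with conditional traceability:
# customers transact under pseudonyms, while a separate escrow
# authority holds the pseudonym-to-identity mapping and releases it
# only upon a (here simulated) court order.
import secrets

class EscrowAuthority:
    def __init__(self):
        self._mapping = {}  # pseudonym -> real identity

    def register(self, identity):
        """Issue a fresh random pseudonym and escrow the real identity."""
        pseudonym = "anon-" + secrets.token_hex(4)
        self._mapping[pseudonym] = identity
        return pseudonym

    def reveal(self, pseudonym, court_order=False):
        """Conditional traceability: disclose only under a court order."""
        if not court_order:
            raise PermissionError("identity release requires a court order")
        return self._mapping[pseudonym]

authority = EscrowAuthority()
alias = authority.register("Jane Doe")
# Merchants and marketers see only the opaque pseudonym; without a
# court order, reveal() refuses, so the identity stays protected.
```

The point of the sketch is only that "traceable" and "anonymous" are not the two sole options: the disclosure condition is a design parameter, not a fact of nature.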
Response to Agre
----------------

I appreciate your re-posting part 2 of my privacy article--and doubly appreciate your taking the time to offer some critical comments. I'll be interested to learn more about how you "disagree pretty thoroughly" with what I have to say. Personally, I've long thought your work on the technical aspects of privacy to be exceptional among what one finds on the Net, and I've always been drawn to the fact that you leaven your discussion with an awareness of larger social issues. I'd like to think that we may not be quite as far apart as you surmise....
I do admit to being just a little disturbed that you introduced your criticisms of my piece by expressing your distress over a view that you (rightly) acknowledged I don't hold, and then interleaved criticisms of my ideas with criticisms of that other view. (I mean, of course, the view that we should all be "radically transparent.") A lot of room for confusion there! But at least you did make an explicit distinction between the two views, for which I am thankful.
More substantively, I certainly hope I'm not confusing privacy with secrecy, since that's pretty close to what I thought I was criticizing others for doing! Nor would I want to suggest that secrecy, as such, is "the enemy of social cohesion." But I do think certain balances are crucial. Here's how I might state my central point:
Healthy public and private spheres exist only by virtue of each other, in a complex and delicate balance. Where this balance is falling apart--and computer technology can be a contributing factor here--it is natural and almost unavoidable for people to begin focusing more on protection from each other, and on a conception of "privacy" that is something more like anonymity (= secrecy?).

I find myself hoping that you might be able to sit within that context at least half comfortably, while no doubt adding your own valuable twist to things!
In the global information system we have little choice but to erect powerful mechanisms to protect data from the intrusions that might occur anywhere, anytime, anyhow. But we should remain vividly aware that these new mechanisms are a long way from the reality of a meaningful privacy. They may be necessary, but they also erode the critical public/private ecology unless we consciously counterbalance every such mechanism with a strengthening of the human exchange that occurs outside the mechanisms.
Just one other thing. It was not my point about "digital pseudonyms" that they could only be used one way. I was simply offering an example based on one use--the same use I took you to be describing when you wrote in "Beyond the Mirror World" (draft chapter, 1 June 1996):
Troubles arise immediately when organizational relationships are not confined to digital media. Customer service telephone numbers, for example, have typically required individuals to identify themselves, and telephone interactions based on pseudonyms will inevitably be clumsy. Customers who enjoy the record-keeping benefits of periodic statements (from credit cards, banks, the phone company, and now automatic toll-collection systems) will require some way of obtaining such statements without disclosing their name and mailing address.

In any case, the particular definition of "digital pseudonyms" doesn't matter to my general point: so far as we transfer social functions to the world of data transactions, and all the more as these transactions become more anonymous, to that degree we've transferred social functions out of the sphere where the sort of public/private balance I referred to can readily be nurtured. (As to our dealings with large, powerful, and threatening organizations: the very existence of these organizations, to my mind, reflects the choices we've been making as consumers and employees in favor of impersonal transactions over human transactions. I'll have more to say about this in part 3 of my article.)
Incidentally, nice chapter on Mirror Worlds. I know of no better place to refer people who are looking into privacy issues than to your stuff. I hope you will keep up the work.
About this newsletter
---------------------

Copyright 1996 by The Nature Institute. You may redistribute this newsletter for noncommercial purposes. You may also redistribute individual articles in their entirety, provided the NetFuture url and this paragraph are attached.
NetFuture is supported by freely given reader contributions, and could not survive without them. For details and special offers, see http://netfuture.org/support.html .
Current and past issues of NetFuture are available on the Web:

    http://netfuture.org/
To subscribe or unsubscribe to NetFuture:
    http://netfuture.org/subscribe.html

Steve Talbott :: NetFuture #32 :: November 10, 1996