                                     NETFUTURE
                        Technology and Human Responsibility
    Issue #100     A Publication of The Nature Institute       January 6, 2000
              Editor:  Stephen L. Talbott (stevet@netfuture.org)
                      On the Web: http://www.netfuture.org/
         You may redistribute this newsletter for noncommercial purposes.
    NetFuture is a reader-supported publication.
    Editor's Note
    Quotes and Provocations
       Who Is on the Receiving End of the Genetic Engineer's Power?
       Of Gadgets and Real Needs
    The Trouble with Ubiquitous Technology Pushers (Part 1) (Stephen L. Talbott)
       or: Why We'd Be Better Off without the MIT Media Lab
       An Engineer's Lament (Gintas Jazbutis)
       What a Computer Can and Cannot Supply (Shyam Oberoi)
       Computing Should Relate to Life (Gail Campana)
    About this newsletter
                                  EDITOR'S NOTE
    I continue to be hopelessly behind on email -- and I mistakenly deleted
    one batch.  If you don't hear a reply to your inquiry in the next week or
    so, or are just wondering, feel free to re-send.
    Some developments relating to NetFuture:
    ** The Alliance for Childhood's draft statement on Technology Literacy
    ("Four Guiding Principles for Educators and Parents") published in NF #99
    has stirred some things up.  The New York Times Online ran a nice story
    on the statement, written by NetFuture reader Pamela Mendels.  Also, the
    Christian Science Monitor put out a feature story entitled, "In World of
    High Tech, Everyone Is an Island".  NetFuture reader Paul Van Slambrouck,
    the Monitor's Silicon Valley correspondent, authored the story, which
    cited both the Alliance and my own work at some length.
    For pointers to these articles, as well as to the statement on Technology
    Literacy and to responses from policy analyst Michael Goldhaber, columnist
    Gary Chapman and others, see the Alliance-related web pages I've set up:
    ** Also, NetFuture reader Dolores Brien expertly led me through an
    interview, which is now featured on the C. G. Jung website at
    http://www.cgjung.com/jpculture.html.  [Added later: the URL is now
    http://natureinstitute.org/txt/st/jung.htm.]  I was very pleased with the
    result, which summarizes some of my primary concerns about our culture's
    abdication of consciousness in the presence of digital technologies.
    In the next issue I'll mention some websites that are making use of
    NetFuture content in one way or another.  Let me hear from you if your
    site is reprinting, translating, or otherwise drawing upon NetFuture.
                             QUOTES AND PROVOCATIONS
    Who Is on the Receiving End of the Genetic Engineer's Power?
    C. S. Lewis wrote in The Abolition of Man several decades ago that, in the
    battle for mastery over nature, "there neither is nor can be any simple
    increase of power on Man's side.  Each new power won by man is a power
    over man as well.  Each advance leaves him weaker as well as stronger.
    In every victory, besides being the general who triumphs, he is also the
    prisoner who follows the triumphal car."
    This simple and disturbing truth has yet to find its proper place in the
    public dialog about genetic engineering, which has focused mostly on the
    (very real) likelihood of accident and miscalculation.  But Lewis' concern
    is the more fundamental and inescapable one.  In his own blunt terms,
       If any one age really attains, by eugenics and scientific education,
       the power to make its descendants what it pleases, all men who live
       after it are the patients of that power.  They are weaker, not
       stronger.
    I don't think you can say, however, that one person's power over another
    is inherently evil.  A mother has power over her child, an airline pilot
    has power over the passengers, and anyone who cares about community knows
    that we are all dependent upon each other.  Some form of power is
    necessary if we are to act in the world at all.
    Everything depends, then, on the values with which we exercise our power.
    The traditional wisdom has it that the only healthy power is self-
    abnegating power, devoted to the service of others.  This is power turned
    inside out and transformed into love.  Those who are acted on by such
    power are not made weaker, but stronger.
    The cultural streams from which this wisdom has flowed are alien, not
    necessarily to the genetic engineers themselves, but certainly to the
    modern discipline of genetic engineering, founded as it is upon the habit
    of viewing the organism as a machine.  The patients of the discipline can
    therefore expect to be treated like machines -- and more and more made
    into machines.
    Machines don't suffer, of course, and we do hear much talk of deliverance
    from suffering.  But if you listen carefully you will notice that the
    suffering to be eradicated belongs more to the wielders of power than to
    its patients.  The parent cannot bear the thought of a "deformed" child,
    nor can the engineer tolerate standing by helplessly.  As most "deformed",
    "terminally ill", and "catastrophically suffering" individuals can tell
    you, it is above all the well-off who cannot face suffering.  They are the
    ones who most readily forget that the truest aim of life on earth is not
    merely to be rid of suffering, but to redeem it, to bear its fruits, to
    escape it by achieving whatever it is that our own life is most forcefully
    asking of us.
    That there is, in any significant sense, a life to do this asking is, of
    course, a premise scarcely informing the apparatus of genetic engineering
    as we have it today.  For the asking requires that there be an antecedent
    whole -- a being -- presupposed by all the "mechanisms" of our physical
    organism.  Needless to say, this kind of language is anathema in the
    engineer's laboratory.  I'm reminded of the world-famous artificial
    intelligence researcher at MIT who, taking sarcastic issue with my use of
    the word "human" in The Future Does Not Compute, wrote:
       Can I presume he's in favor of religion or something?  Isn't that the
       most usual "human" solution to problems?
    I think it's fair to say that those who are most eager to take effective
    charge of human destiny are also those most reluctant to glimpse any
    coherent or respectable answer to the question, "What does it mean to be
    human?"  So their enterprise unavoidably becomes arbitrary, which
    explains, among other things, their casual attitude toward gene transfers.
    It is not that we must refuse to change, develop, evolve.  The power to
    transform ourselves is close to our essence.  But there is all the
    difference in the world between mere arbitrary change and change that
    proceeds according to the inner necessities of our own being.  And there
    is all the difference in the world between the diabolical power that
    imposes change upon others arbitrarily and from without (because it
    recognizes no inner being worth consulting), and the power of love that
    both recognizes and serves the other.
    Which sort of power drives the world's genetic engineering laboratories?
    Don't take my word for it.  Why not ask the would-be engineers of
    humanity, before handing them the keys of power, "What does it mean to you
    to be human?"  We should consider their answers carefully, for these
    answers will tell us, directly or indirectly, the significance of the
    engineering project.  And if their answer is that it doesn't mean much of
    anything at all to be human, well then, what does it matter whether or not
    we suffer their messianic interventions?
    Leon R. Kass cites the Lewis passage in his valuable article on "The Moral
    Meaning of Genetic Technology" in Commentary (September, 1999).
    Related articles:
    ** "What Does It Mean to Be a Sloth?" by Craig Holdrege in NF #97.  How
       can one begin to think about the distinctive character, or being,
       of an organism?  What is the antecedent unity that guarantees,
       rather than results from, the various "mechanisms" constituting the
       organism?  Of course, the person who is determined to see nothing
       will see nothing.  But this article can help those who are willing
       to see begin to do some disciplined looking.
    ** "Is Genetic Engineering Natural?" in NF #75.  Are the genetic engineers
       doing nothing more than we've always done with our various breeding
       techniques?  Also see the follow-up article, "Loosing Genetic Restraints",
       in NF #77.
    ** For other articles, see "Genetic engineering" in NetFuture's topical index.
    Of Gadgets and Real Needs
    In the ongoing discussion of ubiquitous computing (NF #94, #95, #98),
    Langdon Winner has noted "the utter disconnect between the supposed wants
    and needs studied in places like the MIT Media Lab and ones painfully
    evident in the human community at large" (NF #95).  He invites us to
    compare the needs summarized in the yearly United Nations Human
    Development Report with the ones that galvanize those "high-tech wizards
    in Cambridge, Palo Alto and elsewhere" who worry that toilets, toasters,
    and doors aren't intelligent enough.
    Alan Wexelblat, who is one of the Cambridge high-tech wizards (he works in
    the Intelligent Agents Group of the MIT Media Lab), has rejected Winner's
    invitation because
       A response of the form "well xxx does nothing to address deep world
       problems such as hunger, overpopulation, etc." is unanswerable.  It's
       true, but completely irrelevant to the discussion.  (NF #98)
    Completely irrelevant?  This is a stunning claim.  Certainly it's
    true that no one subjects every daily activity to a minute "worthiness
    analysis", checking it against the great global challenges of the day.
    But surely it is just as true that, in those reflective moments when we do
    assess our own activities, we want them to prove worthy.  We want them to
    weigh positively against the most pressing needs around us.  We may
    healthily avoid becoming compulsive or too narrowly focused on questions
    of value, but Wexelblat seems to be saying that we should dismiss these
    questions even from our more serious reflection.  And such reflection, of
    course, is what NetFuture is about.
    It's hard to believe that Wexelblat would hold consistently to his claim
    that "deep world problems" are irrelevant to the assessment of our
    technological investments.  And, in fact, he goes on to say that Winner's
    approach
       invites people to put themselves in the position of claiming they don't
       care about such problems.  Rather like folk in Congress making speeches
       about child pornography -- no one is going to stand up and defend the
       exploitation of children, but it misses the point that free speech
       rights are at issue.
    Here he implies that we should indeed care about world problems of the
    sort Winner mentions; it's just that other issues take priority.
    Unfortunately, he doesn't say what these other issues are.  And if he
    should ever do so, wouldn't that be to identify them as the more pressing
    issues?  But it was exactly the appeal to more pressing issues that
    Wexelblat disparaged in Winner:
       I would venture to guess that well over 90% of what goes on in the
       world at large fails to focus on deep significant problems.  If that's
       what's on your mind, then you and I are having such vastly different
       conversations that it's not worth continuing.
    So I'm not sure what to make of Wexelblat's position.  But I also feel
    considerable sympathy for his frustration.  I honestly don't think that I
    or many other technology critics have formulated a fully adequate response
    to the enthusiasms and visions of progress that excite the legions of
    engineers working at places like the Media Lab.  The questions are not
    easy ones.  Did you notice that none of the letter writers responding to
    Wexelblat took up his comment about air bags having sensors to detect
    whether a child is sitting in the seat?  And there is, I think, an almost
    unshakable conviction among many computer engineers that all criticisms of
    this or that silly device simply aren't relevant.  The skills and tools we
    develop today will very likely find their full justification later; that's
    the nature of progress.
    Unfortunately, this is rarely a considered judgment.  For all the reams of
    news copy the Media Lab generates each year, it's hard to find in this
    prolific output any coherent attempt to place the lab's inventive thrust
    in a context where its social value could be assessed.  But, of course,
    this doesn't hinder the rest of us.  In this issue I begin a series of
    brief essays that are my effort to flesh out a context for evaluating the
    various claims about ubiquitous computing.
              THE TROUBLE WITH UBIQUITOUS TECHNOLOGY PUSHERS (PART 1)
               or: Why We'd Be Better Off without the MIT Media Lab
                                Stephen L. Talbott
    The idea has seized our imaginations with all the force of a logical
    necessity.  In fact, you could almost say that the idea is the idea of
    logical necessity -- the necessity of embedding little bits of silicon logic
    in everything around us.  What was once the feverish dream of spooks and
    spies -- to plant a "bug" in every object -- has been enlarged and re-shaped
    into the millennial dream of ubiquitous computing.  In this new dream,
    of course, the idea of a bug in every object carries various unpleasant
    overtones.  But there are also overtones in the larger and better-promoted
    notion of ubiquitous computing, despite the fact that our ears are not
    yet attuned to them.
    Why Not Omnipotence?
    I suppose Bill Gates' networked house is the reigning emblem of ubiquitous
    computing.  When the door knows who is entering the room and communicates
    this information to the multimedia system, the background music and the
    images on the walls can be adjusted to suit the visitor's tastes.  When
    the car and garage talk to each other, the garage door can open
    automatically whenever the car approaches.
    Once your mind gets to playing with such scenarios -- and there are plenty
    of people of good will at places like the MIT Media Lab and Xerox PARC who
    are playing very seriously with them -- the unlimited possibilities crowd
    in upon you, spawning visions of a future where all things stand ready to
    serve our omnipotence.  Refrigerators that tell the grocery shopper what
    is in short supply, shopping carts that communicate with products on the
    shelves, toilets that assay their clients' health, clothes that network
    us, kitchen shelves that make omelets, smart cards that record all our
    medical data, cars that know where they're going -- clearly we can proceed
    down this road as far and fast as we wish.
    And why shouldn't we move quickly?  Why shouldn't we welcome innovation
    and technical progress without hesitation?  I have done enough computer
    programming to recognize the inwardly compelling force of the knowledge
    that I can give myself crisp new capabilities.  It is hard to prefer not
    having a particular capability, whatever it might be, over having it.
    Moreover, I'm convinced that to say "we should not have technical
    capability X" is a dead-end argument.  It's the kind of argument that
    makes the proponents of ubiquitous computing conclude, with some
    justification, that you are simply against progress.  You can only finally
    assess a tool in its context of use, so that to pronounce the tool
    intrinsically undesirable would require an assessment of every currently
    possible or conceivable context.  You just can't do it -- and if you try,
    you underestimate the fertile, unpredictable winds of human creativity.
    But this cuts both ways.  You also cannot pronounce a tool desirable (or
    worth the investment of substantial resources) apart from a context of
    desirability.  Things are desirable only insofar as a matrix of needs,
    capacities, yearnings, practical constraints, and wise judgments confirms
    them.  This leads me to my first complaint against the ubiquitous
    technology pushers.
    Asking the Wrong Questions
    When we are asked to accept or reject a particular bit of technology --
    and, more broadly, when we are asked to embrace or condemn ubiquitous
    computing as a defining feature of the coming century -- we should flatly
    refuse the invitation.  Technologies as such are the wrong kinds of things
    to embrace or condemn.  To focus our judgments on them is to mistake what
    is empty for something of value.
    Take, for example, the questions we face in the classroom.  They are
    educational questions.  They have to do, in the first place, with the
    nature, destiny, and capacities of the child.  Such questions are always
    deeply contextual.  They arise from a consideration of this child in this
    family in this community, against the backdrop of this culture and this
    physical environment.
    It's one thing if, deeply immersed in this educational context, pursuing
    the child's education, we come up against a gap, a shortfall, a felt need,
    and if, casting about for a solution, we conclude: The computer might
    offer the best way to fulfill this particular need.  But it's quite
    another thing to begin by assuming that the computer is important for
    education and then to ask the backward and destructive question, "How can
    we use the computer in the classroom?"  This is to deprive our inquiry of
    its educational focus and to invite the reduction of educational questions
    to merely technical ones -- a type of reduction that is the reigning
    temptation of our age.  It leads us, for example, to reconceive learning
    as information transfer -- fact shoveling.
    Spurred by this backward thinking, we've felt compelled to spend billions
    of dollars wiring schools, retraining (or dismissing) teachers, hiring
    support staff, buying and updating software, rewriting job descriptions,
    and designing a new curriculum.  Then Secretary of Education Richard Riley
    comes along after the fact and says, Oh, by the way,
       We have a great responsibility .... We must show that [all this
       expenditure] really makes a difference in the classroom.  (Education
       Week on the Web, May 14, 1997, via Edupage)
    The same concerns arise in the workplace.  Why do we work?  Surely it is,
    in the first place, in order to discover and carry out our human vocations
    and to achieve something of value for society.  What shape this productive
    effort might take -- and what tools might be embraced healthily -- can
    follow only from the most profound assessment of the needs and capacities
    of both the individual and society.
    Yet such assessment is increasingly forgotten as social "progress" and
    vocational decisions are handed to us by automatic, technology-driven
    processes.  It is no accident that we see today a growing consensus among
    entrepreneurs that all considerations of human value should be jettisoned
    from the business enterprise as such.  Seek first the Kingdom of
    Profitability, we are advised -- that is, seek what can be perfectly
    calculated by a machine -- and all else will somehow be added to you.
    Here again is the reduction of real questions to one-dimensional,
    abstract, decontextualized, technical ones.  The availability of the
    precise, computational techniques of accounting has encouraged us toward
    a crazy reversal, whereby the healthy discipline of profitability no
    longer serves us in work that we independently choose as worthy and
    fulfilling, but rather we choose our work according to its profitability.
    It is always easier to make our choices according to rules that can be
    clearcut, precise, and automatic -- the kind of rules that can be embedded
    in ubiquitous silicon -- than to ask what sort of human beings we want to
    become.  We can answer the latter question only through our own struggling
    and suffering -- that is, only by embedding ourselves in real-world
    contexts.
    Is Technology Really Context-free?
    So my first complaint is this:  the most visible pronouncements in favor
    of ubiquitous computing take the form of huge investments in places like
    the MIT Media Lab where the whole aim is to pursue new technologies out of
    context, as if they were inherently desirable.  This mistaking of mere
    technical capacity for what really matters is the one thing guaranteed to
    make the new inventions undesirable.
    The healthy way to proceed would be to concern ourselves with this or that
    activity in its fullest context -- and then, in the midst of the activity,
    ask ourselves how its meaning might be deepened, its purpose more
    satisfyingly fulfilled.  Only in that meditation can we begin to sense
    which technologies might be introduced in appropriate ways and which would
    be harmful.
    If the researchers at the Media Lab pursue their work via such immersion
    in problem contexts -- that is, by exploring significant questions as a
    basis for seeking answers -- they've done a miserable job of communicating
    the fact to the rest of us.  What we actually receive from them (via the
    news media) is a steady stream of exclamations about the wonders of this
    or that technical capability.  Typical, so far as I can tell, is the fact,
    reported in the New York Times, that one of the Media Lab staffers most
    concerned to render kitchen appliances intelligent is "a bachelor who
    rarely uses his kitchen".  Is it such people who will point us toward the
    realization of the kitchen's highest and most humane potentials?
    A technology-focused consciousness -- and you could fairly say that our
    society is becoming obsessively technology-focused -- is a consciousness
    always verging upon emptiness.  It is a consciousness whose problems are
    purely formal or technical, with precisely definable solutions.  They can
    be precisely defined because they lack context, they have no significance
    of their own.
    Now, it needs adding that no technology perfectly achieves this "ideal" of
    emptiness and self-containment.  As I have pointed out before, a complex
    device like the computer evolves historically and has numerous tendencies
    of ours, numerous habits and contexts of use, built into it.  This is why
    you can never say that such devices are neutral in their implications
    for society.
    And, of course, its non-neutrality is what enables us to assess a
    technology:  does it fit into and serve this particular context or not?
    So when I speak of "technology as such", I am to some degree falsifying
    things.  But the point is that this is the very falsification the
    ubiquitous technology pushers are encouraging through their strongly
    decontextualized celebration of ... technology as such.
    This makes a certain self-deception easy, whereby new technical capacities
    are much too quickly assumed to represent the answers to problems.  And it
    diverts massive social resources into the production of technologies that,
    because they will be injected into real contexts with alien force, will
    certainly prove socially destructive.
    That, in fact, will be the argument of the next installment.  More widely,
    the remaining parts of this essay will deal with issues such as these:
       When the ubiquitous technology pushers do claim social benefit for
       their inventions, how well-grounded are those claims?
       Why do time- and labor-saving devices leave us more pressured and
       hurried than ever?
       What are we really aiming for when we try to embed intelligence in all
       the objects around us?
    Related articles:
    ** Go to part 2 of this article.
    ** Go to part 3 of this article.
    ** "The Fascination with Ubiquitous Control" in NF #95.  A kind of
       preparatory article for this current series.
    ** "How Technology Co-opted the Good (Parts 1 and 2)" in NF #64 and #65.
       This is a review of Albert Borgmann's crucially important book, Technology
       and the Character of Contemporary Life.  Borgmann's idea of a "focal
       practice" captures beautifully what I have merely hinted at here:  how
       meaningful contexts can guide our thinking about technology.  The second
       part of the review is accessible from the first part.
    An Engineer's Lament
    From:  Gintas Jazbutis (gintas.jazbutis@sdrc.com)
    I often wonder how I could get a PhD in engineering from Georgia Tech
    while not once having to think about the history and philosophy of
    technology and its impact on society.  There are lots of us out here just
    churning away on new technology without once thinking about its
    implications.  We do because we can -- scary...
    Gintas Jazbutis
    Metaphase Deployment Competence Center
    Bellevue, WA
    What a Computer Can and Cannot Supply
    Response to:  "We'll Get What We Choose" (NF #98)
    From:  Shyam Oberoi (shyam.oberoi@thortech.com)
    It's no exaggeration, I think, to suggest that in order for the great
    dream of "ubiquitous computing" to be fully realized (that is, for it
    to be, in the true sense of the word, ubiquitous), one must first agree
    that every aspect of a human existence must be able to be reduced to a
    series of repeatable tasks and transactions.  Certainly, Mr. Leavitt's
    wish-list would seem to suggest that his life is precisely that.
    Should anyone be surprised, then, when an existence is defined primarily
    through the purchase and consumption of goods, that that person would
    consider it essential to his well-being that those goods be more readily
    available and more speedily delivered?
    Curiously absent from this list is any sort of longing for those
    experiences that the computer cannot easily supply.  Does Mr. Leavitt want
    to play the piano?  Read a novel?  Go to a museum?  He will probably argue
    that he hasn't the time anymore (if he ever had) for these quaint
    pastimes, but he would do well to remember Goethe:  "one ought, every day
    at least, to hear a little song, read a good poem, see a fine picture,
    and, if it were possible, to speak a few reasonable words."  And, while we
    are certainly drawing close to that happy day when computers will compose
    symphonies and write sonnets indistinguishable (to most) from the former,
    old-fashioned productions of their human counterparts, we should not
    forget that the pleasure that some of us still take in Goethe's arts has
    nothing to do with the ease with which they can be transmitted or
    experienced.  If anything, the very opposite:  we will seek out difficult
    works of art for that obdurate difficulty, for that complexity that cannot
    be easily assimilated, for that ineffable experience in the very physical
    and mental act of reading a book or playing a piano that cannot (and
    probably will not ever) be replicated or facilitated by a computer.
    Computing Should Relate to Life
    Response to:  "We'll Get What We Choose" (NF #98)
    From:  Gail Campana (Gail.Campana@ht.msu.edu)
    In Thomas Leavitt's reply to you, I was struck by the fact that most of
    his support for computer technology revolves around buying things; with
    only a couple of exceptions, his examples were about spending money and
    "consuming."  I don't see much in his reply that has anything to do with
    living one's life in a better or more humane way.  I do not see anything
    which shows how technology will do anything to make us wiser, gentler,
    more peaceful or more happy.  I do see that technology is about managing
    money in our vaunted "free-trade" neo-liberal world where the making of
    money at the expense of most of the world's population and environment has
    been raised to the status once held by worship of a supreme being.
    You were also taken to task by a respondent who said that most things do
    not have to do with world problems, such as reducing hunger, and as such,
    you were simply not discussing technology in any meaningful way.  Hence,
    no discussion about technology with you was really possible.  That is
    telling.  Real world problems are, after all, the context in which
    computing happens.  I do not want to underestimate the ways in which
    "technology" has made life easier and healthier than it was 500 years ago.
    For those of us in the US, particularly.  But the lack of any coherent
    discussion about life, its meaning and what we should be doing as humans
    to fulfill that meaning, continues to be a problem for "advanced"
    technological societies which must grapple with wholesale emotional,
    psychological and spiritual emptiness.  Computing, as far as I can tell,
    exacerbates this for many people ... and heightens the separations which
    isolate so many.  Because as noted in the first paragraph, computing is
    increasingly about managing material superfluity.
                              ABOUT THIS NEWSLETTER
    Copyright 2000 by The Nature Institute.  You may redistribute this
    newsletter for noncommercial purposes.  You may also redistribute
    individual articles in their entirety, provided the NetFuture url and this
    paragraph are attached.
    NetFuture is supported by freely given reader contributions, and could not
    survive without them.  For details and special offers, see
    http://netfuture.org/support.html .
    Current and past issues of NetFuture are available on the Web:
    To subscribe or unsubscribe to NetFuture:
    Steve Talbott :: NetFuture #100 :: January 6, 2000