                                   NETFUTURE
                        Technology and Human Responsibility
    Issue #138                                                November 7, 2002
                     A Publication of The Nature Institute
               Editor:  Stephen L. Talbott (stevet@netfuture.org)
                      On the Web: http://www.netfuture.org/
         You may redistribute this newsletter for noncommercial purposes.
    Can we take responsibility for technology, or must we sleepwalk
    in submission to its inevitabilities?  NetFuture is a voice for
    responsibility.  It depends on the generosity of those who support
    its goals.  To make a contribution, click here.
    Quotes and Provocations
       High-Profile Doubts about Classroom Computers
       Ellen Ullman on Artificial Unintelligibility
    Mindlessness and the Brain (Stephen L. Talbott)
       Have you been nice to your two hemispheres today?
     Technology Does Not Make Societies More Vulnerable (Michael Goldhaber)
     Response to Goldhaber (Langdon Winner)
    About this newsletter
                             QUOTES AND PROVOCATIONS
    High-Profile Doubts about Classroom Computers
    That eminent news weekly, the Economist (Oct. 26, 2002), has now
    pronounced editorially -- and emphatically -- on the billions of dollars
    spent in order to "clutter classrooms with terminals and keyboards".
    These billions, the editors note, were "spent on a hunch", and the hunch
    turns out to have no apparent justification.  In an accompanying story
    about a new study in Israel (which purports to show, among other things,
    "a consistently negative relation" between computer use and math test
    scores for fourth graders), the magazine writes:
       Back in 1922, Thomas Edison predicted that "the motion picture is
       destined to revolutionize our educational system and ... in a few years
       it will supplant largely, if not entirely, the use of textbooks".
       Well, we all make mistakes.  But at least Edison did not squander vast
       quantities of public money on installing cinema screens in schools
       around the country.
    Actually, I wouldn't trust that Israeli study any further than I trust all
    the other studies supposedly elucidating the role of the computer in the
    classroom.  Social science research, in general, remains primitive when it
    is undisciplined hackwork, and even more primitive when it tries to ape
    the "hard" sciences.  And, in any case, the proponents of wired classrooms
    will hardly be fazed by the latest report.  They will doubtless respond
    (with some reason) that, in order to get a really good education out of a
    computer, you can't just drop the machine into a conventional classroom.
    The computer imposes its own distinctive requirements upon the entire
    educational context, so that everything needs to change.  This
    takes time.
    But if this is true, then what in the world were we thinking when we
    immediately decided that, given the availability of the networked PC, it
    was urgently required of us to re-found education upon program logic, web
    access, and email?  Wouldn't the natural thing have been to look into that
    "total change" a little bit before committing huge resources to the
    computerization of education?
    Ellen Ullman on Artificial Unintelligibility
    A few notes from my reading of Ellen Ullman's "Programming the Post-Human"
    in the October issue of Harper's Magazine:
    ** Struck by how radically computers have infected the natural sciences,
    Ullman quotes Lucia Jacobs, a Berkeley professor of psychology studying
    squirrel behavior:
       I am an ethologist and know virtually nothing about computers,
       simulations, programming, mathematical concepts, or logic.  But the
       research is pulling me right into the middle of it.
    Jacobs is still interested in squirrels, but today she also works with
    robots.  "It is now standard scientific practice", writes Ullman, "to
    study machine simulations as if they were indeed chipmunks, or squirrels
    .... Psychology and cognitive science -- and indeed biology -- are thus
    poised to become, in essence, branches of cybernetics".
    ** The faith buttressing this work is what Ullman refers to as
    "engineering empiricism".  It is the conviction, long prevalent within
    artificial intelligence, that we should abandon "sterile philosophizing"
    and just get down to the practical business of building minds.  "You don't
    have to understand thought to make a mind", as computer scientist Douglas
    Hofstadter puts it.  And Rodney Brooks, director of MIT's artificial
    intelligence laboratory, told Ullman, "The definition of life is hard.
    You could spend five hundred years thinking about it or spend a few years
    doing it".  Breathtaking statements.  What we see here, in Ullman's apt
    phrase, is "anti-intellectualism in search of the intellect" -- a mindless
    search if ever there was one.
    It rarely seems to occur to researchers like Hofstadter and Brooks that
    our powers of making must be entirely distinct from any of the
    things we have made or could make, or that a little intelligent
    understanding might reveal gross misconceptions at the heart of the entire
    AI project.  Moreover, the irony in their anti-intellectualism, as Ullman
    brings out, is that years of over-expectation and disappointment have now
    driven researchers back to the deeper questions -- for example, What is
    consciousness?  Pick up many journals of cognitive science and you will
    find endless discourses on this question -- discourses, incidentally,
    whose philosophical naivete and conceptual inadequacy will make you yearn
    for the good old days of concise medieval hairsplitting.
    ** Ullman's primary concern in the essay is to combat the disembodied view
    of life and intelligence that takes hold wherever the computer model
    prevails.  The logical extreme, she observes, is "pure software, unsullied
    by exigencies of carbon atoms, bodies, fuel, gravity, heat, or any other
    messy concern of either soft-tissued or metal-bodied creatures".
     But, she counters, we need more than abstraction.  The organism's
     intelligence
       is integral to the substrate from which it arose, not something that
       can be taken off and placed elsewhere.  We drag along inside us the
       brains of reptiles, the tails of tadpoles, the DNA of fungi and mice;
       our cells are permuted paramecia; our salty blood is what's left of our
       birth in the sea.  Genetically, we are barely more than roundworms.
       Evolution, that sloppy programmer, has seen fit to create us as a wild
       amalgam of everything that came before us:  the whole history of life
       on Earth lives on, written in our bodies.  And who is to say which
       piece of this history can be excised, separated, deemed "useless" as an
       essential part of our nature and being?
     It is surely right to recognize the importance of embodied existence
    for our mental life.  But there is a problem here that Ullman does not
    address.  The abstract, softwarish view of mentality is the conclusion and
    crown of a long development in science and technology.  This development
    is now presenting us with a strange outcome:  having focused single-
    mindedly upon what they think of as the "solid, physical world",
    scientists have found themselves sacrificing, not only mentality, but even
    materiality itself.  That is, "hard" science has proven itself unable, in
    the last analysis, to reckon with anything but numbers, algorithms, and
    other airy abstractions.
    So it does Ullman little good to appeal to "the whole history of life on
    earth ... written in our bodies" -- not if, when we turn to examine these
    bodies, we find ourselves driven to reconceive them as manufactured from
    the disembodied equations of particle physics, or as machines governed
    computationally by genetic and evolutionary algorithms.  We are right back
    with all those software abstractions.  Ullman herself buys into this when
    she refers to evolution as a "programmer".
    If, as some physicists would have it, "its are bits", then there's not
    much use in saying (in the words Ullman borrows from linguists George
    Lakoff and Mark Johnson), "the very structure of reason itself comes from
    the details of our embodiment".  This is merely to say that "these bits
    come from those bits".  The body behind all this talk has disappeared,
    except as a kind of virtual machine capable of running software.  Reified
    algorithms take the place of real bodies.
    ** When you buy into the general drift of modern thought, as Ullman seems
    to do, the results are not always pretty.  For example, she quarrels with
    those who posit a disembodied essence of life, as if
       these messy experiences of alimentation and birth, these deepest
       biological imperatives -- stay alive, eat, create others who will stay
       alive -- were not the foundation, indeed the source, of intelligence;
       as if intelligence were not simply one of the many strategies that
       evolved to serve the striving for life.  If sentience doesn't come from
       the body's desire to live ... where else would it come from?
    But what good does it do to explain intelligence as a strategy that
    evolved to serve the striving for life, when the idea of striving already
    includes intelligence?  What good does it do to explain sentience as
    coming from "the body's desire to live", when desire is itself a form of
    sentience?  This is simply to posit the thing you are trying to explain.
    You will find a similar explanatory strategy endemic in the literature of
    cognitive science, wherever the attempt is made to explain mind in terms
    of what is conceived as wholly other than mind.  One way or another, the
    explanations smuggle in the very thing they are trying to explain, often
    in the guise of such terms as "information", "message", "code", "program",
    "algorithm", "tendency", and "pattern" -- none of which can be defined
    without assuming an unexplained thought-element.  How could it be
    otherwise when you make it your task to get mind from something that is
    supposed to be wholly devoid of mind?
    ** It is her failure to overcome this confused state of affairs, I think,
    that leads Ullman to the disappointingly weak conclusion of her essay.  In
    trying to find a positive basis for understanding the mind, she offers a
    few thoughts about how mammals form societies based on mutual recognition,
    which she takes to be the crucial thing.  (She quotes Rodney Brooks as
    saying, "None of our robots can recognize their own kind".)  And so:
       Uniqueness, individuality, specialness, is inherent to our strategy for
        living.  It's not just a trick:  there really is someone different in
        there.
    Yes, someone unique is "in there", and this is doubtless central.  But I
    don't see how Ullman has made a case in any way that clearly separates her
    view from that of the AI researchers she is criticizing.  In fact, she
    finally just throws in the towel:  human sentience is "too complex to
    understand fully by rational means, something we observe, marvel at, fear.
    In the end, we give up and call it an 'act of God'".
    That's the conclusion, I think, of someone who is standing outside her own
    consciousness, trying to understand it as an objective thing rather than
    observe her own activity of understanding from within it.  But despite its
    shortcomings on the positive side, Ellen Ullman's worthy article frames a
    vital set of issues, and is well worth checking out.
    (Thanks to Thomas Tommi for bringing Ullman's article to my attention.)
                            MINDLESSNESS AND THE BRAIN
                                Stephen L. Talbott
    You have by now most likely read dozens of science news stories playing on
    the fact that researchers can watch areas of the brain "light up" as test
    subjects perform various activities.  What the lighting up of a particular
    area means, stated more or less exhaustively, appears to be:  "something's
    going on there".  But in this field things are happening so fast that
    excited researchers can't afford to be slowed down by mere hopeless
    ignorance.  One almost suspects a psychedelic element must reside in those
    glowing, multicolored, instrument-produced images of cerebral tissue,
    since most of the news reports carry the same howling absurdities.
    Here's a typical example.  The New York Times (June 22, 1999)
    reported on research thought to confirm "a theory that the fear of pain is
    worse than the pain itself".  The confirmation lay in the fact that
    particular areas of the brain lit up during the anticipation of pain, and
    these were mostly different from, though situated close to, areas that
    usually light up during the actual experience of pain.  That's just about
    the entire substance of the story, which begins this way:
       It is a common reaction:  fear of the dentist's drill.  Now scientists
       say the feeling is not only real, but they can show just what happens
       in the brain to cause it.
    And to think that all this time I mistakenly thought my fear was caused by
    what the dentist's drill was about to do to me!  I guess I should really
    have been fearing that ominous glow in my brain.  At least I can take
    comfort in the researchers' conclusion that my fear is "real", although
    it's too bad they didn't give me, for comparison, an example of a fear
    that is not real.  It would have been nice to know which fears of
    mine weren't really there.
    It's difficult to decipher what this article (like most others of the same
    ilk) is actually trying to tell us.  But of one thing you can be sure: the
    chief scientist on the case hopes, as the article tells us, "to use this
    research to help people with chronic pain".  According to the prevailing
    canons of journalism, every science story needs such a warm and fuzzy
    benediction, suggesting how the human estate may benefit from the work.
     This is decidedly not an equal opportunity affair, however; you rarely see
     such routine statements about the risks of the research.
    On Reforming Gray Matter
    For all I know, the brain investigations may indeed lead to chemical or
    other interventions that "work" in one way or another.  But progress
    toward this end will not be aided by the acute conceptual confusions
    plaguing this kind of research.  And the human pain resulting from these
    confusions may dwarf anything experienced in the dentist's chair.
    Given a vague grasp of the fact that "we are psychosomatic organisms",
    many people -- scientists among them -- seem content to flop blithely back
    and forth between a brain vocabulary and a mental vocabulary as if there
    were no distinction between the two.  What makes this an inexcusable lack
    of discipline is the simple fact that, as these vocabularies now exist, no
    one has the slightest idea how to translate a single term of the one
    language into a term of the other.
    It's certainly true that we can correlate elements of brain physiology
    with elements of mentality.  But this fact is fully consistent with
    opposite extremes of interpretation -- consistent, that is, with the
    idea that our thinking and other mental activities somehow "arise"
    as effects of brain matter, and also with the idea that thinking
    constructs and employs the brain for its own manifestation.  However,
    which alternative we prefer is irrelevant to the point I'm making, which
    is that the sloppy shifting between different vocabularies results in
    the most shameless nonsense.
    What, after all, are we to make of references to the brain as if it were
    the stuff of mind?  Should we try to "reform" those brain tissues that
    light up when we don't want them to, perhaps admonishing them or
    administering a slap to some recalcitrant gray matter?  When researchers
    say they've found in the brain the "cause" of our fear of dentists, should
    we work to remove the cause by altering the physiological conditions
    responsible for the glow?  To be a little more topical, are we to remedy
    the "cause" of our fear of terrorists by tweaking our brains, or do we
    need to look elsewhere?
    Locating Consciousness
    We are being swamped by this illuminated-brain craziness.  A New York
    Times headline (Sep. 25, 2001) has researchers "Watching How the Brain
    Works as it Weighs a Moral Dilemma", while a science article in the
    Economist (May 25, 2002) talks about "how the brain actually makes
    decisions".  Presumably, if the brain is really doing the decision-making
    and moral weighing, then the buck stops there and all moral education
    should be in the form of chemical or surgical "instruction".
    As for another article in the Times, "Looking for That Brain Wave Called
     Love" (Oct. 28, 2000), you might think it's pure jest.  But, no, Rutgers
     anthropologist Helen Fisher has run madly infatuated test subjects
    through an MRI machine to record their brain activity.  Which could be a
    perfectly interesting thing to do, except that she seems to think she is
    investigating the nature of love.  In fact, she complains about the
    slowness of those who fail to see the value of her work:
       It's amazing how scientists don't regard depression or anxiety as a
        mystery but want to relegate romantic love to the realm of the
        supernatural.
    I'm not sure how the supernatural gets in there, except as a cheap way to
    declare her own point of view sane and rational.  But if you want to
    consign love (or depression or anxiety) to a realm offering no hope of
    meaningful and non-mysterious understanding, I can't think of any better
    way than to equate it with physiology.  Respond to your advice-seeking,
    lovesick friends by explaining how they should interact with or modify
     appropriate brain tissues, and I guarantee you'll produce a great deal of
     misery.
    Another article in the Economist (Sep. 21, 2002) tells us that
    "neuroscientists think they may have pinned down the source of out-of-the-
    body experiences".  The guilty party?  None other than the right angular
    gyrus -- and if you want to fix it, I should warn you that it is not,
    after all, near the pineal gland, but rather located above and slightly
    behind the right ear.  An odd place, perhaps, for out-of-the-body
    experiences to lurk, but if an experience has to hide out somewhere, I
    suppose that's as good a place as any.
    Harmonizing the Hemispheres
    As I was mulling over a pile of science stories like the ones mentioned
    above, I chanced upon a 1977 Owen Barfield essay.  It concerned the 1976
    Reith Lectures, entitled "Mechanics of the Mind", by neurophysiologist
    Colin Blakemore.  Back then brain hemisphere research was becoming
    popular, and so Blakemore discussed it in his lectures.  Picking up on
    this, Barfield began by noting that "if we know something about the
    physical structure of the brain, we can either make physical use of that
    knowledge (surgery, drugs, and so forth), or we can decide that
    another way of approaching our problem is more appropriate.  Let us call
    it the 'consciousness' way".
       Take the two hemispheres, for instance.  If a movement is set on foot
       for "liberating the right hemisphere", that is the imaginative, and
       relatively feminine, one (and according to the lecturer, there is such
       a movement), then the campaigners must mean by "liberation" one of two
       things -- either direct action on the brain itself, or indirect action by
       the ordinary means of agitation, argument, propaganda; by "the spread
       of ideas" in fact:  in which case no difference whatever is made by
       calling it "liberation of the right hemisphere", instead of something
       like freeing the imagination, or liberation of women.
    Barfield goes on to "doubt whether the lecturer is capable of grasping
    such an uncomfortably disjunctive proposition".  Which of the alternatives
    did Blakemore have in mind, for example, when he said,
       What we should be striving to achieve for ourselves and our brains is
       not the pampering of one hemisphere to the neglect of the other
       (whether right or left) or their independent development, but the
       marriage and harmony of the two.
    It's not that we should disavow either fork of Barfield's disjunctive
    proposition; both physical intervention and the attention of consciousness
    to its own contents have their place.  It's just that we should not
    confuse the two or refuse to be clear regarding which one we are talking
    about.  When Blakemore advocated the "marriage and harmony" of the two
    hemispheres, was he suggesting something like a surgical interweaving of
    tissues into a more artistically unified physical tapestry, or was he
    urging certain conscious disciplines?  Or was he, through lamentable
    vagueness, implying the equivalence of the two approaches despite the fact
    that subjecting yourself to a scalpel doesn't seem to be quite the same
     activity as, say, participating in the discussions of a gender sensitivity
     group.
    In the actual case, Blakemore disavowed the surgical approach as
    impossible -- and also as crude compared to direct, cultural influences
    upon consciousness.  And yet (as so often happens in these matters), he
    immediately flip-flopped.  As Barfield puts it, "Dr. Blakemore was not
    going to let a tedious bit of logical consequence stand in the way of his
    march towards a peroration".  So the scientist rose to the occasion by
    telling his audience:
       without a description of the brain, without an account of the forces
       that mould human behavior, there can never be a truly objective new
       ethic based on the needs and rights of man.
    So there we go again.  If we want an understanding of our needs and rights
    and the influences upon our behavior -- things we might once have related
    to our families and workplaces, our social institutions and personal
    experiences -- now we see that what we really needed all along was a good
     description of the brain, presumably so we can whip those tissues into
     shape.
    No wonder Barfield gives way to near-despair:
       How much longer will it all go on?  For how much longer will educated
       men go on being allured by the ignis fatuus of a "consciousness"
       accessible to physical experiment and investigation?  How much longer
       will they go on spending untiring energy in pursuit of it?
    A Koan
    Well, we've now gone on for another quarter century -- and things have
    only gotten worse.  I will not bore you further with recitations from the
    many contemporary reports on brain research that are simultaneously passed
    off as reports on mentality.  But I do wish to cite Barfield's wonderfully
    concise statement of the root of the confusion:
       Perceiving, and every other mode of consciousness, is categorically
       other than being perceptible, and therefore [is not] accessible
       to a merely physical investigation.
    That is, we cannot understand perceiving -- the inner reality of
    perceiving -- in terms of the kinds of outer things given through the act
    of perceiving, such as brain tissues.  We cannot understand the act as the
    result of its own results.  We cannot understand as just another object
    the activity that constitutes things as objects.
    I will leave you with those puzzling remarks, hoping they might serve as a
    koan of sorts, worthy of some perplexed reflection.  I am fully aware that
    these statements will mostly provoke disbelieving resistance, if not
    outraged rebellion.  They can carry little force for anyone who is still
    struggling to reconcile the impossible Cartesian notion of mindless matter
    with the impossible modern notion of mindless mind -- a struggle that
    yields, as we have seen above, just plain mindlessness.
    The real need, I'm convinced, is for us to overcome the entire Cartesian,
    mind-matter dualism, between the pincers of which our culture has been
    trapped for the past few centuries.  "Overcome", I say, not "accept the
    original terms of the split and then claim to have overcome it by
    effectively denying just one of the two domains produced by the false
    cleavage" (which is the standard tack taken by those legions today who
    fervently disavow the "Cartesian dichotomy").  The idea of purely
    objective matter, uninformed by, and genetically unrelated to, the mind
    that perceives it is just as impossible as the idea of mind unrelated to
    the matter it informs.
    But we will make progress in all this only insofar as we begin to gain
    vivid experience, within consciousness, of our own activity in
    perceiving and thinking.  The effect of this will be rather like turning
    much of modern thinking inside out.  The exercise, however, is a long and
    difficult one.  I do hope to write about it before long.
    Meanwhile, we are left with a view that leaves no room for the human being
    as anything other than a machine among machines.  More alienation, pain,
    and suffering have flowed from this conviction than anyone could ever
    tally.  If you want to meliorate this pain, you may watch the brain
    light up until the cows come home, and you can attempt to comfort, bathe
    in drugs, or otherwise manipulate the cerebral tissues to your heart's
    content, but you will never by these means touch the actual problem.  It
    is a problem of consciousness, not a problem of the brain.  Despite the
     confused rhetoric coming at us from all sides today, they are not the same
     problem.
    Technology Does Not Make Societies More Vulnerable
    Response to:  "Technology, Trust and Terror" (NF #137)
    From:  Michael Goldhaber (mgoldh@well.com)
    I was frustrated by your piece "Complexity, Trust and Terror".  Normally I
    like what you have to say.  But this time, your main point that
    technological complexity leaves us particularly vulnerable -- say, to
    terrorism -- strikes me as cliched and mistaken.  While it is no doubt the
    case that the complexities of our society, technological and otherwise,
    present a great many problems ranging from global warming to lack of
    active political involvement, the truth is we are far less vulnerable,
    even to these worries, than less sophisticated societies.  Complex
    systems, among other things, tend to have great redundancy built in or
    simply lying around ready to be utilized if need be.
    History doesn't teach what you say.  The example of the Goths' attack on
    Rome's aqueducts in 537 misses the context.  After lasting for many
    centuries, and then undergoing centuries of decline, probably brought on
    by its inability to find a political form that could handle its size and
    diversity, the Western Roman Empire finally fell in 476.  Sixty years
    later, it had been briefly reconquered by the remaining Eastern Roman (or
    Byzantine) Empire, which then failed to hold it.  But by then Rome was far
    from being the technically sophisticated and advanced capital it had been
    centuries earlier.  The aqueducts had survived, but not the engineers who
    had built them.
    The post-World War II Strategic Bombing Survey of Germany revealed that,
    contrary to its intent, the unprecedented level of allied bombing had not
    significantly reduced Germany's output of war machinery and materiel.
    Compare how quickly the less technologically complex Taliban fell when
    subject last year to American bombing at a much smaller scale.  Or
    consider how Sierra Leone's society collapsed from an onslaught of ill-
    armed rebels a few years ago.
    As a more homely example, I offer my own experiences after the 1989
    magnitude 7.1 Loma Prieta earthquake, which killed about a hundred people
    in and around the heavily populated San Francisco Bay Area.  I was living
    in San Francisco at the time, and with the Bay Bridge out was concerned
     that food might not get into the city.  My worries evaporated, however,
    when I realized that the artisanal Acme Bread Bakery of Berkeley, which
    had only recently started supplying one store in my neighborhood, was
    sending its trucks around the Bay via San Jose to keep up deliveries.  We
    were not only not going to starve, we would still have our luxuries.
    Contrast that to less technologically complex parts of Turkey, Iran,
    China, or Central or South America hit by similar magnitude quakes.  The
    death toll is often in the thousands or tens of thousands; food and water
    supplies disappear; disease runs rampant.
    Your explanation, "modern, complex technologies succeed by wresting
    enormous stores of power from the natural realm, seeking to direct these
    powers in ways that are controllable and useful", is simply inapt.  The
    Internet, complex as it is, ought to be subject to that definition, but
    just isn't.  Even as a description of a skyscraper, the thought seems
    tortured, at best.  The World Trade Center might well have been ugly and
    dehumanizing, but the reason it collapsed had to do less with its
    "wresting stores of power from the natural realm" than an inadequate fire-
    proofing system, inadequate concern for safety, etc.
    To some extent these problems and some others that you mention can be
     ascribed to trust.  On the other hand, as you hint, our normal trust that
     mild levels of security are all that are needed for our safety from
     terrorists has mostly proved accurate.  But the system just is not as
     vulnerable to the kind of attacks we witnessed as you claim.  Saying
    otherwise feeds into the anti-terrorist hysteria.  (I'm not saying no one
    will die, but even thousands of deaths, though horrible, are not the same
    as system breakdown.)
    If you were to argue that the complexity of modern life dumbfounds the
    electorate I think you would have something.  Part of what you call trust
    is simply inattention resulting from the simple impossibility of taking
    seriously all the issues, etc., that seem to call for attention.
    (Some of the pieces, including those about terrorism, on my website
    http://www.well.com/user/mgoldh/ might be relevant.)
    Response to Goldhaber
    From:  Langdon Winner (winner@rpi.edu)
    Michael Goldhaber seems perfectly comfortable with the level of security
    that our complex technologies offer.  Evidently, he sees no need for any
    fundamental change in the way we design, build and operate the complex
     systems that support our way of life.  Even after a major earthquake, he
    notes, we can still rely on these helpful mechanisms of production and
    distribution to click into operation, allowing us to "have our luxuries."
    As he makes his case, Goldhaber engages in selective misreading of my
    piece.  My comment about Rome, for example, explicitly avoids speculation
    on the ultimate causes of the fall of the empire and its timing.  It
    highlighted the fact that the idea of attacking links of a crucial
    material infrastructure was not a new idea; Vitiges was aware of the value
    of this strategy and did, in fact, employ it successfully against the
    city.  Perhaps I went a bit over the top by including Gray Brechin's
    colorful comment on the fate of the Roman caput mundi.  Brechin's book is
    all about the strategies of imperial control and how they eventually come
    to grief.  At a time in which techno-imperial schemes of military might
    and economic globalization are precisely the ones favored by American
    leaders, an echo of vulnerability from ancient times seemed worth noting.
    Goldhaber objects to my emphasis that "modern technologies succeed by
    wresting enormous powers from nature."  He says that neither the Internet
    nor the skyscraper fit that definition.
    What is he thinking?
Has he forgotten that the computing and communication equipment that
comprises the Internet is produced from natural materials transformed
    through energy- and resource-intensive processes of chemical, electrical
    and industrial fabrication?  What about the vast amounts of coal, gas, and
    oil burned, as well as atoms split, each day to transmit those trillions
of bits of information?  During my stay in southern California amid the
blackouts of February 2001, my Internet connection certainly went down.
    As regards skyscrapers, is not the widely admired feat in their
    construction a mastery of nature expressed in the materials and
    ingeniously balanced structures that defy the force of gravity?  In the
    wake of 9/11 architects and engineers are thoroughly reexamining the
assumptions and methods that were previously state of the art.
My point was that the arrangements of energy and resources that make large-
scale systems apparently stable and useful also make them precarious, a
fact overlooked in the glory days of American modernism.  As our
    dependency upon these technologies grows, protecting their structure and
    operation could well emerge as society's most urgent priority, even when
    that involves the sacrifice of key social and political ends.
    The alarm sounds repeatedly.  A recent report by the Council on Foreign
    Relations, "America Still Unprepared -- America Still in Danger," warns,
    "The homeland infrastructure for refining and distributing energy to
    support our daily lives remains largely unprotected to sabotage."
    Stressing many of the areas I mentioned in my piece -- containerized
cargo, transportation, food supply -- the report recommends an intense
buildup of police, surveillance, and the National Guard, as well as the
    nation's public health and agricultural agencies.  Spend more money!
    Mobilize all of society!
    Perhaps Michael would agree that we should reject this obsession and begin
    seeking other paths to security.  My own view is that local, resilient,
    relatively small-scale, community-based technologies that rely upon
    renewable energy and resources provide the best solutions in the years
    ahead.  In the short run we need to ask whether the sense of emergency
    requires dismantling central features of the Constitution and erecting
    fierce new institutions that endanger our freedoms far more than they
    threaten Al Qaeda.
    I appreciate Michael's response, but do not share his feeling that
    technological systems as we find them are probably good enough.  In these
    dark and darkening times we need a much broader debate about what safety
    and security require.
                              ABOUT THIS NEWSLETTER
    Copyright 2002 by The Nature Institute.  You may redistribute this
    newsletter for noncommercial purposes.  You may also redistribute
    individual articles in their entirety, provided the NetFuture url and this
    paragraph are attached.
    NetFuture is supported by freely given reader contributions, and could not
    survive without them.  For details and special offers, see
    http://netfuture.org/support.html .
Current and past issues of NetFuture are available on the Web at
http://www.netfuture.org/ .
    To subscribe or unsubscribe to NetFuture:
    Steve Talbott :: NetFuture #138 :: November 7, 2002