Technology and Human Responsibility

Issue #149                                                 August 28, 2003
                 A Publication of The Nature Institute
           Editor:  Stephen L. Talbott

                  On the Web:
     You may redistribute this newsletter for noncommercial purposes.

Can we take responsibility for technology, or must we sleepwalk
in submission to its inevitabilities?  NetFuture is a voice for
responsibility.  It depends on the generosity of those who support
its goals.  To make a contribution, click here.


Quotes and Provocations
   Who Needs Media Regulation?
   Commerce as Storytelling

From HAL to Kismet: Your Evolution Dollars at Work
   Further commentary on Rodney Brooks' Flesh and Machines

The Surprising New Language of Mechanism
   Are mainstream scientists getting religion?


About this newsletter


                         QUOTES AND PROVOCATIONS

Who Needs Media Regulation?

The current proposals for media deregulation are provoking widespread
criticism to the effect that we already have too much concentration in the
media industry, and deregulation will only make it worse.  As I heard
someone say on public radio a half hour ago, "We need more voices out
there, not fewer".

Perhaps so.  And perhaps deregulation will lead to greater concentration.
But even if this is true, the objection is hopelessly one-sided.

We do not lack voices in the world.  There are billions of them.
What many of them lack is someone listening.  A voice failing to
attract listeners cannot be much of a public voice.  If in fact there
are few prominent voices in a largely open society such as ours, it is
substantially because most of us are willing to listen to few voices, or
to few different kinds of voices.  We prefer People magazine to Time, and
Time to the Economist -- and all of these to, say, NetFuture.  (Just
kidding.  In any case, you are the one audience for which that statement
is certainly false.)  No re-jiggering of regulations will suddenly alter
these preferences.

I won't deny that there's an unhealthy concentration of interests in the
media business.  But this fact is inseparable from another one:  there is
an unhealthy concentration of shallow interests in the general public.  I
don't see how we can begin to make progress against the social problems we
face without first accepting the kind of double-sidedness evident in this
particular issue.  It is intrinsic to the organic character of modern
society that virtually every problem needs to be recognized not only at
some focus we can point to "out there", but also at a focus "in here", in
ourselves.  To deal only with the "in here" is pietism; to deal only with
the "out there" is projection.  What we really need is lively attention to
the dynamic interplay between "in here" and "out there".

(This happens to be the same challenge we face in our efforts to
understand the physical world, as discussed in this issue's concluding
feature article.  Our social problems and our misunderstandings in science
have the same root in our habits of thought.)

Related article:

"Are Corporations to Blame?" in NF #143:

Commerce as Storytelling

There's a nice little essay by Rebecca Solnit in the July/August issue of
Orion.  Titled "The Silence of the Lambswool Cardigans", it begins with

   There was a time not so long ago when everything was recognizable not
   just as a cup or a coat, but as a cup made by so-and-so out of clay
   from this bank on the local river or woven by the guy in that house out
   of wool from the sheep visible on the hills.  Then, objects were not
   purely material, mere commodities, but signs of processes, human and
   natural, pieces of a story, and the story as well as the stuff
   sustained life.  It's as though every object spoke -- some of them must
   have sung out -- in a language everyone could hear, a language that
   surrounded every object in an aura of its history.

But then,

   Somewhere in the Industrial Age, objects shut up because their creation
   had become so remote and intricate a process that it was no longer
   readily knowable.  Or they were silenced, because the pleasures of
   abundance that all the cheap goods offered were only available if those
   goods were mute about the scarcity and loss that lay behind their
   creation.  Modern advertising -- notably for Nike -- constitutes an
   aggressive attempt to displace the meaning of the commodity from its
   makers, as though you enter into relationship with very tall athletes
   rather than, say, very thin Vietnamese teenagers when you buy their
   shoes.  It is a stretch to think about Mexican prison labor while
   contemplating Victoria's Secret lavender lace boycut panties.

Many will dismiss all this as a pining for the romantic "olden days",
which were probably never quite as we imagine them.  But rejecting
romanticized images of the past (if that is indeed what is needed) does
not require us to found our own society upon systematic falsehood.  After
all, commercial objects are not really mute.  Everything we create really
does speak in one way or another.  The only question is whether we prefer
to hear its tale or else drown out the truth with a cheap fantasy.

As for how we might nudge ourselves toward the truth, Solnit cites
activist Carrie Dann to the effect that "everyone who buys gold jewelry
should have the associated spent ore delivered to their house.  At
Nevada's mining rates, that would mean a hundred tons of toxic tailings
for every one-ounce ring or chain you buy".

A good point.  I do wonder, though, at the rather-too-absolute moral
judgment implicit in such a statement.  Each of us is unavoidably
implicated in the practices of the larger society, and we would have to
kill ourselves in order to avoid all taint.  I don't think I could say to
the artisan who feels compelled to work with gold, "You categorically
should not do so".  Even the activist must sometimes employ products of
the unhealthy system she is working to change.

In any case, Solnit sees farmers' markets as true meeting places where
objects tell worthy stories.  And even at the mall she is cheered by "the
ways people are learning to read the silent histories of objects and
choosing the objects that still sing".

There is in all this, I think, a suggestion for understanding the
historical necessities pressing upon us.  In all respects (and not just
commercially) things have been losing their plain, given speech.  But this
in itself is not bad.  It leads to deeper creative responsibility on our
part.  Today we find ourselves called not only to discover, but also to
help determine, what nature itself says -- not arbitrarily, but by
learning to sing our own melody above the world's deeper harmonies.

Related articles:

"Cheap Food at Any Cost" in NF #143:

"Branding the Branders: Turnabout Is Fair Play" in NF #120:

"The World Trade Organization: Economics as Technology" in NF #106:


Go to table of contents



           FROM HAL TO KISMET: YOUR EVOLUTION DOLLARS AT WORK

                              Stephen L. Talbott

In Flesh and Machines Rodney Brooks writes that January 12, 1992,
marked "the most important imaginary event in my life".  On that day in
the movie "2001: A Space Odyssey", the HAL 9000 computer was given life.
Of course,

   HAL turns out to be a murdering psychopath, but for me there was little
   to regret in that.  Much more importantly HAL was an artificial
   intelligence that could interact with people as one of them .... HAL
   was a being.  HAL was alive.

Brooks, who directs the prestigious Artificial Intelligence Laboratory at
MIT, goes on to speak of his protégée, Cynthia Breazeal:

   On May 9, 2000, Cynthia delivered on the promise of HAL.  She defended
   her MIT Ph.D. thesis about a robot named Kismet, which uses vision and
   speech as its main input, carries on conversations with people, and is
   modeled on a developing infant.  Though not quite the centered,
   reliable personality that was portrayed by HAL, Kismet is the world's
   first robot that is truly sociable, that can interact with people on an
   equal basis, and which people accept as a humanoid creature ....
   People, at least for a while, treat Kismet as another being.  Kismet is
   alive.  Or may as well be.  People treat it that way.

The Human Response

All this occurs in a chapter entitled "It's 2001 Already" -- which may
rouse your curiosity about Kismet's abilities.  The robot's "body" is
nothing but a "head" mounted on a mobile platform.  Its dominant feature
consists of two realistic, naked eyeballs, which are accompanied by rough
indications of ears, eyebrows, and mouth.  These are all moved by small
motors.

Kismet (who has been featured in virtually all the major journalistic
venues) is widely advertised as a sociable robot.  Brooks tells us that it
gets "lonely" or "bored" due to a set of "internal drives that over time
get larger and larger unless they are satiated".  These drives are
essentially counters that tabulate, in relation to time, the number of
interactions the robot has with moving things, or things with saturated
colors ("toys"), or things with skin colors ("people").

Kismet also has a "mood", which can be affected by the pitch variations in
the voices of people who address it.  Brooks speaks of the automaton as
being "aroused", "surprised", and "happy" or "unhappy" -- the emotional
state in each case being another name for a numerical parameter calculated
from the various environmental signals the robot's detectors are tuned
for.  Despite Brooks' easy references to conversation, Kismet is not
designed to reckon with the cognitive structure of speech, and its own
speech consists of nonsense syllables, pitch-varied to suggest emotion.
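
The description above amounts to a simple bookkeeping scheme.  Here is a
deliberately crude sketch of "drives" as counters that grow with time
unless satiated -- my own hypothetical illustration (the names Drive,
RobotMood, and is_lonely are invented), not Kismet's actual code:

```python
# A hypothetical sketch of "drives" as counters -- not Kismet's actual code.

class Drive:
    """A drive is a number that grows over time and shrinks when satiated."""

    def __init__(self, name, growth_rate=1.0):
        self.name = name
        self.level = 0.0
        self.growth_rate = growth_rate

    def tick(self):
        # The drive gets "larger and larger" as time passes...
        self.level += self.growth_rate

    def satiate(self, amount):
        # ...unless satiated by an interaction of the right kind.
        self.level = max(0.0, self.level - amount)


class RobotMood:
    def __init__(self):
        # Satiated by interactions with "people" (skin colors)
        self.social = Drive("social")
        # Satiated by interactions with "toys" (saturated colors)
        self.play = Drive("play")

    def is_lonely(self, threshold=10.0):
        # "Lonely" is just another name for a counter past a threshold.
        return self.social.level > threshold


robot = RobotMood()
for _ in range(12):        # twelve time steps with no human interaction
    robot.social.tick()
print(robot.is_lonely())   # True: the counter has crossed the threshold
```

Whatever one makes of the vocabulary, nothing in such a scheme is more
than arithmetic performed on a few numbers.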

So the typical scenario has Kismet patrolling a hallway, detecting motion
(probably a person), and approaching the moving object.  Its detectors,
software, and motors are designed to enable it to make appropriate eye
contact and to engage in emotionally suggestive, if otherwise vacuous,
conversation.  First encounters with Kismet tend to be marked by surprise
(genuine, at least on the human side), which leads to all sorts of
interesting and peculiar human-robot interaction.

This, in turn, seems to provide the developers with great satisfaction;
if people respond to Kismet in some way as to a sentient creature, then
Kismet must somehow be a sentient creature -- "not quite" HAL, as Brooks
modestly allows, but apparently close enough for government (or MIT) work.

We are led back, then, to Brooks' observation that "Kismet is alive.  Or
may as well be.  People treat it that way".  This, as nearly as I can
tell, is just about the entire substance of his argument that robots are
living creatures.  He periodically acknowledges that his own robots
currently lack certain creaturely capacities, but, hell, people sure seem
to regard them as alive, so what's the difference?

How Do You Simulate Life?

At one point Brooks seems about to launch an inquiry into the reality of
the matter.  "It is all very well for a robot to simulate having
emotions", he writes,

   And it is fairly easy to accept that the people building the robots
   have included models of emotions.  And it seems that some of
   today's robots and toys appear to have emotions.  However, I
   think most people would say that our robots do not really have
   emotions.

Brooks' response to this line of thought is to draw on a cliché of
artificial-intelligence literature:  he compares airplanes with birds.
Although planes do not fly in the manner of birds -- they neither flap
their wings nor burn sugar in muscle tissue -- we do not denigrate their
performance as a mere simulation of flying.  They really do fly.  So
Brooks wonders, "Is our question about our robots having real emotions
rather than just simulating having emotions the same sort of question as
to whether both animals and airplanes fly?"

He seems reluctant to state his answer directly, but his argument
throughout Flesh and Machines makes it clear that he equates "life-
like" with "alive", even if that means, rather mysteriously, "alive in a
different way".  In speaking of Genghis, a primitive, insect-like robot,
he tells us that the software and power supply transform a "lifeless
collection of metal, wire, and electronics" into an "artificial creature":

   It had a wasplike personality:  mindless determination.  But it had a
   personality.  It chased and scrambled according to its will, not to the
   whim of a human controller.  It acted like a creature, and to me and
   others who saw it, it felt like a creature.  It was an artificial
   creature.

"If it feels like one, it must be one" seems to be how the argument goes.
Not much interest in distinctions here.  Nor much timidity.  "Kismet is
not HAL", Brooks concedes, "but HAL [who could 'never be your friend'] was
not Kismet either.  Kismet gets at the essence of humanity and provides
that in a way that ordinary people can interact with it".

The essence of humanity?  Brooks lives in a world of excruciating and
embarrassing naïveté -- a world where a child's doll programmed to say
it is hungry somehow has genuine "wants" and "desires", and where a
robotic insect programmed to follow sources of infrared can be said to be
hunting "prey".  And if any unwelcome doubts should arise, they can be
dispelled by all those humans who react to the robots as if they harbored
intelligence and feelings.

Missing Authors

Brooks could have risen above this naïveté had he been willing to reckon
with the obvious distinction between artifact and artificer.  Yes, his
robots harbor intelligence, and yes, people respond to this intelligence
-- just as they respond to the intelligence in a printed text or in the
voice output of a radio loudspeaker.  In each of these cases we would
be crazy to ignore the meaning we are confronted with.  After all, just
as a vast amount of cultural and individual expression lies behind the
development of the alphabet and the printing of the text on the page,
so also a great deal of analysis and calculation lies behind the
formulation of the computational rules governing Kismet's actions.
To ignore Kismet would be to ignore all this coherently formulated
human intention.  We could not dismiss what humans have invested in
Kismet without dehumanizing ourselves.

The problem we face with robots is that the text and voice have now been
placed in intimate relation with moving machinery that roughly mimics the
human body.  And whereas the authors behind the words of book and radio
can easily be imagined as historically existent persons despite being less
concrete and more remote than face-to-face conversants, this is not the
case with the robot.  Here the authors have contrived a manner of
generating their speech involving numerous layers of mediating logic
behind which it is difficult to identify any particular speaker.

What, then, can we respond to, if not the active, gesticulating thing in
front of us -- even if the response is only one of annoyance?  The
speakers have vanished completely from sight, and yet here we are back in
an apparently face-to-face relationship! -- a relationship with something
that clearly is a bearer of intelligence.  Far easier to assign the
intelligence solely to the machine than to seek out the tortured pathway
from the true speakers to the speech we are witnessing.

This, incidentally, captures on a small scale the problem we face in
relating to the dictates of society as a whole.  Who is the speaker behind
this or that bureaucratic imperative?  It is often almost impossible to
say, so we are content to grumble about a personalized "System" that
begins to take on a machine-like face.  And the System is personal,
inasmuch as intentional human activity lies behind all its manifestations,
even if this activity has been reduced according to our own mechanizing
tendencies.  In other words, society itself is unsurprisingly assuming the
character of our technology.

None of this, however, excuses our failure to make obvious distinctions in
principle.  Yes, every human creation is invested with intelligence in one
form or another, and it would be pathological for us to ignore this fact
in our reactions.  But it is also pathological to fail to recognize the
asymmetrical relation between artifact and artificer.  (This was the
primary point of "Intelligence and Its Artifacts" in NF #148, which was
actually begun as a response to Brooks' book.)

For all our difficulty in identifying the authors behind a computer's
output, we can hardly say that no authoring has gone on, or that the
distinction between the authors and the product of their authoring has
somehow been nullified.  Difficulty in tracing authorship does not by a
single degree elevate a printed page to the status of author in its own
right.  If Brooks wants to argue that Kismet, once spoken by its creators,
was somehow transformed from speech into speaker, he needs to make the
argument.  Instead he simply ignores the distinction in all its
implications.

Let me put it this way:  if Brooks acknowledges a difference in kind
between the intelligence of an author and that of a printed page, or
between the intelligence of an engineer and that of a doorbell circuit,
then he owes us an elucidation of how this distinction plays out in his
robots.  If there is something intrinsic to the idea of complexity or the
idea of moving parts that negates or overcomes the distinction --
something that transforms text into author, designed mechanism into
designer -- then we need to know what this something is.  What is the
principle of the transformation?

Learning from Kismet

In an interview with a New York Times reporter (June 10, 2003), Kismet's
creator, Cynthia Breazeal, remarks that "human babies learn because adults
treat them as social creatures who can learn".  Her hope for Kismet was
that "if I built an expressive robot that responded to people, they might
treat it in a similar way to babies and the robot would learn from that".

The Times reporter then asked the obvious question:  "Did your robot
Kismet ever learn much from people?"  This was Breazeal's answer:

   From an engineering standpoint, Kismet got more sophisticated.  As we
   continued to add more abilities to the robot, it could interact with
   people in richer ways.  And so, we learned a lot about how you could
   design a robot that communicated and responded to nonlinguistic cues;
   we learned how critical it got for more than language in an interaction
   -- body language, gaze, physical responses, facial expressions.

   But I think we learned mostly about people from Kismet.  Until it, and
   another robot built here at MIT, Cog, most robotics had little to do
   with people.  Kismet's big triumph was that he was able to communicate
   a kind of emotion and sociability that humans did indeed respond to, in
   kind.  The robot and the humans were in a kind of partnership for
   learning.

I'm glad Kismet taught Breazeal and her engineering colleagues that bodily
expression plays an important role in human communication.  But as for the
issue at hand:  her answer tells us nothing about any actual "partnership
for learning".  With an all too characteristic slippage between points of
view, she answers a question about Kismet's learning by citing only the
engineers' learning.  This would be all to the good if she could keep the
two perspectives distinct and get clear about them.  But the whole
enterprise depends upon confusion.  And so Breazeal concludes the
interview by mentioning that she is now working on a new robot, Leonardo.
But Kismet, who has been retired to the MIT museum, "isn't gone; it's just
now taking the next step in its own evolution through Leonardo".

But what does this mean, "its own evolution"?  Presumably Kismet is
sitting on a shelf in the museum, or else moving about and pestering
visitors.  The one thing it's not busy doing is evolving.  That is the
engineers' task.  Apparently, the grotesque illogic of saying that Kismet
is evolving is a small matter for someone who has already managed to
convince herself that a handful of numerical parameters are signifiers of
emotion.

It seems to me profoundly significant that so many people today can
routinely characterize the engine rather than the engineer, the design
rather than the designer, the speech rather than the speaker, as the
subject of evolution.  Here is a refusal to face ourselves as creative
spirits or as anything more than machines, followed by a projection of our
missing selves onto our machines.  Such a refusal and projection can only
lead, not to the evolution of machines, but to the end of our own
evolution.

Related articles:

"Flesh and Machines: The Mere Assertions of Rodney Brooks" in NF #146:

"Intelligence and Its Artifacts" in NF #148:

See also the articles listed under "Artificial intelligence" in the
NetFuture topical index:

Go to table of contents



                THE SURPRISING NEW LANGUAGE OF MECHANISM

                              Stephen L. Talbott

One of the significant symptoms of our era is a certain consistent
slippage of language -- a slippage that testifies to great confusion while
also hinting at the possibility for a vital shift in our thinking.  Here I
will set down only a brief, suggestive sketch of the matter.

Self-Replication.  To begin with, consider the term, "self-replication".
DNA, we are repeatedly told, is self-replicating.  But it is not.  Anyone
can confirm this with about five seconds of reflection.  As Harvard
geneticist Richard Lewontin remarks, DNA "is manufactured out of small
molecular bits and pieces by an elaborate cell machinery, made up of
proteins:

   If DNA is put in the presence of all the pieces that will be assembled
   into new DNA, but without the protein machinery, nothing happens.  What
   actually happens is that the already present DNA is copied by the
   cellular machinery so that the new DNA strands are replicas of the old
   ones.  The process is analogous to the production of copies of a
   document by an office copying machine, a process that would never be
   described as "self-replication".

That is, we would never place a page of text on the copying machine and
claim it is replicating itself.  Yet that is exactly what we hear all the
time of DNA.  In reality, even the copy-machine metaphor has its limits.
Lewontin reminds us that any copying machine that was as prone to
"mistakes" as the DNA copying process would soon be discarded.

(You will also find "self-replication" bandied about crazily in
discussions of nanotechnology.  But that deserves a separate discussion.)

Information.  Look next at "information".  This now turns up everywhere,
including the names of entirely new disciplines such as bio-informatics.
Yet there is almost nothing but obscurity in this usage.  The word
"information" receives its aura of authority from the technical theory of
communication, but the information of this theory has nothing to do with
meaning or anything we normally think of as informative.  In fact, random
noise possesses the highest possible amount of information, according to
the theory.  This technical sense of the word is not what people usually
have in mind when they import "information" into their scientific
discourse.

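The claim about random noise is a standard consequence of Shannon's
definition, which measures the information of a source as its entropy,
H = -sum(p * log2(p)).  A quick numerical illustration (my own example,
not drawn from the article):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniformly random ("noisy") source of four symbols maximizes entropy;
# a predictable source -- the kind that carries meaning for us -- scores lower.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.85, 0.05, 0.05, 0.05]

print(entropy(uniform))  # 2.0 bits, the maximum for four symbols
print(entropy(skewed))   # about 0.85 bits
```

In this technical sense, the "information" in a message says nothing at
all about whether the message means anything.
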
In the dominant usage, "information" derives its apparent explanatory
force from the fact that it suggests meaning -- the kind of meaning we
normally think of as contained in text and messages.  And so we hear about
the genetic "text" and "messenger" RNA.  Yet the absurdities of this usage
have become notorious.  In science historian Lily Kay's summary, the human
genome turns out to be "an authorless book of life written in a speechless
DNA language".  It is impossible to specify the meaning of its supposed
messages in conventionally acceptable scientific terms.  This is hardly
surprising since the very idea that the world possesses meaning has long
been anathema within science.

To use the word "information" in connection with genetics is to employ
explanatory weaponry that the prevailing science cannot begin to justify
-- rather as parents will sometimes answer a child's query about this or
that by saying "God made it".  How does this or that protein arise?
"Because a gene contains the information for it" -- although in fact all
the workings and all the wisdom of the organism as a whole (which this
explanation neatly co-opts) are required for the synthesis of the protein.

The appeal to information is a cheap substitute for understanding; it
allows the scientist to project whatever will eventually become the true
understanding (based on a qualitative grasp of the whole organism) onto a
set of imagined mechanisms that are grossly inadequate to bear the burden
assigned to them.  This "saves" the mechanistic way of thinking.

Self-Organization.  Then there is "self-organization", a favorite term of
complexity theorists.  The world exhibits patterns of order in continual
transformation, and "self-organization" is commonly applied to particular
processes of transformation in order to explain whatever newly arises.
It adds to the traditional notion of order the further idea that there
is some sort of self organizing itself.  This self is assigned the
character of a machine.  Of course, for all the machines we actually
know, a designer, separate from the machine, is required to assemble,
organize, and coordinate its parts.  The term "self-organization" allows
us to project this designer, ever so vaguely, into the machine itself.

I say "vaguely" because the mechanical self said to be responsible for the
organizing is never properly identified or described.  In his book
Complexity, Mitchell Waldrop describes "self-organization" this way:

   In every case groups of agents seeking mutual accommodation and self-
   consistency somehow manage to transcend themselves, acquiring
   collective properties such as life, thought, and purpose that they
   might never have possessed individually.

Unfortunately, the main burden of explanation here is vested in that
casually spoken "somehow" -- things somehow manage to transcend
themselves.

In the same book Waldrop characterizes self-organization as "matter's
incessant attempts to organize itself into ever more complex structures,
even in the face of the incessant forces of dissolution described by the
second law of thermodynamics".  But we are never told how matter, which is
supposedly mindless, manages to "attempt" anything at all on behalf of its
own "self".

Evolution.  And, again, we have "evolution".  We routinely hear of the
transition from one generation of technology to the next as an evolution
of the technology.  But in this case there is no subject for the verb
"to evolve", no entity with the continuity that would enable it to do the
evolving.  One machine does not condense itself into a seed for the next,
but rather is simply replaced by another one.  What evolves is the design
and productive capacity of the engineers.  You might think that engineers
would be capable of recognizing when their use of a key verb lacks a
coherent subject, but this rarely seems to be the case.  Technology as
such -- technology as a kind of reified and personified abstraction --
appears to be what people have in mind when they speak about the evolution
of machines.  But abstractions do not evolve, even if our thinking about
them does.

Of Mechanisms and Designers

It's odd how, in a culture that manages the most amazing feats of
precision when organizing the intricate logic of a million transistors
on a sliver of silicon, we routinely tolerate vagueness and gross
contradiction in the decisive terms of our understanding.  But there is
another way to look at the matter.  I think we can recognize a certain
consistency in the various slippages of meaning cited above.  In each
case there is a projection of life and mentality onto a supposedly
mechanistic process.  "Self-replication", of course, suggests a living,
reproducing organism.  "Information", in its common usage, pertains to
what is communicated between minds.  "Self-organization", on its face,
invokes the idea of a self.  "Evolution" derives its main force from
theories about the growth and development of organisms.

There are other terms I could have cited as well -- for example,
"emergence", "system", "pattern", and "meme".  In all such cases, whether
obviously or subtly, there is an increasing tendency to ascribe life and
thought to the stuff of the world.  But this ascription occurs in the
context of mechanistic assumptions amounting to a denial of life and
thought.  That is why the contradictions we have observed are inevitable.

The mechanistic view is born of our experience with machines and, as I
mentioned, every machine we have ever known requires an external designer.
Unwilling (with good reason) to accept such a designer for nature, but
still committed to the machine model, the mechanistic thinker ends up
trying in one way or another, and against his own better judgment, to
smuggle a designer into the machinery without having to acknowledge it.
This is what we have seen with the various terms discussed above.  It all
makes for a horrible muddle.

Incidentally, those committed, for religious reasons, to theories of
"intelligent design" find themselves in the same muddle.  This reflects
the degree to which the mainstream religions have bought into the
mechanistic assumption that the world is a machine.  And this in turn is
intimately related to the fact that these religions have heavily tilted
toward divine transcendence at the expense of immanence.  So instead of
being projected into the machinery, the creationists' Designer stands
wholly outside the world, perhaps occasionally tinkering with it as with
a machine -- which is to say, perhaps occasionally throwing a wrench into
the works -- but otherwise not manifesting in or through the world.  There
can be nothing living and organic about such a vacated, tinkerable
world.

The problem is with the machine model as such, and it hardly matters
whether we try to conceal the designer in the mechanism itself or instead
remove him so far beyond the creation as to be irrelevant.  Since the
world is much more a living organism than a designed machine, every
attempt to introduce a machine-designer -- whether hidden in the workings
of the machine or hidden in the infinite beyond -- is disruptive to our
understanding of the world's meaningful unity.

Taking the New Language Seriously

So what are we to make of the strange willingness of mechanistic thinkers
today (in essential harmony with their creationist opponents) to invoke
the terminology of life and mind in their attempt to understand a
mechanically conceived world -- terminology that several decades ago would
have been decried as vitalist or religious or mystical?  Personally, I
believe it is a hugely important development, probably marking a great
divide in the history of science and civilization.  Never again will it be
possible to speak about the world except in the language of intelligence,
meaning, life, thought.  The historical aberration whereby science sought
to apprehend the world in non-living terms is coming to an end.

There is wonderful positive potential in all this.  At least there is
if we can take the ideas of information and self-organization with the
seriousness they deserve rather than incongruously marrying them to
mechanistic models.  Then we may be led to a genuine conviction that the
world's phenomena are informed, that intelligent and productive powers
of organization really do lie at the heart of these phenomena, and that
if we look deeply enough into them, we will discover subjects capable
of evolving.  In other words, we may be led in our own highly developed,
critical, and scientific fashion to rediscover what the ancients already
knew:  we live in an ensouled world.

This will require us to take the abused language of the engineer and
scientist with all seriousness, redeeming it by restoring it to its
full dignity.  Which means, not just taking advantage of the assumption
that the world is alive and full of meaning where this is a convenient
way to cover our ignorance, but also reckoning with the assumption --
understanding how it could really be so, and what pathological thought
habits of the past several hundred years have until now put the truth
out of sight.

One place to begin looking for the pathology would be in the seventeenth-
century decision within science to ignore qualities.  This was to ignore
precisely that expressiveness through which life and consciousness are
revealed in the world.

Secondly, we should look at the established practice of doing science
while systematically ignoring the one who is doing the doing -- the
observer who is observing, the experimenter who is experimenting, the
theorizer who is theorizing.  This was arbitrarily to remove all genuine
intention, acting, and becoming from our world picture.

Both these stances -- which amount, first, to discarding the entire world,
and then to discarding ourselves -- startlingly violated the spirit of
science.  This spirit demands that we close our eyes to nothing, least
of all to the foundational prerequisites for the things we do bother to
observe and theorize about.  It is no great wonder that the thinking,
perceiving, and acting selves we ignored should have fallen out of our
picture of the world, leaving unsightly holes and forcing us into
contradiction when we use such terms as "self-organization" and
"information".

Staying Clear of Metaphysics

The problem is that the overwhelming inertia of institutional science
weighs in favor of preserving the commitment to mechanism, with all
its difficulties.  We can be quite sure what we will encounter along
this path.  The practitioners and theoreticians of mechanism will more
and more employ the language of life and mind in order to be true to
what they discover in the world.  But in order to save their beloved
mechanisms, they will blind themselves to the reality -- the immediately
experienceable reality -- of their own lives and minds.  They will then
say (and once the blindness sets in, no one will be able to refute them),
"We conceive our own thinking to be a thoughtless, mechanical activity
essentially devoid of conceiving".

But there is something to be said for the instincts of the mechanistic
scientists even here.  They do not want to acknowledge the reality of mind
and thinking because it raises the specter of a mysterious "other realm"
unrelated to the familiar world as they have conceived it.  This avoidance
of metaphysics in science is healthy.  But what their own new language is
telling them is that they have not conceived the world soundly, and that
mentality is not a mysterious and separate realm.  Their only reason for
thinking otherwise is that they have for so long accepted the Cartesian
split.  Invest your life in sketching a world in terms of the non-mental
half of the Cartesian dichotomy, and you will naturally find that what you
have sketched is alien to mentality.

But why were we forced to accept the split in the first place?  To move
forward, we first need to take a long step backward so as to refuse the
Great Cleavage before it can swallow up all coherent thought about the
world.  Then we may begin to discover that, far from being alien to the
world, mind and thought are the very matrix out of which it crystallizes
-- and that (as the new language of science is nudging us to realize) not
even the most adamantine elements of this world are wholly other than the
informing, productive, organizing matrix from which they arose (or
arise).

Some day before long we will come to wonder how we could ever have taken
the thinking through which we know the world -- through which it declares
itself to us -- and relegated it to some distant "meta" realm.  But before
that day comes we will first have to notice the world's thinking in the
one place where we have most direct access to it, which is in ourselves.
This is much more a meditational task than a scientific one in the usual
sense of "science".

This, of course, could just as well be the opening as the conclusion of a
long essay.  But at least it indicates the direction in which I believe we
can salvage the otherwise hopelessly misleading language of life and
meaning that is progressively infiltrating the mechanistic sciences.

Related articles:

"When the Mind Dogmatizes about Itself" in NF #148:

"Are Machines Living Things?" in NF #133.  A dialogue with Kevin Kelly:

Goto table of contents


                          ABOUT THIS NEWSLETTER

Copyright 2003 by The Nature Institute.  You may redistribute this
newsletter for noncommercial purposes.  You may also redistribute
individual articles in their entirety, provided the NetFuture url and this
paragraph are attached.

NetFuture is supported by freely given reader contributions, and could not
survive without them.  For details and special offers, see .

Current and past issues of NetFuture are available on the Web:

To subscribe or unsubscribe to NetFuture:

Steve Talbott :: NetFuture #149 :: August 28, 2003

Goto table of contents

This issue of NetFuture: