Technology and Human Responsibility

Issue #151                                                October 30, 2003
                 A Publication of The Nature Institute
           Editor:  Stephen L. Talbott

                  On the Web:
     You may redistribute this newsletter for noncommercial purposes.

Can we take responsibility for technology, or must we sleepwalk
in submission to its inevitabilities?  NetFuture is a voice for
responsibility.  It depends on the generosity of those who support
its goals.  To make a contribution, click here.


Editor's Note

Quotes and Provocations
   A New Assessment of Computers in the Classroom

The Vanishing World-Machine (Stephen L. Talbott)
   Habits of the Technological Mind #2


Announcements and Resources

About this newsletter


                              EDITOR'S NOTE

The feature article in this issue will not be for all NetFuture readers.
It is part of an attempt on my part to capture, in a slightly more
disciplined and systematic way (so far as the scope of a mere newsletter
allows) some of the basic issues over which Kevin Kelly and I wrangled in
our dialogue concerning mechanisms and organisms.  (See NF #133, 136, and
139.)  The upshot of the current essay is that mechanistic explanations
cannot even explain machines, let alone the natural world.  But, as I also
point out, the argument here connects with a whole range of fundamental
issues in the philosophy of science and technology.

On another note:  Some subscribers are no longer receiving NetFuture
because their spam blockers are either rejecting its "illicit" content, or
rejecting all mail from the St. John's University listserver that
distributes the newsletter.  (The listserver is strictly administered and
includes only bona fide publications.  But since it handles many lists,
some institution-level blockers decide that it is producing a suspiciously
large amount of email.)

Also, when I send out NetFuture, I receive back more and more automated
requests from spam blockers inviting me to go to some website or other and
verify that NetFuture is a legitimate publication.  I never respond to
these requests.  Apart from the impracticality of the situation (imagine
hundreds or thousands of subscribers employing such blockers), I regard the
requests themselves as spam.  "Please come on over to my website and click
on a button or two" sure sounds like spam to me.

It's a good illustration of a point I frequently make:  the attempt to
automate solutions to human problems all too easily contributes to a
worsening of the problems, which may have arisen in the first place from
the automation of formerly personal transactions.


Goto table of contents


                         QUOTES AND PROVOCATIONS

A New Assessment of Computers in the Classroom

A substantial book, just released, may help to alter the tenor of the
public discussion about computers in the classroom.  It's called
Flickering Minds: The False Promise of Technology in the Classroom and How
Learning Can Be Saved.  Written by National Magazine Award winner Todd
Oppenheimer, the book is sober, well-researched, thorough, and marked by
as much undeniable good sense as you are likely to find in any book about
education today.

I hope to have more to say about the book in the future.  Meanwhile, here
are some of Oppenheimer's conclusions:

** Computer technology will not go away.  The challenge for schools is to
reject fads and use the tools sensibly.  Generalizing:  "technology is
used too intensely in the younger grades and not intensely enough -- in
the proper areas -- in the upper grades".  The failure in the upper grades
is that students are given no deep understanding of the technology, but
instead are allowed to occupy themselves with the "hot programs of the
day".

** The computer fad has temporarily blinded us to a central truth that has
been evident for thousands of years:  the crucial process in education is
not a technological one but a human one.  In the words of the Forbes
editor, Stephen Kindel, "the best schools will eventually recognize a fact
that's been apparent since Plato sat on Socrates' knee:  Education depends
on the intimate contact between a good teacher -- part performer, part
dictator, part cajoler -- and an inquiring student".  In the end, Kindel
added, "it is the poor who will be chained to the computer; the rich will
get teachers".

** "The computer industry [Oppenheimer writes] has managed to survive on
such a plethora of hype, habituating all of us to accept such a string of
unfulfilled promises that we've long since lost the ability to see what
new inventions really can and cannot do.  Schools as a result have become
industry's research-and-development labs as well as its dumping ground --
while asking very little in return".

** We are self-deluded if we delight in how computers enable students to
"take over" their own education.  "The students aren't taking over in most
of these classrooms.  The computer is".  It's hard for some observers to
recognize that "downloading a captivating live software applet from a NASA
site, which some web designer has loaded with a few earnest questions to
satisfy somebody's grant requirements, does not a satisfactory lesson
make.  Nor does simply writing a paper about this material, based on some
extra Internet 'research'".  As Ken Komoski, the director of an
educational-products watchdog group puts it, when a boy turns in a paper
today, "how would you know if he knows anything until you talk?"

** One study after another has shown that "whenever tutoring is matched
against some competing pedagogy, including technology, tutoring wins
handily".  In one "gold-standard" study, tutored students were found to
outdistance 98 percent of those taught in conventional group instruction.

** "With only a few exceptions, computer technology has become one more
feature on an already crowded landscape of high-stimulus consumer items --
TV, video games, pop music, action films, high-caffeine coffee shops on
every urban corner, the list goes on and on.  The primary function of that
topography is to keep people buying; a side effect is that it keeps people
perpetually hyped up and distracted from activities that might be more
soothing and reflective.  We have become, in a sense, a society of
masochists.  We bemoan youngsters' turning to violence while pouring
millions into making suffering human beings the stuff of their
entertainment.  We criticize them for their poor self-discipline and short
attention spans; then our commercial enterprises do everything possible to
crowd and fragment their minds still further".

** For all the publicity value of the slogan, "leave no child behind", the
policies associated with the slogan oddly ignore the libraries of insight
now available to educators about what makes children excel.  "One would
think that the nation's policy makers, armed with this information, could
come up with something better than a lengthier sheet of multiple-choice
questions, millions of new test essays, and a corps of evaluators who
don't have the skill, or the time, to do their job".  Or, I would add, the
necessary familiarity with the individual student.

** Finally, our most inspiring moments in school almost always turn out to
have revolved around great teachers -- a fact educators have recognized
since the beginning of formal education -- "yet in some bizarre act of
cultural sadomasochism, we continually pretend it isn't true.  We let
teachers twist in the breeze seemingly forever.  For decades, we have
taken people whom we hold responsible for the intellectual and moral
development of our children, put them in chaotic, overcrowded
institutions, robbed them of creative freedom and new opportunities for
their own learning, imposed an ever-changing stream of rules and
performance requirements that leave them exhausted and hopeless, and paid
them about $40,000 a year for their trouble -- far less, proportionately
speaking, than teachers earn in most other industrialized societies".

Related articles:

See the articles listed under the "Education and computers" heading in the
NetFuture topical index.


Goto table of contents


                       THE VANISHING WORLD-MACHINE
                   Habits of the Technological Mind #2

                            Stephen L. Talbott

During the Renaissance and scientific revolution -- so the conventional
story runs -- our ancestors began for the first time to see the world.
For inquirers such as Alberti, Columbus, Da Vinci, Gilbert, Galileo, and
Newton it was as if a veil had fallen away.  Instead of seeking wisdom in
a spiritual realm or in appeals to authority or in the complex mazes of
medieval ratiocination, the great figures at the dawn of the modern era
chose to look at the world for themselves and record its testimony.  It
was an exhilarating time, when the world stood fresh and open before them,
ripe for discovery.  And they quickly discovered that certain questions
could be answered in a satisfyingly precise, demonstrable, and
incontestable way.  They lost interest in asking how many angels can dance
on the head of a pin and turned their attention to the pin itself as a
physical phenomenon available for investigation.

There is some truth in this rather-too-neat view of the past -- a truth
that makes the central fact of our own era all the more astonishing:  as
scientific inquirers, we have shown ourselves increasingly content to
disregard the world around us.  Judging from the dominant, well-funded
scientific and technical ventures, we much prefer to navigate our own
arcane labyrinths of abstract ratiocination, whether they consist of the
infinitely refined logic we impress upon silicon, or the physicist's
esoteric classificational systems for subatomic particles, or the
universe-spanning equations of the cosmologists.  It's true that we no
longer ask, "How many angels can dance on the head of a pin?" but we are
certainly entranced with the question, "How much data can we store on the
head of a pin?"  And our trance is only deepened when the answer turns out
to be: "a hell of a lot".

What many haven't realized yet is how easily our preoccupation with the
invisible constructions on that pinhead blinds us to the world we
originally set out to perceive and understand in its full material glory.
The alluring data, fully as much as any dancing angel, distracts us from
more mundane realities.  We have, as a result, been learning to ignore as
vulgar or profane the "crude" content of our senses.  This content may be
useful for occasional poetic excursions, but it is only a base temptation
for the properly ascetic student of science, who moves within a more
rarefied, mathematical atmosphere.  "It must be admitted", remarked the
British historian of consciousness, Owen Barfield,

   that the matter dealt with by the established sciences is coming to be
   composed less and less of actual observations, more and more of such
   things as pointer-readings on dials, the same pointer-readings arranged
   by electronic computers, inferences from inferences, higher
   mathematical formulae and other recondite abstractions.  Yet modern
   science began with a turning away from abstract cerebration to
   objective observation!  (1963)

It is not that we lack all interest in the material world, but only that
our interest is of a peculiar, one-sided sort.  Surrounded by the
remarkable physical machinery bequeathed to us by science, we are more
concerned with manipulating the world than with seeing it profoundly.  In
fact, the distinction between seeing and manipulating scarcely even
registers within science any longer.  Philosopher Daniel Dennett tells us
that the proper discipline of biology "is not just like engineering; it is
engineering.  It is the study of functional mechanisms, their design,
construction, and operation" (1995, p. 228).

It may seem counterintuitive at first, but I will argue that our
preoccupation with workable mechanisms, far from contradicting our
preference for abstract cerebration, is itself a primary symptom of our
flight into abstraction and our refusal to see the world.  And there is
every reason to believe that our failure to interest ourselves in seeing
and understanding the observable world (as opposed to manipulating it like
so much gadgetry) is as fateful for our knowledge enterprises today as
their own abstractions were for the inquiries of our medieval forebears.
One consequence of our failure is that we have felt justified in
substituting sadly inadequate mechanistic models for the world we no
longer bother to observe.

On Making a Game of Life

There is a computer program called the Game of Life.  The program divides
your computer screen into a fine-meshed rectangular grid, wherein each
tiny cell can be either bright or dark, on or off, "alive" or "dead".  The
idea is to start with an initial configuration of bright or live cells and
then, with each tick of the clock, see how the configuration changes as
these simple rules are applied:

** If exactly two of a cell's eight immediate neighbors are alive at the
   clock tick ending one interval, the cell will remain in its current
   state (alive or dead) during the next interval.

** If exactly three of a cell's immediate neighbors are alive, the cell
   will be alive during the next interval regardless of its current state.

** And in all other cases -- that is, if fewer than two or more than three
   of the neighbors are alive -- the cell will be dead during the next
   interval.

You can, then (as the usual advice goes) think of a cell as dying from
loneliness if too few of its neighbors are alive, and dying from over-
crowding if too many of them are alive.

What intrigues many researchers is the fact that, given well-selected
initial configurations, fascinating patterns are produced as the program
unfolds.  Some of these patterns remain stable or even reproduce
themselves endlessly.  Investigations of such "behavior" have led to the
new discipline known as "artificial life".
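
The three-part rule described above is easily rendered as code.  Here is a
minimal sketch in Python; the set-of-live-cells representation and the
names "neighbors" and "step" are my own illustration, not part of any
standard version of the program:

```python
# A minimal sketch of the Game of Life rules described above.

def neighbors(cell):
    """The eight cells immediately surrounding a given (x, y) cell."""
    x, y = cell
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def step(live):
    """Apply the three-part rule for one tick of the clock."""
    # Only currently live cells and their neighbors can be alive
    # during the next interval.
    candidates = set(live)
    for cell in live:
        candidates.update(neighbors(cell))
    next_live = set()
    for cell in candidates:
        n = sum(1 for nb in neighbors(cell) if nb in live)
        if n == 3:                     # exactly three: alive regardless
            next_live.add(cell)
        elif n == 2 and cell in live:  # exactly two: keep current state
            next_live.add(cell)
        # fewer than two or more than three: dead next interval
    return next_live

# A "blinker" -- three live cells in a row -- oscillates between a
# horizontal and a vertical bar with each tick, one of the simplest
# of the stable, repeating patterns mentioned above.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))   # [(1, 0), (1, 1), (1, 2)]
```

Note that these few lines specify the algorithm completely; everything
about the physical computer that executes them is left unsaid.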

Referring to the Game of Life and the three-part rule governing its
performance, Dennett has remarked that "the entire physics of the Life
world is captured in that single, unexceptioned law".  As a result, in the
Life world "our powers of prediction are perfect: there is no
[statistical] noise, no uncertainty, no probability less than one".  The
Life world "perfectly instantiates the determinism made famous by Laplace:
if we are given the state description of this world at an instant, we
observers can perfectly predict the future instants by the simple
application of our one law of physics" (Dennett 1995, pp. 167-69).

These are startlingly errant statements from one of the most influential
philosophers of our day.  The three-part rule, after all, is hardly a law
of physics.  It is an algorithm -- roughly, a program or precise
recipe -- and its deterministic, Laplacian perfection holds true only so
long as we remain within the perfectly abstract realm of the algorithm's
crystalline logical structure.  Try to embody this structure in any
particular stuff of the world, and its perfection suddenly vanishes.  For
example, if you install it in a running computer, you can be absolutely
sure that the algorithm will fail at some point, if not because of spilled
coffee or a power failure or an operating system glitch, then because of
normal wear and tear on the computer over time.  Contrary to Dennett's
claim, you will find in every physical implementation of this
algorithm that there is noise, there is uncertainty, and no probability
is equal to one.

Dennett's comments about the Game of Life illustrate how the world can
disappear behind a grid of abstractions.  He is so transfixed by the
logical perfection of the algorithm that he loses sight of the distinction
between it and the real stuff that happens to embody it.  With scarcely a
thought he shifts in imagination from disembodied rules to physics -- a
move made easy by the fact that his physics is essentially a mere
reification of the rules.  This carries huge implications.  If, as he
tells us, biology is engineering, and if the devices we engineer are
nothing in essence but their algorithms, then real dogs, rocks, trees, and
people dissolve into a fog of well-behaved, abstract mentation.

While it is not the topic of this essay, the enveloping and thickening fog
of abstraction is evident on every hand.  Look, for example, at almost any
branch of the public discourse and you will find that its subjects -- the
elderly, the sick, victims of war, soldiers, political leaders,
terrorists, corporate CEOs, wilderness areas, oil wells, fetuses, doctors,
voters -- appear only as generalized debating tokens torn loose from
complex, full-fleshed reality.  Their assigned place in an established
logic of discourse is almost all that matters.  The public discussion then
becomes nearly as lifeless and predictable as the Game of Life.

The Externality of Machine Algorithms

We can, I believe, learn a great deal about certain tendencies of science
and society by looking more deeply into this symptomatic disappearance of
the material world into abstraction -- a disappearance that will seem as
strange to our successors as medieval attempts to understand motion by
ruminating over Aristotelian texts now seem to us.

It will help, in understanding these tendencies, to grasp as clearly as
possible the relation between an algorithm, such as the one embodied in
the Game of Life program, and the machine executing it.  And the first
thing to say is that the algorithm really is there -- in the machine (so
long as it is working properly) and therefore in the world.  Which is to
say:  we can articulate the parts of a machine so that, when viewed at
an appropriate level of abstraction, they "obey" and manifest the rules of
the Game of Life.

But it is crucial to see the external and nonessential nature of these
rules.  Yes, they are embodied in the machine -- but only in a rather
high-level and abstract sense.  The rules are not intrinsic to the
machine.  That is, they are not necessary laws of the copper, silicon,
plastic, and other materials.  To see the rules we almost have to blind
ourselves to the particular character of these materials -- materials that
could, in fact, be very different without altering the logic of
arrangement we are interested in.

In other words, the determining idea of the machine as a humanly designed
artifact is something we impose upon it "from the outside"; there is
nothing inherent in copper, silicon, and plastic that dictates or urges or
even suggests their assembly into a computer.  We had to have the idea,
and we had to bring it to bear upon the materials through their proper
arrangement.  The functional idea of the computer abides in this
arrangement, and will be there only as long as the arrangement holds.

This external relation between the material machine and the logic of the
idea imposed upon it explains a double disconnection.  On the one hand,
the logic fails to characterize fully the material entity it is associated
with.  We can construct computers out of vastly different materials and
still see exactly the same rule-following when they execute the Game of
Life.  The game's algorithm leaves its embodiment radically undefined (or
underdetermined).

But just as differently constructed physical machines can, at a certain
level of abstraction, follow the same rules, so, too, the same machine can
be made to follow different rules.  This is obvious enough in the case of
a computer, which can execute entirely different software algorithms.  But
it remains true more generally:  when a new context arises, an existing
piece of technology may become a tool for a previously unforeseen function
-- as when, trivially, the handle of a screwdriver is used to hammer in a
tack, or when a typewriter's alphanumeric keyboard serves to construct
graphic images rather than text.  The underlying artifact remains
unchanged even as the rules of its use and its meaning as a tool change.

So we see that machines of completely different materials and
configuration can serve the same function, and a machine of given
materials and configuration can serve different functions.  The functional
idea, then -- whether it is a computer algorithm or the cleaning procedure
of a washing machine -- is by no means equivalent to a full understanding
of the machine as a part of the material world.  The parts of the machine
present us with a physical reality we are able to employ in a mechanical
construction, but our employment of them does not explain the physical
reality, nor does the physical reality explain the employment.

There is some irony in all this.  To recognize a governing idea externally
imposed upon the parts of a machine through the manner of their
arrangement is to grant an irreducibly human and subjective element in
every machine as a machine.  As Michael Polanyi remarked many years ago,
a knowledge of physics and chemistry can never tell us what a machine is
(1962, pp. 328-35).  For such an understanding we have to know (among
other things) something about the human context in which it will operate,
the human purposes it was designed to serve, and the particular functional
idea that guided the builders in coordinating the machine's physical
parts.

So while mechanistic thinkers profess a great fondness for objectivity,
which they interpret to mean "freedom from human influence", their
predilection for machine-based explanations marries them to human-
centered, designer-centered modes of thought.  In fact, for all their
tough-mindedness, it is they who indulge an unhealthy anthropocentrism.
The world, after all, is not a humanly designed machine.  Whatever
material principles we summon to account for the phenomena we observe,
they will fail in the accounting if they go no deeper than the mechanistic
principles we impose externally and abstractly upon pre-existing matter.

Does Mathematics Alone Give Us the World's Essence?

As I have already suggested, one indication of our tendency to ignore the
observable world lies in the force of our temptation, following Dennett,
to separate the machine's algorithm from the machine itself and then allow
the former to overshadow the latter.  The temptation is no small matter,
given the overwhelming commitment to machine-like explanations within
mainstream science.  For a mechanistic science, the machine's reduction to
an abstraction is the world's reduction to an abstraction.

Many are eager for the reduction.  Peter Cochrane, former head of research
at British Telecom, believes "it may turn out that it is sufficient to
regard all of life as no more than patterns of order that can replicate
and reproduce" (undated).  When Cochrane says "no more than patterns of
order", he seems content to let the substance manifesting these patterns
fall completely out of the picture.

Likewise, robotics guru Hans Moravec describes the essence of a person as
"the pattern and the process going on in my head and body, not the
machinery supporting that process" (quoted in Rubin 2003, p. 92).  And
Christopher Langton, founder of the discipline of artificial life, has
surmised that "life isn't just like a computation, in the sense of
being a property of the organization rather than the molecules. Life
literally is a computation" (quoted in Waldrop 1992, p. 280).

Could there be a clearer attempt to dissociate the world's essence from
sense-perceptible matter?  Pattern, algorithm, computation -- these
formal, mathematically describable abstractions are made to stand alone as
self-sufficient explanations of reality.  Economist Brian Arthur captures
a sentiment widespread within all the sciences when he remarks that to
mathematize something is to "distill its essence".  And if you've got the
essence, why bother looking for anything more?

The derivation of mathematical relationships can indeed be valuable.
But to leave the matter there -- and nearly all those who share Arthur's
sentiment about the mathematical essence of things do leave the matter
there -- is to reveal a blind spot at least as gaping as any irrational
lacuna in the thought of our ancestors.  After all, mathematics in
its purely formal exactness tells us nothing at all about the material
world until it is brought into relation with the stuff of this world.
And we can establish this problematic relation -- we can have more than
our pure mathematics in mental isolation -- only by understanding the
non-mathematical terms of the relation.  Mathematics alone cannot tell
us what the mathematics is being applied to.

This is already evident with simple, quantitative statements.  It's one
thing to say "5" and quite another to say "5 pounds of force" or "5 pounds
of mass".  In the latter case, while we may take comfort from the
conciseness of "5", we're now also up against the conceptual darkness of
"force" and "matter".  The number, however exact, can illuminate the
material world only to the degree we know what we mean by "force" and
"matter" -- terms that have vexed every scientist who ever dared to think
about them.  What physicist Richard Feynman said about energy is true of
many other fundamental scientific concepts as well:  "we have no knowledge
of what energy is" (Feynman, Leighton, and Sands 1963, p. 4-2).
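
The gulf between the bare "5" and "5 pounds of force" can even be put in
code.  In the following hypothetical sketch (the Quantity class and its
dimension label are my own illustration, not any real units library), the
formalism can compare and combine magnitudes with perfect exactness, yet
what "force" or "mass" is survives only as an uninterpreted label:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    """A magnitude tagged with an uninterpreted dimension label."""
    magnitude: float
    dimension: str   # "force", "mass", ... -- a name, not an explanation

    def __add__(self, other):
        # The formalism can enforce that dimensions match, but it has
        # no idea what the labels mean.
        if self.dimension != other.dimension:
            raise ValueError(
                "cannot add %s to %s" % (self.dimension, other.dimension))
        return Quantity(self.magnitude + other.magnitude, self.dimension)

force = Quantity(5, "force")
mass = Quantity(5, "mass")
print(force.magnitude == mass.magnitude)   # True: the same exact "5"
# force + mass raises ValueError: everything that distinguishes the two
# quantities lives outside the mathematics, in the labels.
```

The arithmetic here is flawless; the conceptual darkness of "force" and
"mass" is untouched by it.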

To ignore the darkness in key terms of our science -- to claim that
mathematics gives us the essence of things when we can't even say what the
things are and we have no non-mathematical language adequate to them -- is
to be no less in the grip of nonsense than were those medieval thinkers
who were content to explain the character of gold by appealing to an
occult quality of "goldness".  Mathematics, as a self-contained, "essence-
yielding" discipline, offers a pseudo-explanation no more helpful than the
occult quality.  It does no more good for us merely to assume we know what
the mathematics is being applied to than it did for our predecessors to
assume they knew what goldness was.

An elementary point, you might think.  But the scientist and engineer have
shown a powerful tendency to conceive their desired mechanisms through an
ever more disciplined focus upon mathematics, algorithm, and software,
blithely inattentive to the character of the world the experimental
apparatus is coercing into algorithmic "obedience".  It is true (and a
remarkable accomplishment) that, unlike our medieval forerunners, we have
perfected a method for obtaining workable devices.  In fact, many such as
Daniel Dennett ("biology is engineering") would more or less collapse our
entire project of understanding into a single-minded pursuit of workable
devices.  But in doing so they increasingly attend, in their
anthropocentric way, only to the sort of clean, mathematical structure
they temporarily manage to impose upon these devices at a high level of
abstraction.  That is, they are interested in seeing only machines, and in
seeing machines only as manageable abstractions.

Accordingly, the device itself, as physical phenomenon, recedes from view
while the immaterial logic we have associated with it consumes our
attention.  In our quest for understanding we have become obsessed with
the equivalent of angelic hosts bearing data in a timeless algorithmic
dance, and whether the dance proceeds on the head of a pin or along
silicon pathways or within the deeply worn logical grooves of our own
minds hardly matters.  The dancers are, from our preferred point of view,
pure and chaste, insubstantial, uncontaminated by gross matter.  They give
us a kind of otherworldly "physics", as Dennett claimed, free of noise and
uncertainty.

The only way for us to break the hypnotic spell of our own abstract
cerebrations is to open our senses again so as to re-experience the world
we have been fleeing, much as our ancestors of several centuries ago broke
the medieval spell and looked out at a new world.  But just as it took
those earlier pioneers centuries to understand what breaking free really
meant -- medieval thought habits persisted even in Newton -- so, too, it
may require a long time for us to escape the trance of mechanistic thought
and begin to recognize the living qualities of the substances we have
learned to ignore behind a veil of precisely behaved abstractions.

Looking Ahead

It is time to pause and ask where these prefatory remarks might lead us in
examining mechanistic science and its technological foundations.  I offer
here a bare statement of several theses as a kind of prospectus for future
articles in this series.

First, as we have begun to see, the resort to mathematical formalism,
whether it is a formalism of equations, rules, logic, or algorithms, is
inadequate even to explain a machine.  The explanatory logic -- conceived
by us and imposed upon the machine -- relates to our purposes and
operations, and, as a genuine lawfulness, remains in a sense external to
the actual substance of the machine.  If our science is a science of such
formalisms governing a world-machine, then it cannot give us any full
understanding of this world-machine.

Second, nothing in the natural world -- including inanimate nature -- is
machinelike if by "machine" we refer to the human artifacts we usually
call by this name.  The governing idea of a machine is imposed upon it by
a designer through a proper arrangement of parts; the idea is not
intrinsic to the parts, not demanded by them, not the necessary expression
of their existence.

Nature, on the other hand, has no designer -- at least not one of this
external sort.  We cannot think of laws on one side and a substance
obeying or "instantiating" the laws on the other.  Rather, the laws belong
to the substance itself as an expression of its essential character.  The
lawfulness of a machine is in part a cultural artifact; the lawfulness of
the physical world is through and through the intrinsic expression of its
own being.  And we can understand this being only to the degree we
penetrate and illuminate the more or less opaque terms of our science --
"force", "mass", "energy", and all the rest.

Third, the immanent mathematical lawfulness we do discover in the natural
world is never the law or the explanation of whatever transpires in the
world.  It is merely an implicit aspect of the substantive (but largely
ignored) phenomenal reality that must be there in order for there to be
something, some worldly process, that exhibits the given mathematical
character.  The relation between the mathematical character and the
reality in which it is found is the relation between syntax and
semantics.  We see this same relation between the formal grammar and the
meaning of our speech.  The grammar is an implicit lawfulness.  But, given
the grammatical structure alone, we cannot know the meaningful content of
the speech.  The structure is abstracted from this content, leaving
behind much of what matters most.

This, I believe, is a crucial point, deserving a great deal of elaboration
(which I will in the near future try to provide).

Fourth (and finally), many readers will by now be yelling at me in their
minds:  "You fool!  Pay attention to different levels of description!"
What they are getting at is something like this:

   Of course rules such as those defining the Game of Life are
   inadequate to provide a complete explanation of the machine executing
   them.  The rules describe the machine only at a high level of
   abstraction.  But we can also provide descriptions of the same sort at
   progressively lower levels of abstraction, until finally we have
   described the fundamental particles constituting the machine.  All
   these descriptions together tell us everything we could possibly know
   about the machine, and also about the world.

But this appeal to descriptive levels fails utterly to bridge the gap
between mechanistic explanations and an adequate explication of the
world's lawfulness.  The problem is that the shortcomings of the
mechanistic style of explanation follow you all the way down.  If, when
you finally arrive at the particles, you try to describe them as if they
were little machines -- if, that is, you remain faithful to your
mechanistic convictions -- then your rules, algorithms, mathematics, and
logic explain no more about these particles than the Game of Life does
about the concrete machine it is running on.  But now you are at the
supposedly fundamental level of understanding, so the limitations here
apply to the entire edifice you have erected upon this foundation.

If you want a true understanding of the world's order, the crucial gap you
have to leap is not the gap between levels of description.  Rather, it is
the distance between unembodied mathematical, logical, and algorithmic
formalisms on the one hand, and the full content of the world these
formalisms are abstracted from on the other.  Highlighting this gap will
be one of my primary aims in forthcoming essays.

Summarizing, then:

** Mechanistic explanations do not even explain machines.

** The world is not in any case a machine.

** A mathematical regularity, or syntax, is implicit in the world's
   phenomena and can be said to explain the world no less and no
   more than the grammatical syntax of a speech explains the content of
   the speech.

** If, when appealing to a hierarchy of descriptive levels, we remain
   committed to mechanistic explanation, then the limitations of such
   explanation afflict us all the way down the hierarchy.

Those of you acquainted with the philosophy of science will recognize in
these statements implications of the most radical sort, even though so far
I have done little more than offer brief justification for the first
thesis of the four.  In the end we will find ourselves confronting, among
other things, an entirely new (or, rather, very old and forgotten) style
of explanation based on form.

We will also see the necessity for reversing the far-reaching decision
within science to ignore qualities.  This decision, if not reversed, must
lead ultimately to the disappearance of the world, which is not there
apart from its qualities -- and therefore it will lead also to the
annihilation of the science that began with such promise as a resolve to
reject levitated abstraction and observe the world.

Related articles:

"Flesh and Machines: The Mere Assertions of Rodney Brooks" in NF #146:

"Are Machines Living Things?" by Kevin Kelly and Stephen L. Talbott in NF #133:

"What Are the Right Questions?" by Kevin Kelly and Stephen L. Talbott in NF #136:

"Disconnect?" by Kevin Kelly and Stephen L. Talbott in NF #139:


Goto table of contents


                       ANNOUNCEMENTS AND RESOURCES


A second booklet is now available in the Nature Institute Perspectives
series.  Written by my colleague, Craig Holdrege, it is called The Flexible
Giant: Seeing the Elephant Whole.  This is another in his remarkable
series of "whole-organism studies".  You can obtain the attractive,
65-page booklet by sending $10 (which includes postage and handling) to
The Nature Institute, 20 May Hill Road, Ghent NY 12075.  Or send us your
credit card information by mail or fax (518-672-4270).  Or transmit your
payment through PayPal:

Holdrege's study is directly relevant to much that I have written in
NetFuture, for this reason:  it represents one researcher's approach to a
qualitative science, and the need for such a science (as an alternative to
mechanistic explanation) is what my own writing is all about.

The first book in the Perspectives series, entitled Extraordinary Lives:
Disability and Destiny in a Technological Age (drawn from NetFuture),
continues to be available as well, for the same price.


Goto table of contents


                          ABOUT THIS NEWSLETTER

Copyright 2003 by The Nature Institute.  You may redistribute this
newsletter for noncommercial purposes.  You may also redistribute
individual articles in their entirety, provided the NetFuture url and this
paragraph are attached.

NetFuture is supported by freely given reader contributions, and could not
survive without them.  For details and special offers, see .

Current and past issues of NetFuture are available on the Web:

To subscribe or unsubscribe to NetFuture:

Steve Talbott :: NetFuture #151 :: October 30, 2003

Goto table of contents

This issue of NetFuture: