Technology and Human Responsibility

Issue #159                                                December 7, 2004
                 A Publication of The Nature Institute
           Editor:  Stephen L. Talbott

                  On the Web:
     You may redistribute this newsletter for noncommercial purposes.

Can we take responsibility for technology, or must we sleepwalk
in submission to its inevitabilities?  NetFuture is a voice for
responsibility.  It depends on the generosity of those who support
its goals.  To make a contribution, click here.


Editor's Note

Quotes and Provocations
   Pianists and Video Game Players

Invisible Tools, Emotionally Supportive Pals, Or ... ? (Stephen L. Talbott)
   On putting machines into their place


About this newsletter


                              EDITOR'S NOTE

A reminder that The Nature Institute's website (new and improved version)
is now up and is being periodically updated:
All back issues of NetFuture are also accessible from the site.


Goto table of contents


                         QUOTES AND PROVOCATIONS

Pianists and Video Game Players

Raj Reddy, a Carnegie Mellon professor, is working on a low-cost,
wireless, all-in-one PC/television/DVD player/videophone for users in
developing countries.  One person who seems to like the idea is University
of California (Berkeley) administrator Tom Kalil.  "Entertainment is the
killer app[lication]", he says, "and that will smuggle something that is a
lot more sophisticated into the home".

You might well hesitate over the logic of this sentiment.  If the more
worthwhile and sophisticated things have so little apparent value that
they must be smuggled into people's lives under the cover of other stuff,
and if this other stuff is what possesses the killer appeal ... well, we
at least ought to wonder whether this is rather like handing someone a
plateful of chocolates with a stalk of broccoli on the side and then
saying, "Here, eat what you'd like.  (Ha ha.  We sure pulled a fast one on
him, didn't we?)"

Smuggling the good under cover of the questionable or, in some cases, the
downright despicable, has become one of the clichés of high-tech culture.
It is an extremely useful cliché, since there is scarcely any human
activity in which you cannot find (or invent) some redeeming value.  Just
the other day I saw a news item about a video game called "JFK: Reloaded",
which allows players to recreate and participate in the assassination of
President John F. Kennedy.  Faced with criticism, the company producing
the game called it an educational "docugame" that would "stimulate a
younger generation of players to take an interest in this fascinating
episode of American history".  Similarly, I suppose you could describe
the older game where drivers score points by running over pedestrians
as a social policy game stimulating interest in issues of public safety.

But my concern at the moment is with what may be the single most common
illustration of the cliché.  However disgusting the video game, we
almost always hear that "at least it improves hand-eye coordination".  I
have no idea where this culture-wide fixation on hand-eye coordination
comes from.  It is, at the very least, odd, given that any healthy
childhood -- indeed, almost anything a child might naturally want to do
(before his instincts have been deadened by technology) -- will lead
toward proper hand-eye coordination.  And, regarding the child glued to a
video screen, why aren't we also concerned about leg coordination?  Or
about whole-body coordination?

But another issue is, for me, the decisive one.  Physically coordinated
performance becomes admirable in the fullest sense only to the degree it
is caught up within a higher expressive purpose.  Without this purpose, we
have only a descent toward the automatic and reflexive -- in other words,
toward the machine-like.  By means of a higher aim, on the other hand
(think of the physical dexterity and artistic achievement of the gymnast,
dancer, and instrumentalist), the physical skills are ennobled.  They rise
from the merely effective to the beautiful.

People often suggest that the manual skills gained from video games are
not unlike those required by the piano player.  The comparison can be
revealing.  Certainly muscular training and coordination are essential to
the pianist.  Even something rather like automatic and reflexive behavior
is required.  It would be impossible to play if the artist had to direct
the movement of each finger consciously.

However, this, too, is a rather shallow cliché.  The pianist does in fact
direct each movement consciously.  While a kind of lower-level, muscular
memory is very much at work, every motion of the fingers adapts, however
subtly, to the artistic intention of the moment.  This intention may be
quite different from what it was in the last performance, depending on the
setting, the audience, the performer's mood, and so on.  So the level we
like to think of as automatic is continually being disciplined and shaped
from a higher, artistic level. This power of shaping constitutes the real
mastery of the pianist.

The difference between the piano and the shoot-em-up video game is that,
for the most part, the latter trains our reflexes to operate independently
of our higher, more artistic sensibilities.  The aim is merely to maximize
a score or otherwise to win.  Where the pianist is pursuing a sense of a
coherent whole and is trying to produce an esthetically unified
performance, the video game player is simply responding to one damned
thing after another.  Bodily grace and expressive content hardly figure
into the picture as conscious goals -- although I suspect there are few
imaginable activities where the truly superb performer is not required to
develop some aspects of grace.

But perhaps it is enough to ask yourself:  going into an open-heart
operation, would you rather hear that your surgeon is a champion video
game player or an accomplished pianist?  Well, no, I take that back.
Given the mechanistic images that increasingly influence our understanding
of every aspect of the world, including medicine, I can almost guarantee
you that many people would think the video game player likely to be more
competent -- this despite the fact that, as we saw in NF #157, the heart
is itself a musical instrument.

All this, by the way, bears on a science born of technology.  Looking at a
world whose nature is as far removed from mechanism as it could possibly
be -- a world of streams and trees and clouds -- it seems we can do
nothing better than imagine infinitesimal mechanisms behind the scenes
while we ignore the higher, expressive gesturing that gives rise to,
disciplines, and masters whatever else is going on.  We can, of course,
say that in our search for mechanisms we are "being rigorous and
quantitative".  And it's true that a concert-goer adopting such a stance
might become wonderfully precise in measuring the pianist's intervals,
pitches, tempos, and dynamic changes.  But he would miss out badly if he
mistook this disjointed data for the music.

That the world is full of music no one would deny -- no one, that is, who
is not busy philosophizing.  Watch a sunset, sit beside a stream, or
wander through a field and something in you will acknowledge the music
whether you wish it or not.  The mechanistic stance in science grew, not
from an original conviction that nature is not an artist, but rather from
the choice to attend to other things.  The measurable parameters of
nature's performance became the sole concern.  It is not surprising that,
after a few centuries of this single-minded choice, the philosophical
conviction emerged that the music is some sort of human illusion or
invention -- that nature is less an artist than a video game engineer, and
that everything going on amounts to little more than one damned thing
after another, without esthetic unity, without feeling, without meaningful
expression.

This conviction, however, is the true illusion, and I doubt whether those
raised from childhood on video games -- however wondrous their hand-eye
coordination -- have anywhere near as good a chance of escaping the
illusion as artists do.

Related article:

"The Heart's Song" in NF #157:

"The Reality of Appearances" in NF #119:


Goto table of contents



             INVISIBLE TOOLS, EMOTIONALLY SUPPORTIVE PALS, OR ... ?

                            Stephen L. Talbott

Try juxtaposing these two thoughts:

** Researchers are telling us that, emotionally and intellectually, we
   respond more and more to digital machines as if they were people.  Such
   machines, they say, ought to be designed so as to be emotionally
   supportive.  ("Good morning, John.  You seem a little down today.
   Bummer.")  This is thought to be quite reasonable, since in any case
   machines are becoming ever more human-like in their capabilities.

** The common advice from other human-computer interface experts is that
   we should design computers and applications so that they become
   invisible -- transparent to the various aims and activities we are
   pursuing.  They shouldn't get in our way.  For example, if we are
   writing, we should be able to concentrate on our writing without having
   to divert attention to the separate requirements of the word processor.

The conjunction here is slightly odd.  Treat machines like people, but
make them invisible if possible?  Combining the two ideals wouldn't say
much for our view of people.  It sounds as though we're traveling down two
rather different tracks.  And, in the context of current thinking about
computers, neither of them looks particularly healthy to me.  But perhaps
they can help us to explore the territory, leading us eventually to a
richer and more satisfactory assessment of the human-machine relationship.

We Need to Recognize Our Own Assumptions

Surely there is something right about Ben Shneiderman's advice when, in a
promotional interview for his book, Leonardo's Laptop, he states:

   Everyone needs to be alert to the harmful aspects of technology, so
   that designers produce truly elegant products that facilitate rather
   than disrupt .... Effective technologies often become "invisible"....

Who would prefer disruption to invisibility?  But then, invisibility
itself is also problematic.  As information technologies become ever more
sophisticated reflections of our own intelligence, it seems fair to say
that our thoughts and assumptions get built into them in an increasingly
powerful way.  Their whole purpose, after all, is to embody our own
contrivings.  So we meet ourselves in our machines -- which is already
reason enough for caution!  Do we really want all those strivings and
contrivings -- all those thoughts and assumptions someone has cleverly
etched into the hardware and software we are using -- to remain invisible?
When employing a search engine to sift through news items, should we be
content to remain ignorant about the criteria, commercial or otherwise,
determining the engine's presentation of hits?

A vital necessity for all of us today is to remain conscious of the
assumptions and unseen factors driving our thoughts and activity.  To give
up on this is to give up on ourselves and to hand society over to unruly
hidden drives.  But if we must remain conscious of our own assumptions, it
can hardly be less important to prevent others from surreptitiously
planting their assumptions in us.  Granting (simplistically for the
moment) that we are in some sort of conversation with intelligent
machines, it seems only natural that we would want to keep in view our
conversational partner's contribution to the dialogue.  The alternative
would be for the machine to influence or control us beneath the threshold
of awareness.

Keeping the other person (or thing) in view disallows invisibility as a
general ideal.  In human exchange we certainly hope the other person's
presence will not prove downright disruptive.  But in any worthwhile
friendship neither do we want the friend simply to disappear.  And we can
be sure that, at one point or another, the requirements of friendship
will move us disturbingly out of our path.  I cannot enjoy the meanings
a friend brings into my life without risking the likelihood that some of
these meanings will collide with my own.  If computers are like people, I
can hardly expect, or even want, to escape the unsettling demands they
will impose upon me to rise above myself.  Thankfully, true friends can on
occasion be disrupters of the worst sort.

Complementary Errors

But are computers like people?  I have already suggested that they
embody many of our assumptions, and here I have now been drawing an
analogy between human-computer interactions and person-to-person
friendships.  Does this mean I buy into the first view stated at the
outset -- the view that it is natural for us to respond emotionally and
intellectually to intelligent machines as if they were persons?

Not at all.  If we cannot accept the ideal of machine-invisibility in any
absolute sense, neither can we accept the ideal of machine-personality.
The problem with both ideals, at root, comes from the same source:  a
failure to reckon adequately with the computer as a human expression.  The
two ideals simply err from opposite and complementary sides:  a striving
for invisibility encourages dangerous neglect of the tendentious
expressive content we have vested in the machine; on the other hand,
trying to make the machine itself into a person mistakes the
machine-as-an-expression for the humans who have done the expressing.

I am convinced that rising above these complementary errors would
strikingly transform the discipline of artificial intelligence, not to
mention the entire character of a machine-based society.

I have commented before (NF #148) that the world is full of human
expressions that are, in part, manifestations of intelligence.  The
intelligence is really there, objectively, in our artifacts -- in the
sound waves uttered from our larynxes, in the pages of text we write, in
the structure and operation of a loom, automobile, or computer.  It is
impossible to doubt the objectivity, given that anyone who attends to
these artifacts can to one degree or another decipher the intelligence
that was first spoken into them.  We do this all the time when we read a
book.  That's just the nature of the world through and through:  it is
receptive to, and a bearer of, intelligence.

But (as I have also pointed out), it is nonsense to mistake the artifact
for the artificer, or the intelligence spoken into the world as product
for the speaking as power.  The endemic preoccupation with the question
whether computers are capable of human-like intelligence is one
manifestation of this nonsense.  But if we are willing to step back from
this preoccupation and look at the computer in its full human context,
then we can gain a much more profound appreciation of its intelligence.
At the same time, such a contextual approach can guide us toward a more
balanced view of the human-machine relationship.

The Computer in Context

When, instead of trying vainly to coax signs of life from the computer as
a detached and self-subsistent piece of machinery, we examine it as an
expression of living beings, then immediately our flat, two-dimensional
picture of it becomes vibrant and vital.  We see analysts reconsidering
almost every human activity, asking what is essential about it and
imagining how it might be assisted or even transformed by the elaborate
structuring potential of digital devices.  We see designers and engineers
applying their ingenuity to achieve the most adequate implementation of
the newly conceived tools.  And we see consumers and employees struggling
to use or not use the devices they are handed, weighing how to adapt them
to their own needs, perhaps even sabotaging them in service of higher
purposes.

All this is, or at least can be, creative activity of the highest sort.
But preserving the creative element depends precisely on our not viewing
the computer as a merely given and independent reality.  For the irony is
that only when viewed as making an independent contribution does it become
an absolutely dead weight, and therefore a wholly negative factor in human
society.  Removed from the context of continual design and redesign, use
and re-imagined use, sabotage and re-invention, it presents us with
nothing but a mechanically fixed and therefore limiting syntax.  To
celebrate the machine in its own right is like celebrating the alphabet,
or the ink on the page, or the grammatical structure of a great literary
text, rather than the human expression they are all caught up in.

It may seem odd to cite the computer's "fixed and limiting syntax", given
the complex and infinitely refined elaboration of logic constituting this
syntax. But that's just the problem.  We find in every domain of life that
an elaborate and successful structuring of the conditions of life is not
only the glorious achievement of past effort, but also the chief obstacle
to future effort.  All life is a continual development, a maturing, an
evolution, an overcoming or transformation of what is given from the past.

Owen Barfield is referring to this problem in connection with the renewal
of the expressive power of language when he writes,

   Thus, it so often comes about that the fame of great poets is posthumous
   only.  They have, as Shelley said, to create the taste by which they are
   appreciated; and by the time they have done so, the choice of words, the
   new meaning and manner of speech which they have brought in must, by
   the nature of things, be itself growing heavier and heavier, hanging
   like a millstone of authority round the neck of free expression.  We
   have but to substitute dogma for literature, and we find the same
   endless antagonism between prophet and priest.  How shall the hard rind
   not hate and detest the unembodied life that is cracking it from within?
   How shall the mother not feel pain?  (Poetic Diction, chapter 10)

And how shall the corporate reformer not despise the stewards of legacy
software!  This problem only becomes greater as the inexorable
drive toward interlocking global standards gains momentum.

The attempt to find a principle of life within the computer as such,
detached from its human context, is damaging precisely because the machine
itself is almost nothing but the hard rind in need of cracking.  The
continual process of living renewal must come from us, and from our
commitment, as designers and users, to transform the rigid syntax we
have received from the "dead hand of the past".  We rightly strive for
flexible software, but there remains a crucial sense in which every
piece of software, once achieved, becomes a dead weight.

A Mechanical or Human System?

I realize that many will at this point want to press the case for software
that learns or adapts or evolves.  That this "learning" is one of the most
strained metaphors of all time is something that must be left for another
article.  At the moment I can only barely mention the essential limitation
of the fixed syntax that governs whatever sort of learning is claimed to
be going on.  There is, in every digital machine, a top, or outer, level
of syntax defining and limiting what this particular machine is.  When
that level changes, we don't have learning or evolving; we have a
different machine.

This is quite unlike a living process, whose organic nature subverts the
very idea of a top level of design in the mechanistic sense.  Try
identifying discrete levels of design in any organism, and you will
quickly see the futility of it.  All is mutual participation and
interpenetration.  The transformation from within experienced by every
organism is transformation that leaves no level of physical structure or
process wholly exempt.  A computer does not bear its own principle of life
within itself in this sense.

I am not saying that the syntactic flexibility we achieve in our computer
programs is unimportant.  This achievement is an essential part of our
striving to express something living.  But the striving and expressing are
our striving and expressing, and the living result is found in the way
we live with and employ the things we have made.  It is always destructive
to become fixated upon the capabilities of the machine, as if they
themselves could be an ultimate source of personal or social renewal.

The mechanical flexibility we typically aim for is in fact a flexibility
in human-machine interaction.  Remove the machine from this interactive
context, and its inflexibility immediately declares itself.  We can
program all the options and alternative pathways we want into a machine,
but if it is sitting unattended in some warehouse, endlessly spewing
out the prompt, "Please select one of the following options", until its
power runs down, the flexibility we programmed into it will not be very
much in evidence.  What we mostly aim for when we program computers for
flexibility is to give them a syntax that allows us to be as flexible as
possible in our interactions with them.

Of course, this remark immediately gets the engineer thinking about how
the computer might be redesigned.  It could, for example, respond to
decreasing power by moving around to find an electrical outlet it can plug
into, and it might scan the radio spectrum for network connections -- and
so on without end.  This "without end", in fact, is what so impresses the
engineer who is thinking about the human-like potentials of the computer.

It always amazes me to see how difficult it is for such people to
recognize their own ongoing contribution to the computer's endless
development, and to distinguish this properly from the completed results
of their effort at any particular moment.  For all the sophistication of
the systems analysis going into mechanical systems, we seem unable to
mount any reasonable analysis of the human-machine system, except by
reducing the human being to a mechanical element of the system.

A much more fruitful approach would be to consider the machine within its
human context.  In this way we would elevate the machine, not through the
crazy imputation of emotions and thoughts to it, but rather through the
recognition that our conversation with the machine is, in the end, a
conversation with ourselves -- just as we converse with ourselves (and not
in any primary sense with paper and ink) when we read a text.

If we would truly raise the machine to our level rather than reconceive
ourselves in its terms, then we might be more naturally inclined to
ennoble our conversation with it.  We would do this, for example, by
shaping the computer's outer form with the sensitivity of a sculptor, and
by deriving its frozen, internal logic from an inspired vision of this or
that human activity, just as we can abstract a bare logical structure from
an orator's high and passionate meanings.  And we would recognize that
recovering worthy activity and high purpose from this frozen structure
depended upon our ability to warm it with our own passions, enlighten it
with our own meanings, enliven it with our willful intentions.  And so,
finally, our fascination with the evolution of "spiritual machines" would
be transformed into our own evolving sense of spiritual responsibility for
those aspects of ourselves we invest in our mechanical creations.

Related articles:

"From HAL to Kismet: Your Evolution Dollars at Work" in NF #149:

"Intelligence and Its Artifacts" in NF #148:

"Can Open Standards Suffocate Us?" in NF #82:

"The Future Does Not Compute", chapter 3 in The Future Does Not
Compute: Transcending the Machines in Our Midst:

Goto table of contents


                          ABOUT THIS NEWSLETTER

Copyright 2004 by The Nature Institute.  You may redistribute this
newsletter for noncommercial purposes.  You may also redistribute
individual articles in their entirety, provided the NetFuture url and this
paragraph are attached.

NetFuture is supported by freely given reader contributions, and could not
survive without them.  For details and special offers, see .

Current and past issues of NetFuture are available on the Web:

To subscribe or unsubscribe to NetFuture:

This issue of NetFuture:

Steve Talbott :: NetFuture #159 :: December 7, 2004

Goto table of contents

Goto NetFuture main page