NETFUTURE
Technology and Human Responsibility
--------------------------------------------------------------------------
Issue #87             A Publication of The Nature Institute             March 30, 1999
--------------------------------------------------------------------------
Editor: Stephen L. Talbott (stevet@netfuture.org)
On the Web: http://netfuture.org
You may redistribute this newsletter for noncommercial purposes.
NETFUTURE is a reader-supported publication.
CONTENTS
---------
Editor's Note
Quotes and Provocations
I Think I'll Take Just One More Computer
Virtuality and the Atomization of Experience
Will Media Lab Chefs Some Day Become Intelligent?
When Faith in Computers is Boundless
Open Net, Padlocked Libraries
The High Stakes of Standardized Testing (Edward Miller)
Even sound test results can be misused
DEPARTMENTS
Correspondence
Shovel This, Microsoft (Michael Gorman)
About this newsletter
==========================================================================
EDITOR'S NOTE
The piece I wrote in NF #78, "Who's Killing Higher Education?" has been
reprinted (with a new concluding section) in the March/April issue of
Educom Review. You might be interested to know that the material you read
here is increasingly finding its way into the "mainstream press". For
example, Educom Review is also preparing to reprint Lowell Monke's essay
on the Net and multiculturalism (NF #49); my piece, "Why is the Moon
Getting Farther Away?" (NF #70) appeared in The Internet and Higher
Education, as well as Orion; and Wired has asked permission to
reprint the Marcelo Rinesi - Muktha Jost exchange (NF #85).
Of course, NETFUTURE also gets widely circulated to various online forums,
email lists, and private distributions. This, in fact, is a practical
(non-monetary!) way you can help. To what degree NETFUTURE will fulfill
its potential depends a great deal on the initiative readers take in
bringing it to the attention of those who should know about it. I could
attempt this myself only through the kind of general advertisement I find
repugnant. (See "Cluttering Our Lives for Profit" in NF #86.) The key is
word of mouth, via a far-ranging network of people who care, penetrating
into many corners of society.
Speaking of circulation, Ed Miller's article on standardized testing in
this issue deserves the widest possible distribution. The National
Research Council study he describes has been under-reported, perhaps
because of its revolutionary, common-sense conclusions. (Yes, common
sense tends to be revolutionary whenever you're talking about education
today.)
SLT
==========================================================================
QUOTES AND PROVOCATIONS
I Think I'll Take Just One More Computer
----------------------------------------
When Bill Gates was interviewed on the NPR Marketplace program a few days
ago, the interviewer asked whether the computer had really proved useful
in business. Gates volunteered that you could find the answer by asking
computer users, "How would you feel if we took your computer away from
you?"
Of course, that's the kind of question one rightly puts to those who won't
acknowledge an addiction. But somehow I don't think Gates was proposing
to establish a twelve-step recovery program for the computationally
afflicted. It's true that he has been upping his charitable giving
lately. But, then, some people's view of charity is to distribute free
wine samples at an Alcoholics Anonymous meeting.
Virtuality and the Atomization of Experience
--------------------------------------------
The technologist's dream of virtual reality is straightforward in the way
that only technological dreams can be: reproduce all the "sensory inputs"
properly associated with the desired virtual experience, and you will have
created a virtual reality wholly indistinguishable from the corresponding
"real reality".
This vision is startlingly naive in its artificial reduction of the human
being to a set of isolated sensory mechanisms. University of Montana
philosopher Albert Borgmann makes this point beautifully in his book,
Crossing the Postmodern Divide (University of Chicago, 1992). He
asks us to imagine a professional woman who,
after a most stressful morning, is running in her favorite winter
landscape. New snow is sparkling in the sun, yet the footing is
perfect. Snow geese are vigorously rising from the river. Then it is
quiet but for the scolding of the Steller's jays. A snowshoe hare up
ahead is hopping along the trail. There, suddenly, is a crashing in
the brush, a gigantic leaping and pouncing; a mountain lion has taken
the hare and is loping back up the slope. Quiet once more settles on
the valley. A herd of elk is browsing in the distance. The trail is
rising. The runner is extending herself; she reaches the crest of the
incline; another quarter mile and the trailhead comes into view. (p.
94)
Borgmann then asks: Does it matter whether this activity was real or
hyperreal (as he calls the fully realized ideal of virtuality)? He
answers by the simple device of carrying the scenario one step further.
The woman comes to the end of her run, walks to her car parked near the
trailhead, and drives back through the snowy valley to her office:
She is elated. People spend years in the mountains without ever seeing
a lion. To see one at the height of a hunt is a rare blessing. And
she feels blessed also to live in a region wide and wild enough to
support mountain lions, and on a continent hospitable enough for geese
to nest in the North and winter in the South. She revels in the
severity of the early winter that has driven the snow geese south from
Canada and the elk down from the high country. The snow must already
be ten feet deep on the peaks and ridges. There will likely be a heavy
runoff in the spring and strong river flows throughout the summer.
This is where she wants to be.
Borgmann then contrasts this with the entirely different ending of the
hyperreal run:
The vista is dimming, the running surface is slowing down, the ceiling
lights are coming on. She goes to the locker room, showers, changes,
and steps into a muggy, hazy afternoon in the high-rise canyon of a big
city. All that was true of the real run would now be false. The
hyperreal run would have revealed nothing about her surroundings, would
have bestowed no blessings on her, and would not have been an occasion
for her to affirm her world.
What the naive notion of virtual reality leaves out is context. That is,
what it leaves out is just about everything -- certainly just about
everything that gives an experience its enduring meaning, everything that
makes it possible for us to weave a connected whole out of our lives. As
Borgmann notes, it is the desire to create a readily transferable,
disposable experience that requires the experience to be extracted from
its context. Reality has the unfortunate tendency to keep relating one
thing to another, and those relations must be broken if you want a nice,
reliable, commoditized experience.
This helps us to see that virtual reality is in one sense just the
perfected extreme of a tendency toward decontextualization evident
throughout modern life. As an item in INNOVATION (June 29, 1998) put it:
In the late agrarian economy, mothers mixed birthday cakes from basic
ingredients; in the goods-based industrial economy, they made them from
Betty Crocker pre-mixed ingredients; and when the service economy took
hold, parents ordered cakes from the bakery store. Now, busy parents
buy neither the ingredients nor the cake: they buy the experience
itself, at places like Chuck E. Cheese, the Discovery Zone, or the
Mining Company, which throw the whole party as a memorable event for
kids.
That's all correct except for the "memorable" part. Something memorable
may certainly happen at the party -- the birthday girl may, for example,
break her arm. But the party itself as an integrated part of her life
will not likely be memorable -- not, say, in the way that a carefully
prepared, home-brewed event might have been. Most of the connections
between the party and the rest of her life have been severed. She will
walk away from the recreation center in much the same way as the
professional woman walked away from her hyperreal run. While her
interactions with her playmates have more elements of reality than the
simulated run did, the entire affair takes place as if on an island in the
middle of nowhere (and getting to it, of course, involves little more than
a quick, hermetically sealed passage within that pre-eminent vehicle of
decontextualization, the automobile).
As is true of so many aspects of the computer, its virtuality and powers
of decontextualization are merely the perfection of tendencies we were
already assiduously cultivating. That's one reason why sound critical
assessment of digital technologies is so difficult: in many respects we
are the computer, and it is therefore difficult to gain the
distance required for valid assessment of the role of computing in our
lives.
(For notes on Borgmann's important work, Technology and the Character
of Contemporary Life, see NF #64.)
Will Media Lab Chefs Some Day Become Intelligent?
-------------------------------------------------
I sometimes wonder whether the folks at the M.I.T. Media Lab are pulling
our legs. Are they stand-up comedians in disguise?
It seems that a lot of energy at the prestigious lab (which claims to be
"inventing the future") is going into the redesign of the American
kitchen. For example, one project involves training a glass countertop
to assemble the ingredients for making fudge by reading electronic tags
on jars of mini-marshmallows and chocolate chips, then coordinating
their quantities with a recipe on a computer and directing a microwave
oven to cook it.
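The logic being celebrated here is simple enough to sketch in a few lines
of Python. (A purely illustrative sketch: the article gives no
implementation details, and every name below is invented.)

    # Hypothetical sketch of the tag-reading countertop; all names are
    # invented, since the Times article describes no implementation.
    FUDGE_RECIPE = {"mini-marshmallows": 150, "chocolate chips": 300}  # grams

    def plan_fudge(tags_on_counter):
        """tags_on_counter maps ingredient -> grams, as read from jar tags."""
        short = {item: need - tags_on_counter.get(item, 0)
                 for item, need in FUDGE_RECIPE.items()
                 if tags_on_counter.get(item, 0) < need}
        if short:
            return ("missing ingredients", short)
        return ("start microwave", FUDGE_RECIPE)

    print(plan_fudge({"mini-marshmallows": 200, "chocolate chips": 300}))

A lookup table and a comparison, in other words -- hardly the invention of
the future.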
Dr. Andrew Lippman, associate director of the Media Lab, says that "my
dream tablecloth would actually move the things on the table. You throw
the silver down on it, and it sets the table."
One waits in vain for the punch line. These people actually seem to be
serious. And the millions of dollars they consume look all too much like
serious money. Then there are the corporate sponsors, falling all over
themselves to throw yet more money at these projects.
Nowadays this kind of adolescent silliness is commonly given the halo of a
rationale that has become respected dogma. After all, don't many
inventions find unexpected uses in fields far removed from their first
application, and doesn't a spirit of play often give rise to productive
insight?
Certainly. But somehow it doesn't all add up.
** In the first place, the possible occurrence of serendipitous benefits
is not a convincing justification for trivializing the immediate
application of millions of research dollars.
** In the second place, the Media Lab researchers voice their comic lines
with a strange seriousness and fervor, devoid of the detachment underlying
a true spirit of play. Michael Hawley, an associate professor of media
technology at M.I.T., laments that the kitchen is
where you have the most complex human interactions and the most
convoluted schedule management and probably the least use of new
technologies to help you manage it all.
And of this degrading backwardness Lippman adds:
Right now, your toaster doesn't talk to your television set, and your
refrigerator doesn't talk to your stove, and none of them talk to the
store and tell you to get milk on your way home. It's an obvious place
screaming out for connectivity.
Those sponsors must love it. Where else but in an academic computing
laboratory could they possibly find adult human beings seriously willing
to propose such laughable things in order to start creating an artificial
need where none was recognized before? By slow degrees the laughable
becomes conventional.
Which explains why those corporate sponsors don't appear to be just
waiting around for the occasional, serendipitous "hit". Clearly, they see
the entire trivial exercise as itself somehow integral to their own
success. I don't doubt their judgment in this at all.
** In the third place, there are signs of a pathological flight from
reality in all
this. Hawley tells us that
in time, kitchens and bathrooms will monitor the food we eat so closely
that health care will disappear. We will move from a world in which
the doctor gets a pinprick of data every blue moon to the world in
which the body is online.
"Health care will disappear." If his words are meant to be taken even
half seriously, this is a man with severely impaired judgment and with the
most tenuous connection to reality. One wonders how many of these kitchen
technicians have ever done some serious gardening, and how many of them
can even grasp the possibility that preparing food might be an important
and satisfying form of work -- at least as satisfying as interacting with
the digital equipment they would inflict on the rest of us (and, for that
matter, a lot more healthy).
No, the kind of fluff the Media Lab all too often advertises is not really
comic. Looked at in its social context, it is sick and obscene. It is
sick because of the amount of money spent on superficialities; it is sick
because of the way corporate sponsors have been able to buy themselves an
"academic" facility at a major educational institution to act as their
"Consumer Preparation Department"; and it is sick because a straight-faced
press corps slavishly reports this "invention of the future" without ever
administering the derisive smile so much of this stuff begs for.
The above quotes, by the way, come from the New York Times (Feb.
18, 1999). William L. Hamilton, the author of the article, does at least
quietly give notice that Hawley is "a bachelor who rarely uses his
kitchen". Hardly surprising. The man's passion has a lot more to do with
computing for its own sake than with entering into the meaning and
significance of the food preparer's task.
When Faith in Computers is Boundless
------------------------------------
"From an article here and a TV program there", writes Paul De Palma in the
Winter, 1999 American Scholar, "from a thousand conversations on
commuter trains and over lunch and dinner, from the desperate scrambling
of local politicians after software companies, the notion that prosperity
follows computing, like the rain that was once thought to follow the
settler's plow, has become a fully formed mythology."
An associate professor of mathematics and computer science at Gonzaga
University, De Palma takes a few not-very-gentle pokes at the mythology.
Among his conclusions:
[The computer skill taught in schools and universities] is at best
trivial and does not require faculty with advanced degrees in computer
science.
As the number of microcomputers in our schools has grown, the chance
that something interesting might be done with them has decreased.
The stunning complexity of microcomputer hardware and software has had
the disastrous effect of transforming every English professor, every
secretary, every engineer, every manager into a computer systems
technician.
For all the public subsidies involved in the computer literacy
movement, the evidence that microcomputers have made good on their
central promise -- increased productivity -- is, at the very least,
open to question.
De Palma is willing to say the obvious. For example: "To write a report
on a machine with a Pentium II processor, sixty-four megabytes of memory,
and an eight-gigabyte hard disk is like leasing the space shuttle to fly
from New York to Boston to catch a Celtics game." And again:
The situation is really quite extraordinary. Schools and colleges
across the country are offering academic credit to students who master
the basics of sophisticated consumer products. Granted that it is more
difficult to master Microsoft Office than it is to learn to use a VCR
or a toaster oven, the difference is one of degree, not of kind.
The obvious question is why the computer industry itself does not train
its customers. The answer is that it doesn't have to. Schools, at
great public expense, provide this service to the computer industry
free of charge. Not only do the educational institutions provide the
trainers and the setting for the training, they actually purchase the
products on which the students are to be trained from the corporations
that are the primary beneficiaries of the training.
(Thanks to Michael Corriveau for passing De Palma's article along.)
Open Net, Padlocked Libraries
-----------------------------
NETFUTURE reader and educator Jamie McKenzie, not known as a technology
refuser, has sounded an alarm about our "ill-considered affair with
networked information". In his online article, "A Brave New World of
Padlocked Libraries and Unstaffed Schools", he worries that
the story of declining funding and the padlocking of libraries goes
unmentioned by most of the "legitimate press" as stories of Internet
stocks and futures dominate their pages and screens.
The article is mostly a collection of reports McKenzie has gathered from
educators in the trenches. These reports support the notion -- certainly
familiar to NETFUTURE readers -- that
in some places, the pressures to network schools are so intense that
priorities are severely skewed in order to find the funding for the
equipment. The hardware effort drains resources away from essential
school programs and often leaves the school or district without the
funding to provide a robust professional development program or
sufficient technical support. Networks arrive with enormous appetites
for dollars and staff time. Feeding the "network beast" becomes a
preoccupation. (http://www.fromnowon.org/feb99/padlocked.html)
I have the vague impression that the occasional skeptical voice such as
McKenzie's is more discernible within the general technological fervor of
the mainstream press than was the case a couple of years ago. Just
recently the New York Times ran an article in its education section
under the title, "Amid Clamor for Computer in Every Classroom, Some
Dissenting Voices" (Mar. 17), and Pamela Mendels regularly gives play to
such voices in the online version of the Times.
I wonder, though, whether, as a society, we will ever wake up from the
strange collective trance whereby we sleepwalked our way into a hugely
expensive computerization of education without ever having thought to ask
what educational goal we were aiming for -- let alone whether
computerization would serve that goal.
An article here and there notwithstanding, I don't see many signs of the
waking up. The scary thing is that the computers we have so automatically
yielded to are the perfect instruments for training us toward the kind of
sleepwalking state that makes further yielding more likely -- so much so
that few people any longer recognize how unhumanlike the computer's
one-sidedly algorithmic re-shaping of our activities really is. The logic
of algorithms can indeed flow automatically, and we
all too easily move with that logic, for it is usually the path of least
resistance. Might we be locking ourselves into a downward spiral from
which escape will be ever more difficult?
(Thanks to Nelson Logan for bringing McKenzie's article to my attention.)
SLT
==========================================================================
THE HIGH STAKES OF STANDARDIZED TESTING
Edward Miller
(edmiller@ziplink.net)
Editor's note: A culture that reveres information conceived as a
collection of shovelable, database-file-able, atomic facts is bound to
construe a student's test score as corresponding to some fixed, well-
defined content in the student, which in turn is supposed to reflect the
student's capacities. But if you look at test scores in context -- and
the recovery of context in the face of technology's radical tendency
toward decontextualization is one of NETFUTURE's enduring themes -- the
picture changes drastically. Even if you assume that a test score
measures a particular content reliably (usually a doubtful assumption),
huge questions remain. For example,
** Looking backward: does the score represent the capacities of the
student or the incapacities of his teachers?
** Looking forward: if you make decisions about the student's future
based on the test score, will these decisions help or harm the student?
(And, after all, why do we administer tests if not to aid in making wiser
decisions?)
This is the kind of context in which the National Research Council tried
to assess high-stakes testing. NETFUTURE reader Ed Miller, formerly
editor of the Harvard Education Letter, was a consultant to the study
panel. Here he summarizes some of the panel's findings.
---------------------
I recently participated in a study, conducted by the National Research
Council, of the appropriate uses of standardized tests for making
decisions about individual students. Its findings may be of interest to
NETFUTURE readers who are concerned about the ways in which the technology
of testing has become one of the most powerful influences in our education
system.
The study committee was charged by Congress with examining the use of test
scores for so-called high-stakes purposes, defined as making decisions
about tracking, promotion, and graduation. Such uses are proliferating
all over the country, and are widely considered an effective tool for
whipping the public schools into shape. For example, students in Chicago
must now get at least a certain score on the Iowa Test of Basic Skills to
be promoted to the next grade. Starting next year, high school students
in New York will have to pass the state Regents exam (formerly optional)
to get a diploma. The committee found that, while testing can and often
does yield valuable information about students' achievement, the nature
and limitations of that information are widely misunderstood. Test
results, the study concluded, are often used improperly. In the case of
high-stakes tests, the effects on individual students' lives may be
disastrous.
The committee adopted three basic criteria for determining whether a
particular test use is appropriate:
** Measurement validity -- whether a test is valid for a particular
purpose, and whether it accurately measures the test taker's knowledge.
** Attribution of cause -- whether a student's performance on a test
reflects knowledge and skill based on appropriate instruction or is
attributable to poor instruction or to such factors as language
barriers or disabilities unrelated to the skills being tested.
** Effectiveness of treatment -- whether test scores lead to placements
and other consequences that are educationally beneficial.
These criteria, which were derived from the established standards of the
testing profession, reflect a fundamental truth about tests that is well
known by experts but generally obscured in public policy debates and news
reports: test scores are subject to all kinds of statistical and human
error and are therefore very often wrong. Moreover, there is a remarkable
lack of agreement in many cases about whether a particular test even
measures what it is supposed to measure. But because educational test
results are given in numerical form, they create a powerful impression of
scientific precision -- that they are like a thermometer or your blood
pressure reading. They are not. They provide only one perspective -- and
often a very narrow and clouded one -- on a student's actual knowledge.
This appearance of precision in test scores has been used in many
instances to rationalize discriminatory and unfair practices.
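How wrong? A minimal simulation in Python makes the point. (This is not
drawn from the NRC report; the cut score and error figures below are
invented, and the only assumption is that an observed score equals a
student's "true" score plus normally distributed measurement error.)

    # Illustrative only: how often does a single test sitting put a
    # student on the wrong side of a pass/fail cut score?
    import random

    random.seed(1)
    CUT = 60.0   # hypothetical passing score
    SEM = 5.0    # hypothetical standard error of measurement

    trials = 10000
    misclassified = 0
    for _ in range(trials):
        true_score = random.gauss(60.0, 10.0)            # "real" knowledge
        observed = true_score + random.gauss(0.0, SEM)   # one test sitting
        if (true_score >= CUT) != (observed >= CUT):
            misclassified += 1
    print(f"{100 * misclassified / trials:.1f}% misclassified")

With these hypothetical figures, measurement error alone puts on the order
of one student in seven on the wrong side of the cut score.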
The nature of standardized testing, and its history of misuses, leads
inexorably to certain conclusions. One is that any use of a test score to
justify an educational decision that is likely to harm rather than help
the child is, by definition, insupportable. With regard to tracking and
promotion, this logic led the study committee to some surprising findings.
After thoroughly examining the research literature on tracking, the group
concluded that "students assigned to low-track classes are worse off than
they would be in other placements. This form of tracking should be
eliminated. Neither test scores nor other information should be used to
place students in such classes."
The committee was similarly troubled by the evidence on "retention" -- the
practice of making kids repeat a grade. In spite of the popularity of
President Clinton's call to "end social promotion," the committee found
that "grade retention is pervasive in American schools" and that it is
usually not educationally beneficial, but leads to lower achievement and
higher risk of dropping out. It called for early identification of and
remedial programs for students in difficulty as an alternative to holding
them back, and it condemned the growing practice of using the results of a
single test to determine whether a child should go on to the next grade.
Indeed, the committee concluded that high-stakes decisions of any kind
"should not automatically be made on the basis of a single test score."
Other important conclusions were that the use of high-stakes tests to
"lead" curricular reform -- that is, to get schools to change what and how
they teach -- tends to corrupt and invalidate the tests, and is
fundamentally unfair to students; that large-scale standardized tests
should not be used at all in making high-stakes decisions about students
below grade three; and that the existing mechanisms for enforcing
standards of appropriate test use are inadequate.
The implications of these findings are sobering in light of the growing
enthusiasm for more testing as the answer to the intractable problems of
school reform in the U.S. The parallels to our leaders' faith in computer
technology as an educational panacea are unmistakable.
The full report of the National Research Council has been published as
"High Stakes: Testing for Tracking, Promotion, and Graduation" by the
National Academy Press. A short version, and information about ordering
the book, can be found at http://www.nap.edu/readingroom/books/highstakes.
==========================================================================
CORRESPONDENCE
Shovel This, Microsoft
----------------------
From: Michael Gorman (michaelg@csufresno.edu)
Dear Mr. Talbott,
I read your article "Who's Killing Higher Education?" [from NF #78, as
reprinted in Educom Review] with interest. Lewis Perelman's
"kanbrain" (ugh!) sounds remarkably like a library, though of course
without the fixity and authority of print or the organizational
architecture of a library. The idea of storing information and recorded
knowledge until you need it is hardly novel.
I was also interested in your observation that "information" cannot be
defined even by those who use the word constantly. In our book "Future
Libraries," Walt Crawford and I, using Mortimer Adler's "ladder of
learning," go to some lengths to distinguish between "information" and
"knowledge" -- the latter being not only far less amenable to electronic
dissemination and use but also a much higher "good of the mind" (to use
Adler's phrase).
If higher education is to be "shoveling information," Microsoft, AT&T, and
Cisco are welcome to it.
Sincerely,
Michael Gorman
Dean of Library Services
California State University
==========================================================================
ABOUT THIS NEWSLETTER
Copyright 1999 by The Nature Institute. You may redistribute this
newsletter for noncommercial purposes. You may also redistribute
individual articles in their entirety, provided the NetFuture url and this
paragraph are attached.
NetFuture is supported by freely given reader contributions, and could not
survive without them. For details and special offers, see
http://netfuture.org/support.html .
Current and past issues of NetFuture are available on the Web:
http://netfuture.org/
To subscribe or unsubscribe to NetFuture:
http://netfuture.org/subscribe.html.
Steve Talbott :: NetFuture #87 :: March 30, 1999