New York Times on the Web Forums Science
Russian military leaders have expressed concern about US plans
for a national missile defense system. Will defense technology be
limited by possibilities for a strategic imbalance? Is this just SDI
all over again?
(6132 previous messages)
rshowalter - 12:19pm Jun 27, 2001 EST (#6133 of 6168) Robert Showalter showalte@macc.wisc.edu
In MD6132 rshowalter 6/27/01 10:45am ... there's this:
" The main stumbling block has come about because the AI community funded by the military ran up against a mathematical constraint it was clearly warned about, ignored the warning -- and has spent a decade trying to make big progress along a line of work where big progress is provably impossible.
" If checking had been morally forcing within that community, the US would have better weapons today -- and a less frustrated and corrupted cadre of classified researchers, as well.
The artificial intelligence efforts funded by the military have been dominated by a "connectionist" or "parallel distributed processing" paradigm that was showing severe limits by the late 1980's -- when the military made really serious efforts to build hugely parallel chips, to try to get connectionist systems fast enough for really good missile guidance. Funding in the open literature continues to be dominated by these connectionist models -- indicating the classified people are still working with them. The problem with these networks, for almost all the practical cases, is that they get slower (in number of cycles per calculation) VERY fast as the complexity of the networks increases, so that a 10 times bigger network can easily be billions of times slower (in computation cycles). This is something like a "brick wall" -- where very large increases in computation power yield only small increases in performance.
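To make the arithmetic behind that "brick wall" concrete, here is a minimal sketch. The growth law it uses (cycles rising exponentially with network size n) is an assumption chosen purely for illustration, not a measurement from any real system:

def cycles(n):
    # hypothetical cycle count under the assumed exponential model
    return 2 ** n

for n in (4, 8, 16):
    ratio = cycles(10 * n) / cycles(n)
    print(f"size {n:2d} -> {10 * n:3d}: roughly {ratio:.1e} times more cycles")

Under that assumption, even the smallest case (size 4 to size 40) comes out tens of billions of times slower, and the larger cases are astronomically worse.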
rshowalter - 12:20pm Jun 27, 2001 EST (#6134 of 6168) Robert Showalter showalte@macc.wisc.edu
In 1990, J.S. Judd, a young pre-tenure academic, wrote NEURAL NETWORK DESIGN AND THE COMPLEXITY OF LEARNING (MIT Press, 1990).
Here is Judd:
" . . . The published successes in connectionist learning have been empirical results for very small networks, typically much less than 100 nodes. To fully exploit the expressive power of networks, they need to be scaled up to much bigger sizes, but it is widely acknowledged that as the networks get larger and deeper, the amount of time required for them to load the training data grows prohibitively. . . . "
Judd means something compelling when he uses the word "prohibitive." He shows, by mathematically accepted standards, that even "simple" rote learning, at the scales animals do it, is impossibly slow (the technical term is NP-complete, which is taken as the standard demonstration of intractability in computer science and cryptography).
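As a hedged illustration of what "loading" means and why brute force cannot rescue it -- this is a toy sketch, not Judd's construction -- consider fitting a tiny 2-2-1 threshold network to the XOR function by exhaustive search over weights restricted to -1, 0, and 1:

from itertools import product

def step(x):
    # hard threshold unit
    return 1 if x >= 0 else 0

def forward(w, inputs):
    # w packs nine numbers: two hidden units (two weights and a bias each)
    # and one output unit (two weights and a bias)
    w1, w2, b1, w3, w4, b2, v1, v2, c = w
    x1, x2 = inputs
    h1 = step(w1 * x1 + w2 * x2 + b1)
    h2 = step(w3 * x1 + w4 * x2 + b2)
    return step(v1 * h1 + v2 * h2 + c)

# training data: XOR, which no single threshold unit can load
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for tried, w in enumerate(product((-1, 0, 1), repeat=9), start=1):
    if all(forward(w, x) == y for x, y in data):
        print(f"loaded XOR after {tried} of {3 ** 9} candidate weight settings")
        break

The search space is 3 raised to the number of weights -- 19,683 candidates here, and hopeless at any realistic scale. Judd's NP-completeness result says that, unless P = NP, no clever general-purpose loading algorithm escapes this kind of blow-up in the worst case.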
No one called the result wrong, but people found ways to ignore it.
The response of the neural modeling community, with funding and work patterns dominated by the military, was to ignore the result and to marginalize Judd.
Judd was denied tenure, after writing what I believe was an
outstanding book.
Events since 1990 have tended to show that Judd was right.
Progress in "connectionist" neural modeling has been, if anything,
slower than Judd's results might have predicted.
For the reasons Judd was clear about in 1990, progress in the
artificial intelligence that the military cares so much about has
been VERY slow in the last decade.
There are plenty of examples where "the digital revolution" makes enormous progress possible. Convenience of calibration, for systems where the physics is fundamentally stable within the calibration range, is an example. There's no reason to dispute that -- it should be celebrated.
But, for mathematical reasons, there are many cases where an explosion of computational power buys much less than one might expect. Judging from what is known in the open literature, a number of such cases can be expected in the control problems of missile defense.
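Viewed from the hardware side, the same point can be sketched with the same assumed exponential growth law (again an assumption made for illustration, not a figure from any real guidance system):

import math

# if solution time grows like 2**n in problem size n, then a speedup
# of S only extends the reachable problem size by log2(S)
for speedup in (1e3, 1e6, 1e9):
    print(f"{speedup:.0e}x more computing power buys only about "
          f"{math.log2(speedup):.0f} more units of problem size")

A thousandfold jump in raw power buys about 10 extra units of problem size; a billionfold jump buys only about 30.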
rshowalter - 12:25pm Jun 27, 2001 EST (#6135 of 6168) Robert Showalter showalte@macc.wisc.edu
On war -- if it is all right to kill anyone associated with a
name such as "communist" -- then one can justify anything at all.
At a fundamental level, much of the mass death in Vietnam caused
by American military action does not look any better, morally, than
much of the mass death produced by the Nazis.
If you think otherwise, you can pretty quickly get to stances
that "make Machiavelli seem like one of the Sisters of Mercy."
Nazi war criminals often argued that their pattern of killing was
better, not worse, than bombing -- and it is hard, from my distance,
to see exactly what is wrong with those arguments.
In Korea, just to take an example, American firebombing and dam bombing killed two million people, mostly civilians. Was this somehow purer than what the Nazis did?
rshowalter - 12:38pm Jun 27, 2001 EST (#6136 of 6168) Robert Showalter showalte@macc.wisc.edu
The approach set out in the paper below does not have the limitations connectionism has -- in many cases it can be billions or trillions of times faster at the jobs control systems need. I put it on the internet, and Kline informed people about it, in the early 1990's -- as I'd been told to do -- and waited to be contacted, as I'd been told to do.
The paper talks of "neurons" when it should use the term "glia" -- but the math is simple, and I tried to block it out clearly.
The approach is fast in digital form -- but can be much faster if some of the components are calibrated analog, using technology that has mostly been available for years.
http://www.wisc.edu/rshowalt/pap2
gisterme - 12:54pm Jun 27, 2001 EST (#6137 of 6168)
"...GI: Putin link actually started here from London: MD5751
lunarchick 6/22/01 9:20am ..."
Thank YOU then, possumdag. Great link, wherever it came from.
gisterme - 12:55pm Jun 27, 2001 EST (#6138 of 6168)
...or lunarchick or wherever it came from. :-)
(30 following messages)