New York Times on the Web Forums Science
Technology has always found its greatest consumer in a nation's war and defense efforts. Since the last attempts at a "Star Wars" defense system, has technology advanced enough to make the latest Missile Defense initiatives more successful? Can such an application of science be successful? Is a militarized space inevitable, necessary or impossible?
rshow55 - 12:41pm Jun 18, 2002 EST (#2613 of 2618)
The control of our nuclear weapons systems can easily become explosively unstable if there is a single sign switch that is left uncorrected. Such sign switches are common in human organizations - and the higher the anxiety level, the more common they are likely to be. And, so far as I can tell, the process that makes for this sign switching, though common, is not commonly understood.
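To make the sign sensitivity concrete, here is a minimal sketch in Python - my own invented toy loop, not a model of any real weapons system. A discrete feedback loop that corrects toward a target is stable with one sign on its gain; flip that single sign and the identical loop diverges explosively.

    # Invented toy example: x[t+1] = x[t] + gain * (x[t] - target).
    # One sign on 'gain' decides between settling and exploding.
    def run(gain, steps=20, target=0.0, x=1.0):
        for _ in range(steps):
            error = x - target
            x = x + gain * error      # the single sign that matters
        return x

    print(run(-0.5))   # ~0.000001 : negative feedback, settles toward target
    print(run(+0.5))   # ~3325    : same loop, one sign switched, diverges

Nothing else in the two runs differs - which is the point: the mistake is one character, and the behaviors could hardly be further apart.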
But that is, of course, an issue I need to defer dealing with until I can become, if not "completely unshackled" -- at least unshackled enough to explain a few things. It is no fun being "Cassandra" -- and since I care about right answers, I'm trying to get into a situation where people will listen.
. . . . .
I have some other difficulties with the controls as well - a point I have been trying to make since before my one-day meeting with "becq" on September 25, 2000, on this thread.
rshow55 - 12:59pm Jun 18, 2002 EST (#2614 of 2618)
I'm making progress in finding out who I can talk to. MD1786 rshow55 4/26/02 11:19am:
. . . I think we're facing soluble problems, if we're just willing to "collect, connect, and correct the DOTS" . . . and keep doing it until we come to reasonable focus. MD324 rshow55 3/10/02 1:22pm
rshow55 - 02:39pm Jun 18, 2002 EST (#2615 of 2618)
The question of fraud can't be ruled out - on missile defense, or on many other things. MD1676 rshow55 4/22/02 7:47pm
http://www.tompaine.com/op_ads/opad.cfm/ID/5241
. . . but good-faith mistakes can't be ruled out either. Motivations and patterns are mixed. And sometimes results are, and look, essentially the same, whatever the motives may be.
But what happens if checking is forbidden - and this goes on for a long time?
It isn't only that mistakes can happen. Some kinds of mistakes
are statistically likely -- and in complicated enough systems,
essentially certain, after a long enough time.
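The arithmetic behind "essentially certain" is simple. If one particular slip has an independent chance p in each period, the chance of at least one slip over n periods is 1 - (1-p)^n, which climbs toward 1. A sketch with made-up numbers:

    # Made-up numbers, purely for illustration: a 1-in-1000-per-day slip
    # is a near certainty over a decade of days.
    p = 0.001
    for days in (30, 365, 3650):
        print(days, round(1 - (1 - p) ** days, 3))
    # 30 -> 0.03   365 -> 0.306   3650 -> 0.974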
I'm trying to move carefully. We're dealing with soluble problems
here - and some of the most central problems are simple, and maybe
even well along toward solution now.
. . . .
Question: Suppose you have a system where exception handling may exist, but involves penalties. Most exception handling systems are like this. Now suppose, by intention or inadvertence, information that "should be" filtered out by the system gets through once.
How likely is the exception handling to be the same the next time? Will there be any exception handling left at all?
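Here is a sketch of the dynamic I mean, with invented details: an operator escalates anything above a threshold, but every escalation draws a penalty that raises the threshold. After a few rounds, essentially nothing is escalated - the exception handling has extinguished itself.

    # Invented details, purely illustrative.
    threshold = 5.0
    for signal in (6.0, 7.0, 8.0, 9.0, 20.0):
        if signal > threshold:
            print("escalated", signal)
            threshold *= 2        # the penalty: escalating gets harder
        else:
            print("suppressed", signal)
    # 6.0 escalated (threshold -> 10); 7, 8, 9 suppressed;
    # 20.0 escalated (threshold -> 20); little or nothing passes after that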
It can easily happen that a system built to "pass" one kind of information, but filter out all other kinds, switches so that it never passes on the information it is "built" to convey. This is especially likely to be true of systems that are basically "exception handling."
Thirty years ago, the FBI and CIA didn't talk to each other much, but when talking had to occur, people could, for good reasons that they could explain, get past filters, so that good communication could occur. Now, or recently, in exactly the areas where the FBI and CIA need to communicate best, they seem not to be able to communicate or function rationally at all. Filtering mechanisms that are exactly wrong have come into being.
When an exception handling "switch" fires, in a system that is
essentially digital, and made in the usual ways, the system has to
be "reset" in order for the exception handling to function properly
again. Unless this is done, the system filter will have a sign
switch - and will act exactly wrong.
Human exception handling is often like this, as well.
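In code the pattern is tiny - this is an invented sketch, not any real system's logic: the filter's sense lives in one latched bit, a fault flips it, and until someone explicitly resets the latch the filter blocks exactly the traffic it was built to pass.

    # Invented sketch: 'pass_alarms' is the latched sense of the filter.
    class Filter:
        def __init__(self):
            self.pass_alarms = True       # built to pass alarms only

        def fault(self):
            self.pass_alarms = not self.pass_alarms   # the sign switch

        def reset(self):
            self.pass_alarms = True       # nothing resets it automatically

        def deliver(self, message, is_alarm):
            if is_alarm == self.pass_alarms:
                print("passed:", message)
            else:
                print("dropped:", message)

    f = Filter()
    f.deliver("ALARM", True)    # passed: the filter does its job
    f.fault()                   # one uncorrected fault...
    f.deliver("ALARM", True)    # dropped: acts exactly wrong until reset
    f.reset()
    f.deliver("ALARM", True)    # passed again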
rshow55 - 02:44pm Jun 18, 2002 EST (#2616 of 2618)
Military patterns of exception handling are especially likely to
have this problem. And the higher the anxiety of the designers, the
more likely the problem is.
"Safeties" can become "triggers" when this sort of mistake is
made. -- And triggers that are supposed to work, and "tested" to be
reliable - can fail to function at all when they are supposed to.
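An invented one-character example of how a safety becomes a trigger - nothing here is drawn from any actual interlock, but it shows the size of the mistake:

    # Correct interlock: the safety blocks firing.
    def interlock(command, safety_on):
        if command and not safety_on:
            return "FIRE"
        return "hold"

    # Sign-switched interlock: the "safety" is now what fires.
    def interlock_switched(command, safety_on):
        if command and safety_on:
            return "FIRE"
        return "hold"

    print(interlock(True, True))           # hold - the safety works
    print(interlock_switched(True, True))  # FIRE - the safety is the trigger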
I have some recent experimental evidence of this sort of thing,
in dealing with a government organization.
I called a good man up on the telephone, and the system worked exactly as it was supposed to. It is now reset, at least with respect to me, so that it acts in exactly the opposite way.
rshow55 - 03:28pm Jun 18, 2002 EST (#2617 of 2618)
When I deal with individuals and organizations, I have to be
concerned with "sign switching" - and changing systems, perhaps for
the worse, by interacting with them.
For certain kinds of jobs you must have two people
cooperating - one alone simply cannot do certain things.
There are also certain kinds of jobs that can only be done
with some face to face interaction - under circumstances where
people have some reasonable distrust of each other -- so that they
can check on each other, as people, and check on facts.
Playing Know and Tell, by John Schwartz, http://www.nytimes.com/2002/06/09/weekinreview/09BOXA.html . . . tells the story of Cassandra, and does so beautifully. It is a story about people getting something exactly wrong, and not hearing warnings that they are wrong. Schwartz's piece ends "Listen."
Once listening occurs, and people hear that a mistake might have been made, there is still uncertainty. There is no way to tell whether a mistake has been made except to check - physically, or by internal consistency tests.
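By "internal consistency test" I mean the simplest thing: compute the same quantity by two independent routes, and refuse to trust a result the routes disagree on. A sketch, with invented details - agreement doesn't prove correctness, but disagreement proves a mistake somewhere:

    # Invented example: check a step against its algebraic inverse.
    def step_forward(x, v, dt):
        return x + v * dt

    def step_back(x_new, v, dt):
        return x_new - v * dt

    x, v, dt = 100.0, 3.0, 0.5
    x_new = step_forward(x, v, dt)
    assert abs(step_back(x_new, v, dt) - x) < 1e-9, "inconsistent - recheck"
    print("consistent:", x_new)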