From: Nathan Zook <nzook@bga.com>
Date: Fri, 4 Aug 95 21:03:25 PDT
To: Nathan Loofbourrow <loofbour@cis.ohio-state.edu>
Subject: Re: There's a hole in your crypto...
In-Reply-To: <199508031709.NAA29005@colon.cis.ohio-state.edu>
Message-ID: <Pine.3.89.9508042253.A3820-0100000@jake.bga.com>
MIME-Version: 1.0
Content-Type: text/plain
On Thu, 3 Aug 1995, Nathan Loofbourrow wrote:
> Nathan Zook writes:
> > > And is there any way to build trusted system out of small, verifiable
> > > pieces? Since the way they're connected could also be questioned, I
> > > suspect that when you put enough of them together it's just as bad as
> > > the case of a single, monolithic program. But this isn't my area, so
> > > I don't know.
> >
> > No. This was essentially proved during the first third of this century.
>
> Well, I haven't gotten a reply from Nathan Zook on this assertion, so
> can anyone else back it up with some references? Perhaps we're
> discussing different contexts, but proving correct systems composed of
> correct components is still a subject of active research.
>
> nathan
>
Sorry about that. Your message must have died when I splatted the dear
"Professor" (bow, bow, bow).
There is "active research". Why is a mystry to me. Godel's proof was
the completetion of several works. On of the earily demonstrated that no
axiom system can be demonstrated to consistent by a weaker one. Now the
"reasearch" in this area has consisted, in part, of translating
algorithms into statements in axiomatic systems. The problem is that
either we cannot prove that these systems are consistent or they are
extremely limited in what they can do. (In particular, recursion seems
to be anthema.) But the word proof in the previous sentence has to be
taken with a grain of salt, because any axiom system that we use to prove
things about another axiom system has to be at least as complicated.
This is why the "not a Turing machine" assertion that the "Professor"
made is important. We know that the halting problem for Turing machines
is undecidable, so if we want to limit behavior, we can't have one.
BUT--we don't know that being a Turing machine is equivalent to having
"unpredictable" behavior. Furthermore, a "proof" of the "not a Turing
machine" assertion is going to have to be done by--you guessed it--a
computer. And this computer is running a program which definitely IS a
Turing machine, if it is capable of "proving" that other (suitably
non-trivial) programs are not Turing machines.
Why must this be done on a computer? Because the program under
consideration is thousands of machine instructions long. And each
instruction will be translated into dozens of statements in the axiom
system. So any attempted proof will be beyond human ability.
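And mechanized proof runs straight into the undecidability claimed
above. To make it concrete, here is the classic diagonalization
rendered as a Python sketch. The function halts() is hypothetical--the
whole point is that no such total decider can be written:

    # Suppose, for contradiction, that halts(prog, arg) always
    # terminates and correctly reports whether prog(arg) would halt.
    def halts(prog, arg):
        """Hypothetical halting decider -- cannot actually exist."""
        raise NotImplementedError("no general decider is possible")

    def diagonal(prog):
        # Do the opposite of whatever halts() predicts about
        # running prog on its own source.
        if halts(prog, prog):
            while True:     # predicted to halt, so loop forever
                pass
        else:
            return          # predicted to loop, so halt immediately

    # Now ask: does diagonal(diagonal) halt?  If halts() says yes,
    # it loops; if halts() says no, it halts.  Contradiction either
    # way, so halts() cannot exist.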
Note that the above arguments do not require the physical existence of
computers to make, which is why I referred to the "first third of this
century", when these ideas were discovered. In reality, the fact that
the program itself has been compiled (or was it written in machine
code?), that it uses an operating system (or does it address all of
the hardware independently of other programs?), and runs on a processor
(maybe an 80586?) should be enough to convince serious critics of the
futility of the exercise.
But the nagging question remains: why can't we build up big blocks from
little ones? While there is a sort of "Turing horizon" beyond which
programs are known to be unpredictable, let me attempt to address the
problem another way, to bring our intuition more in line with reality.
The situation we are dealing with amounts to the phenomenon of
"spontaneous complexity". First, some physical examples. Take an object
moving in a Newtonian space, with nothing else there. Given the initial
conditions, tell me what happens next. No problem. Take two objects.
No problem. Take three objects. Big problem. Why? Perhaps we just
haven't figured out the mathematics yet. Okay, take five objects. Why
five? Because it is known (this is Xia's five-body construction) that
with a particular initial condition for five objects, all objects will
"leave the universe"--fly off to infinity--in a finite amount of
time (!!!!!). Now what if you bump them a little bit? Certainly not all
combinations of initial conditions lead to this situation. Which is
which? Can this behavior be "built up" from two-object situations?
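The sensitivity to that little bump is easy to see numerically. Here is
a Python sketch (the masses, initial conditions, step size, and step
count below are arbitrary choices of mine for illustration, and a
fixed-step integrator is a crude tool for a chaotic problem): run the
same three-body system twice, with one coordinate nudged by one part in
a billion, and measure how far apart the two runs end up.

    def accel(pos, masses, G=1.0):
        """Pairwise Newtonian gravitational accelerations in the plane."""
        n = len(pos)
        acc = [[0.0, 0.0] for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy) ** 1.5
                acc[i][0] += G * masses[j] * dx / r3
                acc[i][1] += G * masses[j] * dy / r3
        return acc

    def run(pos, vel, masses, dt=0.001, steps=20000):
        """Leapfrog (kick-drift-kick) integration; returns final positions."""
        pos = [p[:] for p in pos]
        vel = [v[:] for v in vel]
        a = accel(pos, masses)
        for _ in range(steps):
            for i in range(len(pos)):
                vel[i][0] += 0.5 * dt * a[i][0]
                vel[i][1] += 0.5 * dt * a[i][1]
                pos[i][0] += dt * vel[i][0]
                pos[i][1] += dt * vel[i][1]
            a = accel(pos, masses)
            for i in range(len(pos)):
                vel[i][0] += 0.5 * dt * a[i][0]
                vel[i][1] += 0.5 * dt * a[i][1]
        return pos

    masses = [1.0, 1.0, 1.0]
    pos = [[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]]
    vel = [[0.0, 0.5], [-0.4, -0.25], [0.4, -0.25]]

    bumped = [p[:] for p in pos]
    bumped[0][0] += 1e-9   # the "bump": one part in a billion

    end1 = run(pos, vel, masses)
    end2 = run(bumped, vel, masses)
    sep = sum((end1[i][k] - end2[i][k]) ** 2
              for i in range(3) for k in range(2)) ** 0.5
    print("separation between the two runs:", sep)

Typically the two trajectories part company by many orders of magnitude
more than the original nudge--two-body intuition simply does not compose.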
It is important to note that this type of complexity was in fact
discovered by Poincaré and others around the turn of the century. Some
of his sketches are clearly forerunners of the Mandelbrot set--he was
considering these types of ideas. (The complexity issues lost out first
to relativity and then to quantum mechanics in the competition for the
minds of researchers.)
Then there is the Mandelbrot set--which points are in and which are out?
Are you sure? (Go ahead and limit yourself to rational points--we are
talking computers.)
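Note that deciding membership by computation is one-sided. The standard
escape-time test, sketched below in Python, can certify "out" (once |z|
exceeds 2 the orbit is guaranteed to escape), but after any finite
number of iterations the best it can say on the other side is "probably
in"; the iteration cap here is an arbitrary choice:

    def mandelbrot_status(c, max_iter=1000):
        """One-sided membership test: iterate z -> z*z + c from z = 0."""
        z = 0
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:          # certain escape: c is outside the set
                return "out"
        return "probably in"        # no finite test can certify "in"

    print(mandelbrot_status(complex(-1, 0)))       # probably in (it is in)
    print(mandelbrot_status(complex(1, 0)))        # out, quickly
    print(mandelbrot_status(complex(-0.75, 0.1)))  # out, but only slowly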
Take S^1, the unit circle in the complex plane. Define a sequence of
functions f_1, f_2, ... on S^1 as follows: f_i(z) = z^i. Each point
whose argument is a rational multiple of pi returns to 1 infinitely
often under this sequence, but a point with an irrational argument (as
a multiple of pi) never lands on 1 at all. What is important to note is
that there is a dense set of measure 0 on S^1 whose limiting behavior
is completely unpredicted by the behavior of the rest of the set.
Perhaps you prefer to map S^1 to S^1 by repeated applications of f_2?
Then only the binary rationals settle down.
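Both claims can be checked in exact rational arithmetic. In the Python
sketch below, an angle t (a Fraction) stands for the point
z = exp(2*pi*1j*t), so z^k = 1 exactly when k*t is an integer, and f_2
becomes the angle-doubling map t -> 2t mod 1:

    from fractions import Fraction

    def hits_one(t, n_terms=24):
        """Indices k <= n_terms with z**k == 1, for z = exp(2*pi*1j*t)."""
        return [k for k in range(1, n_terms + 1) if (k * t) % 1 == 0]

    def doubling_orbit(t, steps=8):
        """Iterate the angle-doubling map t -> 2t mod 1 (i.e. z -> z**2)."""
        orbit = [t]
        for _ in range(steps):
            t = (2 * t) % 1
            orbit.append(t)
        return orbit

    # A rational angle returns to 1 again and again:
    print(hits_one(Fraction(1, 6)))         # [6, 12, 18, 24]
    # (An irrational angle never would, though Fraction cannot even
    # represent one -- the contrast is exactly the point.)

    # Under z -> z**2, a binary rational collapses onto 1 and stays:
    print(doubling_orbit(Fraction(3, 16)))  # ..., 0 means z = 1
    # while other rationals merely cycle forever:
    print(doubling_orbit(Fraction(1, 3)))   # 1/3 -> 2/3 -> 1/3 -> ...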
So in each case, complex (in the technical sense) behavior is exhibited
by outlandishly simple systems. Somehow the _interactions_ of these
simple and predictable systems become unpredictable.
That is why I consider this to be a closed subject.
Nathan