From: "Timothy C. May" <tcmay@got.net>
To: cypherpunks@toad.com
Message Hash: 522f78d45ac39f1b73f61eb1a15782b66bab854d2e7c405989f50007718d5cc2
Message ID: <v03007809ae7af74d82e0@[207.167.93.63]>
Reply To: <01BBB163.FC317940@geeman.vip.best.com>
UTC Datetime: 1996-10-05 01:44:55 UTC
Raw Date: Sat, 5 Oct 1996 09:44:55 +0800
From: "Timothy C. May" <tcmay@got.net>
Date: Sat, 5 Oct 1996 09:44:55 +0800
To: "cypherpunks@toad.com>
Subject: Re: DESCrack keyspace partitioning
In-Reply-To: <01BBB163.FC317940@geeman.vip.best.com>
Message-ID: <v03007809ae7af74d82e0@[207.167.93.63]>
MIME-Version: 1.0
Content-Type: text/plain
At 7:48 PM -0700 10/3/96, geeman@best.com wrote:
>What about the heuristics of partitioning the keyspace?
>
>Seems to me that a _subset_ of all possible keys is much more likely
>to appear than a random selection from an equidistributed population 0..2^56.
>
>(P)RNG's just aren't that likely to produce a key of 010101010.....
>nor 001100110011... etc etc and I have been thinking about how one might
>formalize
>and exploit this randomness property to increase the probability of
>finding the key sooner.
A PRNG is of course "just as likely" to produce 01010101010101...010101 as
it is to produce, say, 01110011101010....010100001010001001010, or any
other of the possible sequences. The key is that 01010101010101...010101
is a "special sequence," just like a "royal flush" is a "special hand" in
poker.
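
A quick sketch (modern Python, purely illustrative, using a toy 16-bit
keyspace so the frequencies are actually measurable) makes the point: a
PRNG hits the "special" alternating key just as often as any arbitrary
fixed key you care to name.

import random

BITS = 16
special = int("01" * (BITS // 2), 2)   # 0101...01, the "royal flush" key
arbitrary = 0b0111001110101010         # an unremarkable fixed key

trials = 2_000_000
hits_special = hits_arbitrary = 0
for _ in range(trials):
    k = random.getrandbits(BITS)       # one draw from the PRNG
    hits_special += (k == special)
    hits_arbitrary += (k == arbitrary)

# Both empirical frequencies land near 1/2**16, about 1.5e-5.
print(hits_special / trials, hits_arbitrary / trials, 1 / 2**BITS)
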
The formalism for this is "algorithmic information theory," or "descriptive
complexity theory," developed more or less independently by Kolmogoroff,
Chaitin, Martin-Lof, and others, mostly in the 1960s. The core idea is
this: "a number is random if it has no shorter description than itself."
That is, a random number is not compressible.
In the example above, the sequence "01010101010101...010101" has a simple
generating algorithm: "alternating 0s and 1s," and this is a shorter
description than the actual sequence. Naturally, there are relatively few
sequences with short descriptions (translation: "nearly all" sequences
"look to be random").
(One of the exciting conclusions is that no number can ever be _proved_ to
be random, via the Halting Problem, so I use the very term "random" with
this in mind. Substitute a more nuanced "a number believed to be random"
for every occurrence of "random." We can say that a number was generated
from, say, what we believe to be a "naturally random" process, e.g.,
radioactive decay, but we cannot prove the number is random. We can of
course prove it to be nonrandom if we can find a generator which
consistently generates it. This is the essence of von Neumann's comment
that the output of a PRNG algorithm cannot be random.)
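
Von Neumann's point can be made concrete with another small sketch: a
megabit of PRNG output is completely determined by the algorithm plus a
short seed, so that pair is a description vastly shorter than the output
itself, which is exactly a proof of nonrandomness in the sense above.

import random

seed = 12345          # a handful of bytes of "description"
output = random.Random(seed).getrandbits(1_000_000)   # a million output bits

# Anyone holding the algorithm and the seed regenerates the identical bits:
assert random.Random(seed).getrandbits(1_000_000) == output
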
These ideas are closely related to notions of entropy, compressibility,
etc. The standard works in English are by Gregory Chaitin, at IBM. He has
several books out, including a collection of his articles, "Information,
Randomness & Incompleteness." He has also written several readable articles
in "Scientific American."
He also has an extensive Web site, last I looked, so some searches on his
name and/or algorithmic information theory should produce good results.
--Tim May
"The government announcement is disastrous," said Jim Bidzos,.."We warned IBM
that the National Security Agency would try to twist their technology."
[NYT, 1996-10-02]
We got computers, we're tapping phone lines, I know that that ain't allowed.
---------:---------:---------:---------:---------:---------:---------:----
Timothy C. May | Crypto Anarchy: encryption, digital money,
tcmay@got.net 408-728-0152 | anonymous networks, digital pseudonyms, zero
W.A.S.T.E.: Corralitos, CA | knowledge, reputations, information markets,
Higher Power: 2^1,257,787-1 | black markets, collapse of governments.
"National borders aren't even speed bumps on the information superhighway."