1996-04-16 - Re: why compression doesn’t perfectly even out entropy

Header Data

From: Wei Dai <weidai@eskimo.com>
To: “Timothy C. May” <tcmay@got.net>
Message Hash: 778bd930d48a26dbba5f1bfee149eb539254da561066ad0603a8e637143726d3
Message ID: <Pine.SUN.3.93.960415135928.14546B-100000@eskimo.com>
Reply To: <ad931eee1502100491ce@[205.199.118.202]>
UTC Datetime: 1996-04-16 01:52:32 UTC
Raw Date: Tue, 16 Apr 1996 09:52:32 +0800

Raw message

From: Wei Dai <weidai@eskimo.com>
Date: Tue, 16 Apr 1996 09:52:32 +0800
To: "Timothy C. May" <tcmay@got.net>
Subject: Re: why compression doesn't perfectly even out entropy
In-Reply-To: <ad931eee1502100491ce@[205.199.118.202]>
Message-ID: <Pine.SUN.3.93.960415135928.14546B-100000@eskimo.com>
MIME-Version: 1.0
Content-Type: text/plain


On Thu, 11 Apr 1996, Timothy C. May wrote:

> That there can be no simple definition of entropy, or randomness, for an
> arbitrary set of things, is essentially equivalent to Godel's Theorem.

I think what you mean is that there is no simple way to measure
randomness.  Simple, nice definitions of randomness do exist; in fact,
there are two closely related ones.  The first is
entropy, which is a measure of the unpredictability of a random variable.
(A random variable is a set of values with a probability assigned to each
value.  In this case the values are bit strings.)  The second is
algorithmic complexity, which is a measure of the uncompressibility of a
bit string.  Notice that it doesn't make sense to talk about the entropy
of a string or the algorithmic complexity of a random variable.
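
To make the first definition concrete, here is a rough sketch in Python
(illustrative only; it assumes the probability distribution is already
known, which is exactly the hard part in practice):

    import math

    def entropy(probabilities):
        # Shannon entropy in bits: H(X) = -sum of p * log2(p)
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit
    print(entropy([0.9, 0.1]))   # biased coin is more predictable: ~0.47 bits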

Unfortunately both of these values are very difficult to measure.
Algorithmic complexity is provably uncomputable, because given a string
you can't tell when you have found the best compression for it.  Entropy
can in principle be determined, but in practice it's hard because you must
have a good probability model of the mechanism that generates the random
variable.

What we want to do is calculate the entropy of the output of a physical
random number generator.  Now if we have a probability model of the rng,
then we're home free.  For example, if the rng is tossing a fair coin 64
times, then it's easy to calculate that the entropy is 64 bits.  
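
The arithmetic behind that, spelled out (again just a sketch):

    import math

    # 2**64 equally likely 64-bit outcomes, each with probability 2**-64,
    # so H = -log2(2**-64) = 64 bits.
    p = 2.0 ** -64
    print(-math.log2(p))   # 64.0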

But what if the rng is too complex to be easily modeled (for example if
it's a human being pounding on the keyboard)?  Algorithmic information
theory says the entropy of a random variable is equal to its expected
algorithmic complexity.  So if we could calculate algorithmic complexity,
we could estimate the entropy by sampling the output of the rng many
times, calculating the algorithmic complexity of each sample, and taking
the average as the estimated entropy.
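
In code form, the idealized estimator would look something like this
(Python again, purely illustrative; K is a placeholder for exactly the
function we cannot have):

    def estimate_entropy(rng_sample, K, trials=1000):
        # Idealized estimator: average the algorithmic complexity K(s)
        # over many samples s drawn from the rng.  K is a stand-in for
        # the uncomputable quantity discussed above.
        samples = [rng_sample() for _ in range(trials)]
        return sum(K(s) for s in samples) / trials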

Unfortunately, we already know that algorithmic complexity is NOT
computable.  The best we can do, and what is apparently already done in
practice, is to find an upper bound (call it x) on the algorithmic
complexity of a string by trying various compression schemes, divide that
number by a constant (say 10), and use x/10 as a conservative estimate of
the algorithmic complexity.
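
Here is a sketch of that practical approach, using off-the-shelf
compressors (zlib and bzip2 are just my choices of "various compression
schemes"; the divisor 10 is the arbitrary safety factor mentioned above):

    import bz2
    import zlib

    def complexity_upper_bound(data):
        # Smallest compressed size found, in bits: an upper bound (plus
        # compressor overhead) on the algorithmic complexity of data.
        shortest = min(len(zlib.compress(data, 9)),
                       len(bz2.compress(data, 9)))
        return 8 * shortest

    def conservative_entropy_estimate(data, safety_factor=10):
        # Divide the upper bound x by the safety factor and use x/10 as
        # a conservative estimate of the algorithmic complexity.
        return complexity_upper_bound(data) / safety_factor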

(Tim, I know you already understand all this, but your earlier
explanation wasn't very clear.  I hope this helps those who are still
confused.)

> (To forestall charges that I am relying on an all-too-common form of
> bullshitting, by referring to Godel, what I mean is that "randomness" is
> best defined in terms of algorithmic information theory, a la Kolmogorov
> and Chaitin, and explored in Li and Vitanyi's excellent textbook,
> "Algorithmic Information Theory and its Applications.")

A year ago, you recommended to me a book by the same authors titled _An
Introduction to Kolmogorov Complexity and Its Applications_.  Have the
authors written a new book, or are these the same?

Wei Dai






Thread