1995-09-21 - Re: Entropy vs Random Bits

Header Data

From: David Van Wie <dvw@hamachi.epr.com>
To: jamesd <jamesd@echeque.com>
Message Hash: 4dc0c16bb22221d2778d329a89bf61480c2203507447ade980f0a7eaa219355c
Message ID: <306112E3@hamachi>
Reply To: N/A
UTC Datetime: 1995-09-21 07:25:01 UTC
Raw Date: Thu, 21 Sep 95 00:25:01 PDT

Raw message

From: David Van Wie <dvw@hamachi.epr.com>
Date: Thu, 21 Sep 95 00:25:01 PDT
To: jamesd <jamesd@echeque.com>
Subject: Re: Entropy vs Random Bits
Message-ID: <306112E3@hamachi>
MIME-Version: 1.0
Content-Type: text/plain



>Your use of the word random is incorrect:  The throw of a die is
>random, but only contains 2.6 bits of entropy.

The throw isn't random; the data read from the die after it is thrown is
random.  The use of the term in many of the postings I have read indicates
the need for an "unpredictable" quantity in most cases.  This quantity may
be drawn from a source that has entropy, but the quantity itself is random.

>> why invent new terms?  Why use them to mean at least two different
>> things?

>This is an old term of the art, a term of information theory:  We use
>the same word because entropy in information theory has the same
>measure as entropy in thermodynamics.
>
>In both cases the entropy, measured in bits, of an ensemble of
>possible states is the sum of -P(i) * lg[P(i)] over all the possible states.
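
[For concreteness, a minimal sketch of that sum, in Python -- my addition,
not part of the original exchange -- applied to the fair die discussed
above.  It gives lg 6, about 2.585, the "2.6 bits" figure quoted earlier.]

    from math import log2

    def entropy_bits(probs):
        """Shannon entropy in bits: sum of -P(i) * lg[P(i)] over all states."""
        return -sum(p * log2(p) for p in probs if p > 0)

    # A fair six-sided die: six equally likely outcomes.
    die = [1.0 / 6] * 6
    print(entropy_bits(die))  # ~2.585 bits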

In thermodynamics, counting states in this fashion is a dicey proposition,
but I appreciate the clarification.  Still, it seems to me that the property
"bits of entropy" is often substituted for the actual "bits of random data,"
which is just as puzzling as gathering the "entropy of cool steam"!  One
can't _do_ anything with a dimensionless measurement.  By which I mean, the
measure of a property of data is not the data itself, so the usage still
seems odd at times.  However, your explanation does address some of the
phrases I have seen.

Does this mean that entropy is conserved in information theory?

dvw
