1995-09-21 - Entropy vs Random Bits

Header Data

From: David Van Wie <dvw@hamachi.epr.com>
To: "'cypherpunks'" <cypherpunks@toad.com>
Message Hash: c88aab09a81bdc72a9534705a108e242090502e39d3c6b27472e237a47e784a3
Message ID: <3060FDCD@hamachi>
Reply To: N/A
UTC Datetime: 1995-09-21 05:55:00 UTC
Raw Date: Wed, 20 Sep 95 22:55:00 PDT

Raw message

From: David Van Wie <dvw@hamachi.epr.com>
Date: Wed, 20 Sep 95 22:55:00 PDT
To: "'cypherpunks'" <cypherpunks@toad.com>
Subject: Entropy vs Random Bits
Message-ID: <3060FDCD@hamachi>
MIME-Version: 1.0
Content-Type: text/plain



I've been watching the debate and discussion unfold on usable sources of 
random data from environments, user actions, etc.  I have a vocabulary 
question (and something of a bone to pick as a mathematician and physicist). 


Usually, the term "entropy" is used to characterize one of two
different things: (i) random data, as in "300 bits of entropy," and
(ii) the "randomness" of data (i.e., a high degree of variance in a
statistic drawn from it), as in "you can find a lot of entropy in the
low-order bits of a timed interval between keystrokes."  I suspect
that other shades of meaning are intended in other uses as well.
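
For concreteness, a rough sketch of how sense (ii) is often put into
practice might look like the following: keep only the low byte of each
inter-keystroke interval and accumulate those bytes into a seed pool.
The 32-byte pool size, the gettimeofday() timer, and the line-buffered
getchar() input are arbitrary choices for illustration; a real
collector would read raw keystrokes and hash or mix the samples rather
than use them directly.

    /* sketch: build a seed pool from low-order bits of keystroke timing */
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/time.h>

    int main(void)
    {
        uint8_t pool[32];              /* illustrative 256-bit seed pool */
        size_t  filled = 0;
        struct timeval prev, now;

        gettimeofday(&prev, NULL);
        printf("Type until %zu samples have been collected...\n", sizeof pool);

        while (filled < sizeof pool && getchar() != EOF) {
            gettimeofday(&now, NULL);
            /* microseconds since the previous character; keep only the
               low byte, the part that varies most between keystrokes */
            long delta_us = (now.tv_sec - prev.tv_sec) * 1000000L
                          + (now.tv_usec - prev.tv_usec);
            pool[filled++] = (uint8_t)(delta_us & 0xFF);
            prev = now;
        }

        for (size_t i = 0; i < filled; i++)   /* dump collected bytes in hex */
            printf("%02x", pool[i]);
        putchar('\n');
        return 0;
    }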

This is odd.  The term "entropy" describes an aspect of thermodynamic
equilibrium in physical systems.  Although sometimes used as a synonym
for "random," that definition is vernacular, not technical.  In fact,
there is no meaningful relationship between "entropy" and random data
of the type described in the postings related to seed values.  In the
presence of a perfectly suitable and precise mathematical term (i.e.,
random), why invent new terms?  Why use them to mean at least two
different things?
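
For reference, the physical quantity in question is the one defined by
Clausius and Boltzmann; the standard statements are reproduced here
only to make the contrast explicit:

    dS = \delta Q_rev / T      (Clausius: reversible heat over temperature)
    S  = k_B \ln W             (Boltzmann: W = number of accessible microstates)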

dvw





Thread