From: Eric Hughes <hughes@soda.berkeley.edu>
Date: Tue, 13 Apr 93 09:02:00 PDT
To: cypherpunks@toad.com
Subject: forward: cryptanalysis talk abstract
In-Reply-To: <9304121808.AA14458@staff.udc.upenn.edu>
Message-ID: <9304131558.AA16178@soda.berkeley.edu>
MIME-Version: 1.0
Content-Type: text/plain
>> Language recognition is important in cryptanalysis because,
>> among other applications, an exhaustive key search of any cryptosystem
>> from ciphertext alone requires a test that recognizes valid plaintext.
For exhaustive key search on any reasonably good symmetric cipher
(like DES), some simple entropy measure for n-bit-grams should suffice
to distinguish random from non-random output. The other approaches in this
talk seem like overkill in this context. But then again, maybe we're
trying to break Enigma. :-)
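To be concrete, such a test is only a few lines. Here's a minimal
sketch in Python (my own illustration, nothing from the talk; the
6-bits-per-byte cutoff is an assumed, tunable threshold). Wrong-key
output from a good cipher looks near-uniform, around 8 bits of entropy
per byte, while English text falls well below that.

    import math
    from collections import Counter

    def byte_entropy(data: bytes) -> float:
        # Shannon entropy of the byte (1-gram) distribution, in bits/byte.
        if not data:
            return 0.0
        n = len(data)
        counts = Counter(data)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    def looks_like_plaintext(candidate: bytes, threshold: float = 6.0) -> bool:
        # Wrong-key decryptions of a good cipher score near 8 bits/byte;
        # natural-language text scores well under the (assumed) threshold.
        return byte_entropy(candidate) < threshold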
>> Modeling language as a finite stationary Markov process,
A finite stationary Markov process is fancy math-speak for what
a travesty generator does. "finite" means that the total number of
states is finite, which lets you use matrices instead of kernel
integrals, so your averagely educated scientist can follow the
math. "stationary" means that the transition matrix is not
a function of time, that is, it's a constant matrix. This means that
time appears only in an exponent. A "Markov process" is a transition
from one state to another, probabilistically. (Approximately. All
these definitions are meant to explain, not to define.)
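To make the exponent point concrete: if P is the constant transition
matrix and pi_0 the starting distribution, then the distribution after
t steps is pi_0 P^t, so time shows up only as the exponent t. And here
is a minimal travesty generator along these lines (my own sketch, not
code from the talk; the order k and the fallback behavior are
arbitrary choices). The state is the last k characters, and the
transition table is built once from the sample text and never changes,
which is exactly the "stationary" part.

    import random
    from collections import defaultdict

    def build_table(text, k=2):
        # Fixed (stationary) transition table: k-character state -> next chars.
        table = defaultdict(list)
        for i in range(len(text) - k):
            table[text[i:i + k]].append(text[i + k])
        return table

    def travesty(text, k=2, length=200):
        table = build_table(text, k)
        state = text[:k]
        out = [state]
        for _ in range(length):
            nxt = random.choice(table.get(state, text))  # fall back to any char
            out.append(nxt)
            state = state[1:] + nxt  # slide the k-character window
        return "".join(out)

Fed a few kilobytes of English, it babbles something English-shaped;
fed ciphertext, it babbles uniformly, which is the language-recognition
angle in miniature.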
The talk looks interesting, to be sure, but it seems more significant
for making a better /etc/magic for file(1) than it does for
cryptanalysis.
Eric