1993-11-30 - Re: really hiding encrypted data

Header Data

From: Eli Brandt <ebrandt@jarthur.Claremont.EDU>
To: cypherpunks list <cypherpunks@toad.com>
Message Hash: 031f5d1fce51b96676b04228c0609131b415db29d29da04597ad777afc3dcd1d
Message ID: <9311300502.AA24972@toad.com>
Reply To: <9311300120.AA12755@bilbo.suite.com>
UTC Datetime: 1993-11-30 05:07:18 UTC
Raw Date: Mon, 29 Nov 93 21:07:18 PST

Raw message

From: Eli Brandt <ebrandt@jarthur.Claremont.EDU>
Date: Mon, 29 Nov 93 21:07:18 PST
To: cypherpunks list <cypherpunks@toad.com>
Subject: Re: really hiding encrypted data
In-Reply-To: <9311300120.AA12755@bilbo.suite.com>
Message-ID: <9311300502.AA24972@toad.com>
MIME-Version: 1.0
Content-Type: text/plain


> From: jim@bilbo.suite.com (Jim Miller)
> I suspect it is easy to distinguish between the collection of least
> significant bits of a normal picture file and the collection of
> least significant bits of a picture file used to hold some
> encrypted data.

I wrote something about this just a month or two ago.  Rather than
going through it all again, let me summarize and go off in a
different direction.  Yes, simple-minded LSB steganography should be
detectable.  Its statistical effect is to stomp hard on the lowest
bit with white noise, while doing nothing to higher bits.  This
isn't a very plausible noise source.  I've been hoping to find some
time over winter break to brush up on my statistics and put together
a steganography detector.  This sort of analysis might not hold up
in court, as it's always possible that somebody has a bogus ADC or
something, but it's fine for traffic analysis.  I think the trick
will be avoiding false positives on images that have been dithered
at some point during their lives...
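
A minimal sketch of the kind of check meant here, assuming an 8-bit
grayscale image in a NumPy array; the neighbor-agreement statistic,
the threshold, and the function names are illustrative assumptions,
not the detector described above.  The idea: an untouched image keeps
some spatial structure even in its LSB plane, while naive embedding
flattens that plane to coin flips without touching the plane above.

import numpy as np

def plane_structure(img, bit):
    """Fraction of horizontally adjacent pixel pairs that agree in the
    given bit plane; about 0.5 means the plane looks like coin flips."""
    plane = (img >> bit) & 1
    return float((plane[:, :-1] == plane[:, 1:]).mean())

def lsb_looks_stomped(img, tol=0.02):
    """Crude flag for the signature above: the LSB plane is noise-like
    while the next plane up still carries image structure."""
    return (abs(plane_structure(img, 0) - 0.5) < tol and
            plane_structure(img, 1) - 0.5 > tol)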

> This is probably a stupid question, but...is there any way to take
> a chunk of encrypted data (presumably with a high degree of
> randomness) and securely munge it so it looks less random, while
> retaining the ability to reverse the munge and decrypt the data?

You could hit only scattered bits, but this sort of noise isn't
realistic either.  What you want is to end up with plausible
statistics.  One possibility is to construct a model for the
less-significant planes of the types of images (or other data) which
you intend to use.  If you leave a parameter or two free, or
partially free, you should be able to fit some data in without being
blatant about it.  Low data rate, though.  Constructing a decent
data model for this purpose is beyond me.
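
A toy reading of the free-parameter idea, not a real data model: let
the LSB mean of each block stand in for the parameter and nudge it to
carry one bit per block.  Block size, the majority rule, and the
function names are assumptions of this sketch; the main thing it
shows is how low the data rate gets.

import numpy as np

def embed_bit(block, bit, rng=None):
    """Flip just enough randomly chosen LSBs that a strict majority of
    the block's LSBs equals `bit`; the block's LSB mean plays the role
    of the free parameter."""
    rng = rng or np.random.default_rng()
    flat = block.reshape(-1).copy()
    need = flat.size // 2 + 1 - int(((flat & 1) == bit).sum())
    if need > 0:
        picks = rng.choice(np.flatnonzero((flat & 1) != bit),
                           size=need, replace=False)
        flat[picks] ^= 1
    return flat.reshape(block.shape)

def extract_bit(block):
    """One bit per block: is the LSB mean above or below one half?"""
    return int((block & 1).mean() > 0.5)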

A simple approach: add plenty of Gaussian noise, and maybe introduce
some moire crud to make it look lousy.  Then replace every n'th LSB
with a bit of your choice.  This should be plausible enough to get
past most auto-scanners, which probably can't afford to generate too
many false positives.
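
A rough sketch of that recipe, assuming 8-bit grayscale pixels in a
NumPy array and a message given as an array of 0s and 1s; sigma and n
are picked arbitrarily and the moire step is left out.

import numpy as np

def hide(img, bits, n=7, sigma=3.0, rng=None):
    """Blanket the image in Gaussian noise, then overwrite every n'th
    pixel's LSB with a message bit."""
    rng = rng or np.random.default_rng()
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    out = np.clip(np.rint(noisy), 0, 255).astype(np.uint8)
    flat = out.reshape(-1)                    # view into out
    idx = np.arange(bits.size) * n            # assumes the message fits
    flat[idx] = (flat[idx] & 0xFE) | bits.astype(np.uint8)
    return out

def recover(img, nbits, n=7):
    """Read the planted bits back out of every n'th pixel."""
    return img.reshape(-1)[np.arange(nbits) * n] & 1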

>  Jim_Miller@suite.com

   Eli   ebrandt@jarthur.claremont.edu
	 PGP 2 key by finger or e-mail
"They have written customized software for pseudospoofing and style 
 analysis for cyberspatial warfare across the many lists."  -- L. Detweiler




