1998-01-13 - steganography and delayed release of keys (Re: Eternity Services)

Header Data

From: Adam Back <aba@dcs.ex.ac.uk>
To: tcmay@got.net
Message Hash: 3c804188284cc801061b522cf4474857c8adbc626a4091d2f020a144bf6143aa
Message ID: <199801130246.CAA00384@server.eternity.org>
Reply To: <v03102801b0df03d95cd4@[207.167.93.63]>
UTC Datetime: 1998-01-13 03:18:48 UTC
Raw Date: Tue, 13 Jan 1998 11:18:48 +0800

Raw message

From: Adam Back <aba@dcs.ex.ac.uk>
Date: Tue, 13 Jan 1998 11:18:48 +0800
To: tcmay@got.net
Subject: steganography and delayed release of keys (Re: Eternity Services)
In-Reply-To: <v03102801b0df03d95cd4@[207.167.93.63]>
Message-ID: <199801130246.CAA00384@server.eternity.org>
MIME-Version: 1.0
Content-Type: text/plain




Tim May <tcmay@got.net> writes:
> News spool services are already showing signs of getting into this "Usenet
> censorship" business in a bigger way. Some news spool services honor
> cancellations (and some don't).  Some don't carry the "sensitive"
> newsgroups. And so on. Nothing in their setup really exempts them from
> child porn prosecutions--no more so than a bookstore or video store is
> exempted, as the various busts of bookstores and whatnot show, including
> the "Tin Drum" video rental case in Oklahoma City.

One tactic which could protect a USENET newsgroup operator from child
porn prosecutions is to ensure he has no practical way to recognize
such materials until after they have been distributed to downstream
sites.

Using steganography, we could for example adopt a strategy such as
this:

1) Cross-post, and/or post to random newsgroups.

2) Threshold secret split your posts so that only N of M shares are
   required to reconstruct the document.

3) Steganographically encode the eternity traffic.  Pornographic images
   in alt.binaries.* would be suitable cover because there are already
   lots of those.

4) Encrypt the original steganographically encoded posting (i.e.
   encrypt the eternity document before hiding it inside the posted
   image file).

5) Post the decryption key a day or two later, to ensure the post
   reaches the full feed before a censor can recognize the traffic.

The attacker is now forced to delay USENET posts until the key is
posted if he wishes to censor eternity articles.
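Steps 4 and 5 together can be sketched as below.  The cipher is an
illustrative SHA-256 counter-mode keystream, an assumption standing in
for whatever symmetric cipher a real implementation would use:

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Illustrative stream cipher: XOR with a SHA-256 counter-mode
    keystream.  The same call encrypts and decrypts."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# Day 0: encrypt the eternity document and post only the ciphertext
# (steganographically hidden); the key is withheld.
key = os.urandom(32)
document = b"eternity article body"
ciphertext = keystream_xor(key, document)

# A day or two later: the key itself is posted.  Anyone holding the
# already-propagated ciphertext can now decrypt; a censor could not
# have recognized the material before this point.
assert keystream_xor(key, ciphertext) == document
```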

Measures 1) and 2) address the problem of newsgroups not being
carried everywhere.  2) also improves reliability where distribution
is patchy.
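The N of M split in step 2 can be done with Shamir secret sharing over
a prime field.  A minimal sketch (the field size and share coordinates
are arbitrary choices, not from the original design):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for small secrets

def split(secret: int, n_needed: int, m_total: int):
    """Split secret into m_total shares; any n_needed reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(n_needed - 1)]
    def poly(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, poly(x)) for x in range(1, m_total + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

With a 3-of-5 split, the document survives even if two of the five
carrying newsgroups are dropped or cancelled.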

Cancellations can be discouraged by liberal abuse of cancellation
forgeries, which Dimitri Vulis aided greatly by providing easy-to-use
cancelbot software.

A worrying trend is the use of NoCeMs to filter whole news feeds,
whereas the NoCeM rating system as I understood it was designed for
third-party ratings applied by individuals.  NoCeMs could become a
negative if used in this way, because news admins may use them as a
tool to censor large parts of the USENET distribution, in too
centralised a way.

> >The solution I am using is to keep reposting articles via remailers.
> >Have agents which you pay to repost.  This presents the illusion of
> 
> This of course doesn't scale at all well. It is semi-OK for a tiny, sparse
> set of reposted items, but fails utterly for larger database sets. (If and
> when Adam's reposted volumes begin to get significant, he will be viewed as
> a spammer. :-) )

The best criticism of my eternity design to date!  I agree.

But this limitation is difficult to avoid while retaining the same
level of availability.  Trade-offs which improve efficiency will tend
to move away from an existing widespread broadcast medium (USENET)
towards specialised protocols and pull technology (the web hosting
model), leading to identifiable machines serving materials.

We can probably arrange that these servers do not know what they are
serving; however, if the whole protocol is set up specifically for the
purpose of building an eternity service, it will be shut down.

Longer term, perhaps something could be achieved by slowly building up
to larger numbers of servers, but this does not seem such a
mainstream service that it would be easy to get that degree of
uptake.

That is to say, this problem is more than designing protocols which
would be resilient _if_ they were installed on 10,000 servers around
the world; the problem is as much about coming up with a plausible
plan to deploy those servers.

Adam