1994-07-28 - Remailer ideas (Was: Re: Latency vs. Reordering)

Header Data

From: jamiel@sybase.com (Jamie Lawrence)
To: cypherpunks@toad.com
Message Hash: fb78ec6fa9613127945b0f47142cbee3d1da9178dfcdd72b2d3857a5aa88e9fc
Message ID: <9407281831.AB19187@ralph.sybgate.sybase.com>
Reply To: N/A
UTC Datetime: 1994-07-28 18:32:45 UTC
Raw Date: Thu, 28 Jul 94 11:32:45 PDT

Raw message

From: jamiel@sybase.com (Jamie Lawrence)
Date: Thu, 28 Jul 94 11:32:45 PDT
To: cypherpunks@toad.com
Subject: Remailer ideas (Was: Re: Latency vs. Reordering)
Message-ID: <9407281831.AB19187@ralph.sybgate.sybase.com>
MIME-Version: 1.0
Content-Type: text/plain


I was thinking some about remailers and means to create more
effective ones. I think the idea of padding messages has been
kicked around (has anyone implemented it?), but what about random
compression? Some messages are compressed, others are padded, some
are left alone, perhaps shooting for a median message size
(everything coming from this mailer tries to be 9k, or as close as
possible). Of course, this requires a standard so that other
remailers downstream can make the message readable.

Another thing that occured to me is the thought that if there were
an organized web or remailers, remailers could bounce messages
between them automatically- a header could identify the number of
bounces perhaps, I haven't thought too much about the implications
of doing so, but if every message through the web bounced around
30 times with reordering, padding/compression, PGP, etc. then
traffic analysis would be pretty damn hard, I would think, even
for someone monitoring the entire web of remailers' traffic.

This all assumes that:

- remailers can agree on a standard for the above needed features

- a semireliable web of remailers can be maintained

- some method fordealing with denial of service attacks can be
found (a coredump sent to the web could play all sorts of hell, as
could an 'evil' remailer that sneaks in and changes the
how-many-times-through identifier).

The third problem could be delat with by deciding on a size limit-
if a message is over 65k (or whatever) it is bounced- if you're
sending something big, split it.

The first one could probably be done- if someone (grin- if I find
any time soon, this is a project I'd like to do) wrote a nice
package that was easy to install and use with a feature set that
could be agreeable to most.

The second one is the problem, but could be dealt with by the
first by establishing automated communication- when someone
installs the package, send a control message another remailer
already part of the web which 'registers' it, and then the web
consistently tries to maintain itself by checking on the others
and dropping ones that go down off the list. Some sort of method
would have to be found for ones that drop off then later come
online again so that control messages didn't have to be manually
initiated every time, but that shouldn't be that hard.

What are the problems in the above?

Would Perl be a good choice for doing this?

I saw some code from a remailer some time ago, but lost my mailbox
a while back (which could also mena that this is a dry rehash of
an old discussion... apologies if I am rewriting someone elses
thoughts). Anyone still have this?

Am I talking out my ass?


-j
--
"Blah Blah Blah"
___________________________________________________________________
Jamie Lawrence                                  <jamiel@sybase.com>






Thread