From: kevin@elvis.wicat.com
Date: Tue, 31 Jan 95 13:53:01 PST
To: cypherpunks@toad.com
Subject: Frothing remailers - an immodest proposal
Message-ID: <9501312152.AA10208@toad.com>
MIME-Version: 1.0
Content-Type: text/plain

It seems to me that the current remailer web suffers from a fundamental flaw:
it is simply too static. When a remailer disappears, service is
disrupted and messages are lost. Humans have to route their messages
statically through the web, either by hand or with relatively primitive
tools such as the chain script (not to belittle the useful work that has
been done, but it is by no means idiot-proof yet). Basically, the
current web of remailers shows nothing of the dynamic nature that has kept
the internet alive and has offered us a decent chance at truly anonymous
communications, nor is it easy to use to its full potential.
Consider a more dynamic web of remailers. I envision remailers that
actively advertise their presence on the web so that all active
remailers are aware of all other active remailers. This advertising is
to have very low latency so that a new mailer can be known to the web
within minutes (I will address the implementation of this later). Thus,
remailers can constantly be appearing and disappearing without impact on
the web as a whole (I refer to this dynamic web of remailers as a
"froth"). Imagine also that remailers are allowed to dynamically perform
the routing functions that are currently done statically offline (for
reasons I will discuss shortly).
Now, given all this, what do we gain? First, consider my situation
(which is what started me on this line of thinking); my company has a
leased 56K internet link. I cannot consume this valuable bandwidth with
a remailer during business hours. However, if remailers could pop up and
announce their presence to the web, I could run a transient remailer for
several hours in the wee hours of the morning when the line use is
minimal.
Another benefit would be the possibility of remailers with no fixed
physical location. A remailer can shift from host machine to host
machine daily, as long as there is a decent mechanism to locate it.
The use of such transient remailers implies allowing dynamic routing. If
any given remailer may go down or move at any point, it is impractical
to expect users to keep track of which ones are up at the moment and create
static routes in the current manner. The only reasonable solution I have
come up with is to let the remailers themselves choose the routing,
given that they have full knowledge of the current state of the froth.
This also confers additional benefits. It has been pointed out that most
traffic across the web is currently unencrypted. It seems to me that any
such message should be fair game. The remailer should be allowed to
encrypt the message and pick a pseudo-random path through the web,
incorporating transient remailers along the way. This adds greatly to
the encrypted traffic on the net, as well as making sure that the
transient remailers are used. It even provides some degree of privacy to
originally non-encrypted messages.
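The path-picking step might be sketched like this (the froth list, function name, and hop bounds are all illustrative assumptions; the hop-by-hop layered encryption is elided):

```python
import random

def pick_random_path(froth, min_hops=2, max_hops=5):
    # Choose a pseudo-random route through the currently known froth.
    # `froth` is a hypothetical list of remailer names the local
    # remailer has learned from advertisements.
    hops = random.randint(min_hops, min(max_hops, len(froth)))
    return random.sample(froth, hops)

# An unencrypted message entering the froth would then be encrypted
# in layers, one per remailer on the chosen path (innermost last).
path = pick_random_path(["alpha", "beta", "gamma", "delta"])
```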
We could of course still allow users to statically route their messages
through the web, or allow combinations (I want this message routed
through remailers A and B to endpoint C; however, remailers are granted
authority to route this message through a maximum of three additional,
randomly chosen remailers between each step). This has the added
advantage of taking the work of routing messages off the user (unless
they really want it). Think about the proposed extension to MixMaster to
allow separate parts of a multi-part message to be routed separately,
and consider whether you really want to have to do this by hand. I
strongly suspect that most messages are currently routed via boilerplate
scripts, which has to make the job of traffic analysis much easier for
our good friend Eve.
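The combined scheme above (mandatory waypoints plus bounded random padding) could look roughly like this; all names are hypothetical and the three-hop limit is the one from the example:

```python
import random

def expand_route(waypoints, froth, max_fill=3):
    # Honor the user's mandated hops in order, but let the froth
    # insert up to `max_fill` randomly chosen extra remailers
    # between each consecutive pair of mandated hops.
    route = [waypoints[0]]
    for nxt in waypoints[1:]:
        candidates = [r for r in froth if r not in waypoints]
        k = random.randint(0, min(max_fill, len(candidates)))
        route.extend(random.sample(candidates, k))
        route.append(nxt)
    return route

route = expand_route(["A", "B", "C"], ["A", "B", "C", "x", "y", "z", "w"])
```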
By the way, a brief rant on a related topic; people speak of not
trusting remailers any further than necessary, while I am clearly
suggesting granting more authority and trust to the remailers. This
notion of not assigning trust is simply nonsense. When you send a piece
of mail to a remailer, encrypted or not, you are placing complete
trust in that remailer to keep you anonymous and not to forward your
mail straight to the NSA.
This does lead to a related problem, however: if we allow remailers to
pop up at random and join the froth, how do we know that Detweiler
won't set up a number of black-hole remailers that take your mail and
throw it away, disrupting the froth, or forward it to nphard@nsa.gov?
Fortunately, we already have the PGP web of trust model in place and can
use it to good effect in this case. Remailers should simply not route
mail through any remailer whose public key is not trusted unless
explicitly ordered otherwise. This requires remailer operators to
cooperate to some extent to validate one another's remailer keys, but
does confer the great advantage of portable remailers as mentioned
above; if I run a trusted remailer on one machine, I can move it to
another machine, and as soon as I advertise the new address and the PGP
public key, it is a trusted and useful part of the froth.
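As a toy stand-in for the web-of-trust check (real PGP key validation is far richer; the names and key IDs here are made up):

```python
def routable(froth_ads, trusted_keys):
    # Only remailers whose advertised public key is already in our
    # trust set are eligible for routing; everyone else is ignored
    # unless explicitly ordered otherwise.
    return {name: key for name, key in froth_ads.items()
            if key in trusted_keys}

ads = {"soda": "KEY1", "blackhole": "KEYX", "elvis": "KEY2"}
eligible = routable(ads, trusted_keys={"KEY1", "KEY2"})
```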
While we are advertising a PGP key and internet address, we might as
well incorporate other useful information. For instance, remailers could
advertise their maximum latency. This would allow us to send messages
into the froth with the instructions "Keep this message moving for at
least one hour and at most three hours and then deliver to endpoint C"
and allow remailers to make informed decisions about routing this
message (there are some interesting issues in making routing decisions;
if we can assign a cost to the link between each pair of remailers, do
we want to attempt to optimize the route for least cost? Or stick with
random routing to attempt to hinder traffic analysis?). I'm sure that we
can come up with other useful information to assist routing decisions
(available bandwidth, cost per letter, protocols supported, etc.).
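Pulling the pieces together, one advertisement record might carry fields like these (the field names and example values are guesses based on the information suggested above, not a defined wire format):

```python
from dataclasses import dataclass

@dataclass
class RemailerAd:
    # One advertisement broadcast into the froth.
    name: str               # human-readable name (uniqueness unsolved!)
    address: str            # current internet address
    pgp_key_id: str         # the true unique identifier
    max_latency_min: int    # advertised maximum latency, in minutes
    cost_per_letter: float
    protocols: tuple        # e.g. ("cypherpunk", "mixmaster")

ad = RemailerAd("soda", "remailer@soda.example", "0xDEADBEEF",
                60, 0.0, ("cypherpunk",))
```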
Now, the question of implementing remailer advertising. If there is an
existing internet protocol for advertising entities with sufficiently
low latency, I am not aware of it (my background is in Novell and OS/2,
so I'll happily be corrected!) DNS is the closest model to what I want,
but is excessively tied up in bureaucracy and has horrific propagation
times. Thus, we will have to roll our own.
I can think of two reasonable solutions to this problem. The first is a
central authority model: there exist well known servers that each
remailer has to report to when starting up and shutting down. This
model has obvious benefits (ease of use and implementation, minimal
bandwidth usage) and obvious drawbacks: a single point of failure (or at
best very few), distinctly against the cypherpunk philosophy, and a need
for high-bandwidth, stable servers (read: expensive).
The second solution involves a broadcast model; each remailer
periodically broadcasts its presence to the entire net (say every T
minutes). Any process wanting a full list of all remailers simply has to
monitor the broadcast channel for T minutes to get at least a good
approximation. The problem here is one of delay (to locate a remailer, I
might have to wait T minutes) and bandwidth choking. Obviously, spamming
the entire internet with UDP packets advertising remailers would earn us
no friends (I'm not even sure that it's technically feasible). What we
need is a net of machines to carry the broadcast messages with numerous
well known access points.
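The listen-for-T-minutes step can be simulated like so (timestamps in minutes; this is a sketch of the windowing logic, not a network listener):

```python
def collect_froth(broadcasts, now, window_min=10):
    # Approximate the live froth from a stream of
    # (timestamp_min, remailer) broadcast events: any remailer heard
    # within the last `window_min` minutes (the period T) is presumed up.
    return {r for t, r in broadcasts if now - t <= window_min}

events = [(0, "alpha"), (3, "beta"), (12, "alpha"), (14, "gamma")]
live = collect_froth(events, now=15)
```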
By happy coincidence, there exists just such a network (actually,
several of them) that we can subvert for our twisted purposes. It's
called IRC. We can create a dedicated channel that carries the remailer
broadcast messages (to be honest, I have no real idea how to do this, but
one of the wonderHackers out there surely does? otherwise I might
actually be forced to read the RFCs (oh horror!)). I doubt that any
remailer operator will have trouble finding an IRC host to attach to in
order to monitor the broadcasts.
This is obviously a stopgap solution, but provides a method for
jumpstarting the froth without requiring a froth to advertise the froth.
With regard to the bandwidth concerns, I can't imagine that fewer
than a hundred remailers generating <1K messages every few minutes
(about ten minutes feels like a good value for T) could bring IRC to its
knees. When we get more than a hundred or so remailers up, we'll worry
about bandwidth!
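A quick back-of-the-envelope check bears this out, taking a hundred remailers each broadcasting a full 1 KB advertisement every T = 10 minutes:

```python
# Worst case under the stated assumptions: 100 remailers, 1 KB
# per advertisement, one advertisement per 10-minute period.
remailers = 100
msg_bytes = 1024          # generous upper bound per advertisement
period_sec = 10 * 60
bytes_per_sec = remailers * msg_bytes / period_sec
```

That works out to well under 200 bytes per second across the whole channel, which no IRC network should notice.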
As for the inherent latency problem of the broadcast model, the easiest
solution is to have well known servers cache the advertising information
for immediate access. Not too coincidentally, these caching servers look
a lot like the central-authority servers of the first solution.
Thus, we can have the convenience of a central-authority model, with
the dependable broadcast model to fall back on if it becomes necessary.
Unsolved problem, by the way: it would obviously be nice to have
human-readable names for remailers (the PGP key becomes the true unique
identifier, but I'd much rather pick "Soda remailer" from a list than
four lines of armored ASCII). How can we guarantee unique names and/or
prevent problems resulting from collisions?
Obviously, this process hasn't reached more than the pipe-dream stage
yet. I am very interested in comments and proposals before I start
trying to create some trial implementations. Be gentle, though - it's my
first time.
--
Kevin