1994-11-29 - Re: Transparent Email

Header Data

From: eric@remailer.net (Eric Hughes)
To: cypherpunks@toad.com
Message Hash: 88606d1226ea1b2b22363eb89dfb47ae459d25a7f7d821dcdac81f48986cc402
Message ID: <199411290554.VAA02536@largo.remailer.net>
Reply To: <199411282330.RAA00186@omaha.omaha.com>
UTC Datetime: 1994-11-29 04:55:57 UTC
Raw Date: Mon, 28 Nov 94 20:55:57 PST

Raw message

From: eric@remailer.net (Eric Hughes)
Date: Mon, 28 Nov 94 20:55:57 PST
To: cypherpunks@toad.com
Subject: Re: Transparent Email
In-Reply-To: <199411282330.RAA00186@omaha.omaha.com>
Message-ID: <199411290554.VAA02536@largo.remailer.net>
MIME-Version: 1.0
Content-Type: text/plain


   Ok, I should start off by saying I'm not sure I followed everything Eric 
   said in his post, so this might not be a great answer to him.

Well, I didn't address everything in your post, either.  Does that
make us even?

   My posts were predicated on the assumption that transparent encryption 
   and signatures are worthwhile and necessary.

Well, yes, I certainly agree.

My point about key distribution was, partly, that you don't need to
solve it before you get a basic system.  Separation of key distribution
and
encryption allows you to implement the encryption seamlessly and do
the key management by hand.  Since use of keys is more frequent than
distribution, you can make a big win by getting the encryption working
right first.
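
To make the separation concrete, here's a sketch in Python.  The
keyring file, its format, and the cipher are all stand-ins for
whatever you actually use; the point is that the keyring is edited by
hand while encryption happens without user intervention.

    import json
    from cryptography.fernet import Fernet

    KEYRING_PATH = "keyring.json"   # maintained by hand: address -> key

    def load_keyring(path=KEYRING_PATH):
        # Key distribution is manual: you decide what goes in this file.
        with open(path) as fp:
            return json.load(fp)

    def encrypt_for(address, plaintext, keyring):
        # Pure mechanism: look up a key and use it.  A missing entry
        # means "go exchange keys by hand," not a failure of the design.
        key = keyring[address]
        return Fernet(key.encode()).encrypt(plaintext.encode())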

   I think we ought to be moving in that direction, for two reasons.  The
   first is that most people -- including most of us -- aren't willing to do
   much work in order to sign and encrypt our email traffic.

I am still considering the "sign-or-delay" proposal for the toad.com
server, that is, sign your articles to the list or they'll be delayed
and eventually rejected.
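
A sketch of the server-side policy, with a naive marker test standing
in for real signature verification and an invented grace period:

    from datetime import datetime, timedelta, timezone

    GRACE = timedelta(days=3)    # hypothetical grace period

    def dispose(body, received, now=None):
        now = now or datetime.now(timezone.utc)
        if "BEGIN PGP SIGNED MESSAGE" in body:
            return "post"        # signed: send it to the list
        if now - received > GRACE:
            return "reject"      # unsigned for too long: bounce it
        return "delay"           # hold it and re-check later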

   > This approach of generality,
   > however, is notably more complicated than a world where responsibility
   > for security is partitioned, where  each user does not have to worry
   > about all the possible systemic security issues.

   I understand this criticism.  But if we abandon generality, I don't think 
   we can achieve transparency.

The generality I was referring to was non-locality, where decisions
taken remotely by other persons must be considered by the user.  The
analogy in programming languages is scoping, i.e. global vs. local
variables.
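
The analogy in miniature: with a global, correctness depends on
decisions taken elsewhere, while the local version is supplied
everything it needs.

    trusted = True              # set somewhere far away, by someone else

    def check_global():
        return trusted          # depends on remote, non-local state

    def check_local(trusted):
        return trusted          # the caller supplies the decision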

   But the whole point of the system is that 
   there is no need for the two correspondents to worry about exchanging 
   keys:  it all happens automatically.  

I think this is exactly the wrong approach if you want rapid deployment.

Case in point--PEM.  The PEM folks had basic encryption down pretty
quickly and then spent years (like two or three times as many)
figuring out key distribution.  And the key distribution mechanism
they came up with has political problems and very few people use it.
Had PEM released an initial RFC with just encryption etc. in it when
they were done with it, we'd all be using PEM today.  We aren't.

PGP is used more than PEM because its key distribution system allowed
you to use uncertified keys.  PGP isn't used much because it
integrates so poorly with other software.  PGP insists upon doing
every goddamn thing it knows how to do whenever you invoke it.  I tell
PGP to process a message, not to decrypt it.

How to do encryption and decryption is mechanism.  How I decide what
keys I trust is policy.  Separation of mechanism and policy is a good
thing.  (Good defaults for policy also help.)
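
In code, the split might look like this (the names are invented for
illustration, not a real PGP interface):

    def process_message(msg, key_is_trusted, decrypt):
        # decrypt() is pure mechanism; key_is_trusted() is the
        # caller's policy, passed in rather than hard-wired in.
        if not key_is_trusted(msg["key_id"]):
            raise ValueError("policy says stop: untrusted key")
        return decrypt(msg["ciphertext"], msg["key_id"])

    def accept_all(key_id):
        # A sensible default policy can ship with the mechanism.
        return True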

A package which has this right--swIPe.  The initial swIPe code works,
and all it does is encryption.  Right now you have to do key
management manually.  That's OK, because that can be another
subsystem.

   On the low end, we have a default web of trust, which is 
   sort of crummy because it's not terribly difficult to spoof.  

   But my goal was to meet this criticism by making the system open to other 
   webs, and to place as few restrictions as possible on people who want to 
   create and use alternative webs.

My point is that you don't need webs at all.  They have their uses, to
be sure, but they aren't the last word in key distribution that
they're often made out to be.  Bilateral distribution of keys for
electronic-only communication can work out just fine, providing enough
different communications channels are available.  There was a post I
made last year about the email provider signing keys which is relevant
here.  (If someone could repost it, ...)

   I didn't 
   mean to imply that all public keys ought to be on the default web.  I 
   meant that you ought to be able to get *a* public key for an arbitrary
   address from the default web.

The publication of a key, however, reveals the _existence_ of that
arbitrary address.  On the other hand, if that address sends a
message, then the public key should be available to those who see it.
For Usenet participation, for example, a default key repository is
useful and does not affect forward secrecy, which has already been
compromised by posting a public message with signature.

   Basically, it comes down to this:  in a transparent system, if you want 
   to mail me, somehow your mailer will have to get a copy of my key without 
   your doing anything about it.  

That's a good final goal, but I really think it ought not to be
included in the first subgoal.

There are substantial problems with achieving both transparent key
access from a single mailer and assurance against that mailer being
spoofed.  All such solutions seem to require global, non-partitionable
information, making the problem difficult, though not insurmountable.  If,
though, the mailer runs on trusted hardware and has multiple links to
the outside world, automated solutions seem possible.

   The problem is:  how do we let the machine make this decision on its
   own, without imposing a single web of trust on users?

In my ideal view, keys should be certified by the communications
providers.  Since the comm providers are necessarily involved with
interposition attacks (it's their equipment, after all), participation
by them seems desirable and, in some sense, minimal.

Let us again restrict the problem to mappings between email addresses
and keys.  This restriction, as noted, covers a huge percentage of
real interaction.  The provider of email services has agreed to send
messages that are addressed to X to X's mailbox, without alteration.
If you get the provider to sign X's key and transmit it to the world,
then X, via another channel, can get a copy of that signed key and
verify that the provider is not interposing.
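
As a sketch (Ed25519 is an arbitrary choice and the binding format is
invented): the provider signs the pair (address, key), and X fetches
the signed binding over a second channel to confirm the published key
is really X's.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    provider = Ed25519PrivateKey.generate()   # provider's signing key

    def sign_binding(address, user_key):
        # The provider attests: mail for this address reaches the
        # holder of this key, without alteration.
        return provider.sign(address.encode() + b"|" + user_key)

    def user_checks(address, my_key, published_sig, provider_pub):
        # X verifies, via another channel, that the provider published
        # X's real key; a failure here suggests interposition.
        provider_pub.verify(published_sig,
                            address.encode() + b"|" + my_key)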

Likewise, the internet provider agrees to deliver mail addressed to
users at site Y to Y's mail daemon.  Y has the same interest in
detecting spoofing vis-a-vis the internet provider as X does vis-a-vis
Y.  The argument is recursive, and bottoms out at the other end of the
communication link.
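
Continuing the sketch above, the recursion is just chain
verification: each hop certifies the key one level down, and checking
bottoms out at a key you already hold for the far end of the link.

    def verify_chain(anchor_pub, chain):
        # chain: (blob, signature, next_public_key) triples, ordered
        # from the far end of the link down toward the user.
        pub = anchor_pub
        for blob, sig, next_pub in chain:
            pub.verify(sig, blob)   # raises if any link is forged
            pub = next_pub
        return pub                  # the leaf key, now certified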

Clearly, an exhaustive analysis of internet protocols in terms of
these explicit promises and obligations would be enormous.  It would
also be a firm foundation for secure communications.  Nevertheless,
its benefits might be approximated by creating provider keys and
site-signing keys.

Eric