1994-11-28 - Re: Transparent Email

Header Data

From: Alex Strasheim <alex@omaha.com>
To: eric@remailer.net
Message Hash: a103c1ed66688090915f08fc0d2bed964ebf63956470a97bd507eecd768d69bf
Message ID: <199411282330.RAA00186@omaha.omaha.com>
Reply To: <199411282226.QAA00093@omaha.omaha.com>
UTC Datetime: 1994-11-28 23:30:02 UTC
Raw Date: Mon, 28 Nov 94 15:30:02 PST

Raw message

From: Alex Strasheim <alex@omaha.com>
Date: Mon, 28 Nov 94 15:30:02 PST
To: eric@remailer.net
Subject: Re: Transparent Email
In-Reply-To: <199411282226.QAA00093@omaha.omaha.com>
Message-ID: <199411282330.RAA00186@omaha.omaha.com>
MIME-Version: 1.0
Content-Type: text


-----BEGIN PGP SIGNED MESSAGE-----

Ok, I should start off by saying I'm not sure I followed everything Eric 
said in his post, so this might not be a great answer to him.

My proposal isn't for an all-inclusive, everything-to-all-people
security system.  It certainly wouldn't preclude people from using other,
stand-alone systems, from using multiple sets of keys, or from doing
whatever else they wanted.

My posts were predicated on the assumption that transparent encryption 
and signatures are worthwhile and necessary.  By "transparent encryption 
and signatures", I mean email systems that work and look pretty much like 
the programs we're using now -- elm and eudora, for example -- but which 
do crypto work automatically, behind the scenes.
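
To pin down what I mean by "behind the scenes", here's a rough sketch of
the kind of hook I have in mind.  None of these names refer to any real
program; they're placeholders for whatever crypto engine and key lookup
the mailer actually uses.

    # Sketch only: the mailer calls this between "user hits send" and
    # the message going out, and never asks the user anything.
    def lookup_public_key(address):
        # placeholder: a real mailer would consult a cache, a key
        # server, finger, or whatever it's been configured to use
        return None

    def send_transparently(body, sender, recipient, sign, encrypt, deliver):
        their_key = lookup_public_key(recipient)
        signed = sign(sender, body)              # always sign outgoing mail
        if their_key is not None:
            deliver(recipient, encrypt(their_key, signed))
        else:
            deliver(recipient, signed)           # fall back to signed cleartext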

I think we ought to be moving in that direction, for two reasons.  The
first is that most people -- including most of us -- aren't willing to do
much work in order to sign and encrypt our email traffic.  If there's any
penalty at all in terms of convenience, most people probably won't use a
secure system.  The second reason is that I believe it's only a matter of
time until someone else institutes a transparent, reasonably secure email
system.  

What would happen if Microsoft instituted a secure email system for their
online customers, but took control over keys away from users?  I think
that the result would be that everyone would embrace the new system,
because it would be a gigantic improvement over the status quo.  We would
compare the new system to an idealized vision in which everyone has total
control over their keys and over whom they trust, and in which law
enforcement officials can't retrieve secret keys at will from some central
repository. 
But everyone else would compare the new system to what we have now:  an
email system which is vulnerable to forging, and which isn't secure enough
to transmit credit card numbers.

I think that if we can't field an alternative, usable system, something 
that's practical and easy to use, we're going to lose by default.  I'm 
not under any delusion that what I've proposed is some kind of magic 
answer.  I'm not a heavy hitter, in a technical sense, like Eric, Hal, 
Tim, and many of the others here are.  But at the same time, I think 
there's some need for compromise.  We need a transparent system that can 
embrace people who aren't willing to put a lot of effort into security, 
but at the same time is able to accommodate people who want to take more 
trouble for the sake of their privacy.

> This whole area of key distribution has generated much confusion.  A
> perfect world is described, and then everyone is assumed to
> participate in achieving this world.  This approach of generality,
> however, is notably more complicated than a world where responsibility
> for security is partitioned, where  each user does not have to worry
> about all the possible systemic security issues.

I understand this criticism.  But if we abandon generality, I don't think 
we can achieve transparency.  And as I said before, I think a transparent 
system is going to come out on top.  It's true that what I proposed is 
complicated, but a lot of the net is pretty complicated when you take off 
the lid.  I think it could still be made usable.
 
> Proposition: You don't need to be responsible for making sure that the
> other person isn't being spoofed; that's their responsibility.
>
> A common situation where this proposition makes a significantly
> simpler system is exactly in the case described, where you and your
> email correspondents wish to exchange keys.  Suppose, in addition,
> that you two met online and that your only channels of communication
> are electronic.  The goal here is to create persistence of identity;
> identification with a physical body is not needed.

Actually, I wasn't trying to identify keys with physical bodies, but 
rather with email addresses.  But the whole point of the system is that 
there is no need for the two correspondents to worry about exchanging 
keys:  it all happens automatically.  People who are doing unusual 
things, like creating nyms, would of course be free to take unusual 
actions.
 
> In the PGP case you start with your own key, which you trust, then
> look for a chain of signatures to the destination.  This chain can be
> rather cumbersome to produce.  It's overkill, as well, since all you
> really needed to know is that the key was not being translated on your
> own end.  The PGP trust chain largely accomplishes that, true, but not
> as simply as possible.

I'm not sure I follow the last part of this.

> Alternatively, you save the first piece of email that you receive from
> your correspondent; it has a digital signature on it.  Now _by
> whatever means_, you obtain a public key by which to verify that
> signatures on email you receive are the same.  You yourself need to
> ensure that you aren't getting spoofed; you can do this by, say,
> having your correspondent send mail to two different locations, or by
> using a second channel to obtain the key by, or by using a PGP trust
> chain, if one is available.

Again, I go back to my goal (which wasn't stated clearly enough in my 
original posts, to be sure) of transparency, and of trying to get the 
bulk of day to day email encrypted.
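
That said, if I'm reading the "save the first piece of email" idea
right, it fits a transparent mailer pretty naturally; something like
this rough sketch, where the names are made up and verify() stands in
for the real signature check:

    # Persistence of identity: pin the key first associated with an
    # address, and complain if later mail doesn't verify against it.
    pinned = {}   # address -> the public key we decided to trust

    def check_mail(address, message, claimed_key, verify):
        if address not in pinned:
            # First contact: confirming this key (second channel, second
            # mailbox, a trust chain if one exists) is the user's problem.
            pinned[address] = claimed_key
            return "new correspondent -- confirm this key out of band"
        if verify(pinned[address], message):
            return "same identity as before"
        return "WARNING: signature doesn't match the key pinned for " + address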

> The original model for public key communications seems to have been
> one channel with an interposer.  The real world is much more
> complicated than that.  One can obtain good protection, at least as
> good as a trust chain, by crossing organizational boundaries.  The
> argument that trust chains are better because they are cryptographic
> carries no weight; the decision at each link to make a signature is of
> social, not cryptographic, character.

I agree with this 100%.  This is part of what I was trying to 
accommodate.  On the low end, we have a default web of trust, which is 
sort of crummy because it's not terribly difficult to spoof.  
Cryptographically, it's very sound, but socially, it's quite weak.

But my goal was to meet this criticism by making the system open to other 
webs, and to place as few restrictions as possible on people who want to 
create and use alternative webs.  Those alternative webs could tie email 
addresses and keys to physical persons, or to nyms, or to anything else 
they wanted.  They could be as rigid or as lax as they pleased.  And we 
as users could decide which webs we were willing to trust.

> In particular, the design of PGP that ties key management inextricably
> to encryption is bad and will contribute to an inflexibility that will
> eventually sink PGP if it is not corrected.

Could you elaborate on this?

>    Perhaps we would have 
>    a default web, which would have everyone's key in it.  
> 
> This is a really bad idea.  Some "public" keys should not be made
> public, but rather revealed only to the correspondent.  Forward
> secrecy is the reason.  If the public key has never been in the
> possession of an opponent, and assuming the results of the public key
> operation yield little or no information about the modulus, then when
> the keys are changed and destroyed, no amount of factoring can find
> the private key because the public key isn't around to factor.

You could still do this.  I did not phrase this well, and I can see where 
your concern comes from.

I have a few nyms, and I don't publish all of my public keys.  I didn't 
mean to imply that all public keys ought to be on the default web.  I 
meant that you ought to be able to get *a* public key for an arbitrary 
address from the default web.  I haven't published the public keys for 
the nyms I've used over the past couple of years, or tried to associate 
them with my email addresses.  That would be, as you pointed out, a bad 
idea.

But at the same time, I have a public key for my address here, 
alex@omaha.com, that I want to publish as widely as possible.  Right now, 
it's available via finger at astrashe@nyx.cs.du.edu.  The system I 
proposed is just an elaborate (probably too elaborate) substitute for 
getting the key via finger, with the intention of making transparent 
secure mail possible.
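
For what it's worth, pulling a key over finger is about as simple as
network code gets, which is part of why I like it as a stopgap.  A
mailer could do something along these lines (a rough sketch, in Python,
just to show the shape of it):

    # finger is just: connect to port 79, send the user name and a CRLF,
    # and read until the server closes the connection.
    # e.g. finger_query("nyx.cs.du.edu", "astrashe")
    import socket

    def finger_query(host, user):
        s = socket.create_connection((host, 79), timeout=30)
        try:
            s.sendall(user.encode("ascii") + b"\r\n")
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        finally:
            s.close()
        return b"".join(chunks).decode("ascii", "replace")

The key block itself is whatever sits between the "-----BEGIN PGP PUBLIC
KEY BLOCK-----" and "-----END PGP PUBLIC KEY BLOCK-----" lines of the
listing.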

Basically, it comes down to this:  in a transparent system, if you want 
to mail me, somehow your mailer will have to get a copy of my key without 
your doing anything about it.  More importantly, your mailer will have to 
decide if it should trust the key it retrieves without asking you.  
Otherwise, it wouldn't be transparent.

The problem is:  how do we let the machine make this decision on its 
own, without imposing a single web of trust on users?  That's what I'm 
trying to get at.
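
Roughly, I picture something like this.  It's only a sketch: the "web"
objects and the vouches_for() call are made up, standing in for whatever
interface a real web of trust would export to the mailer.

    # The user picks which webs to consult and in what order; the mailer
    # just walks the list.  A web can vouch for a key (True), denounce it
    # (False), or have no opinion (None).
    def key_is_trusted(address, key, webs):
        for web in webs:
            answer = web.vouches_for(address, key)
            if answer is True:
                return True       # a web the user trusts vouches for it
            if answer is False:
                return False      # a web the user trusts says it's bogus
        return False              # nobody vouched; don't trust silently

The important part is that the list of webs belongs to the user, not to
the system.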
 
> Eric
> 

Thanks for the thoughtful response; I appreciate it.

==
Alex Strasheim | finger astrashe@nyx.cs.du.edu
alex@omaha.com | for my PGP 2.6.1 public key

-----BEGIN PGP SIGNATURE-----
Version: 2.6.2

iQCVAwUBLtpn7xEpP7+baaPtAQHoIgP/SmOcR2a8PXEwHdF5ROfTmQ2GVxg0ZhlY
LYvUKFB+phV7RZAjlP3OCpEjchxTpzaiJFgM4+wtKulrD0ZdGfyF6iGM+K8OTAql
lWMfJ25/AvfTlqfBlZ0TAX4hkEWF5r3D65TpncgR7VOF8XErmFPPEvVCvZhx6Rd/
koZmgdTIoXg=
=vJqj
-----END PGP SIGNATURE-----



