From: doug@netcom.com (Doug Merritt)
Date: Wed, 3 Nov 93 20:37:34 PST
To: cypherpunks@toad.com
Subject: trusting software
Message-ID: <9311040435.AA23042@netcom5.netcom.com>
MIME-Version: 1.0
Content-Type: text/plain
How does one know that one can trust the software one is using
on one's own machine for encryption, mailing, and so on? Worse yet, how
can one know whether to trust the software doing anonymous or other
remailing on other machines? Web-of-trust schemes are only statistically
reliable because of concerns like these.
These are rhetorical questions; the point is, I just realized that I
didn't explain myself last month when I talked about an algorithm for
verifying *intentions*. A number of people emailed me to complain that
authentication should be a matter of establishing a person's *real*
identity -- a valid issue, but I was off on a tangent and neglected
to explain my actual point:
Imagine you have a single piece of software that runs a DC-net over
the Internet by being instantiated on many nodes. Imagine that you're
concerned that the NSA or someone will spoof a whole bunch of nodes
that pretend to be the Real Software (which ordinarily helps guarantee
anonymity, defeat traffic analysis, and so on) but actually work to
defeat the Real Software and the people who use it.
One would like to somehow guarantee that when one talks to remote software
as part of a web-of-trust scheme, the software really is the One and
Only True Software and not some deceitful counterfeit.
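To see why ordinary authentication falls short here, consider a rough
modern sketch (in Python; the function names and the reference digest
are just placeholders, not anything from a real system) of the obvious
check: compare what a remote node claims to be running against a
published digest of the Real Software.

    import hashlib

    # Hypothetical published digest of the One and Only True Software.
    # (In reality it would be distributed out of band and signed.)
    REFERENCE_DIGEST = hashlib.sha256(b"the real dcnet node binary").hexdigest()

    def digest_of(binary_bytes):
        """Digest of the bytes a node claims to be executing."""
        return hashlib.sha256(binary_bytes).hexdigest()

    def claims_to_be_real_software(claimed_binary_bytes):
        """Naive check: does the claimed binary match the published digest?

        The flaw: a counterfeit node controls what bytes it *claims*
        to be running, so it can report the genuine binary while
        executing something else entirely.  The check authenticates a
        string of bits, not the behavior of the process on the other
        end of the wire.
        """
        return digest_of(claimed_binary_bytes) == REFERENCE_DIGEST

A spoofed node passes this check trivially; the bits it reports can be
right while its behavior is wrong.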
It is in *this* connection that one might wish to authenticate the
unique identity of multiply instantiated *software* by a hypothetical
process that ascertains the *intentions* of that software instantiation.
I previously phrased this as if it were a person that the hypothetical
algorithm was authenticating, leading to understandable objections. Apologies;
I had gotten into a digressive train of thought about using it with people
before I posted, and it's taken me this long to realize that I never
communicated clearly.
I still haven't described the algorithm ("this margin is too narrow" :-),
but I hope it's clearer now that such an algorithm is potentially more
realizable for software than it would be for people.
Doug