1996-10-22 - Re: one-body, one-cert

Header Data

From: Hal Finney <hal@rain.org>
To: cypherpunks@toad.com
Message Hash: 5fd5579969c156a0197725c5067378f2b8f1f9129af41b772a65156cd7ddf16d
Message ID: <199610221611.JAA27979@crypt>
Reply To: N/A
UTC Datetime: 1996-10-22 16:12:04 UTC
Raw Date: Tue, 22 Oct 1996 09:12:04 -0700 (PDT)

Raw message

From: Hal Finney <hal@rain.org>
Date: Tue, 22 Oct 1996 09:12:04 -0700 (PDT)
To: cypherpunks@toad.com
Subject: Re: one-body, one-cert
Message-ID: <199610221611.JAA27979@crypt>
MIME-Version: 1.0
Content-Type: text/plain


[This is picking up a discussion from the SPKI list on the topic of
anonymous credentials.  I think cypherpunks is a more appropriate forum.]

From: Carl Ellison <cme@cybercash.com>
> Yes.  This protects me from the evil CA.  However, the original posting
> person wanted to make sure that once he had seen bad behavior on my part
> (where "I" am identified by my DNA) that every future use of any key by me
> will gain me no access to his service.
>
> I believe these two desires are fundamentally opposed -- irreconcilable.
> If someone does a bad thing with some key I'm supposed to control, then I
> want to be able to write that key off and get another one to give me
> access.  If *I* do the bad thing, then he doesn't want me ever to get
> access again.
>
> This is resolvable only if we have a way to detect the DNA behind the actor
> -- but we don't.  All we have are keys which a person controls -- until he
> doesn't.

I think in a system like this, participants would need to understand the
limitations and design the protocols with them in mind.  Ideally, getting
access to someone else's keys would be made very difficult, more so than
it is today, since the consequences could be so much more drastic.

It would be as though evil spirits roamed the world and could invade the
consciousness of careless people, taking over their bodies and forcing
them to commit horrendous acts.  In such a world we could expect people
to take whatever careful precautions are possible to avoid such threats.

Furthermore, systems which apply punishments for bad behavior would have
to be aware of the possibility of such occurrences.  It would generally
not be appropriate to impose draconian consequences for a single bad act.
Rather, the possibility would always have to be considered that such
actions were not the fault of the apparent perpetrator.

We might expect to see systems in which single instances of misbehavior
are forgiven, but patterns of repetitive bad conduct are punished.  I
believe protocols similar to the ones Jim McCoy mentioned from Chaum
can provide very flexible (although possibly inefficient) means for
controlling credentials in a variety of ways.
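
To make the "forgive single lapses, punish patterns" idea concrete, here
is a rough sketch in Python of the bookkeeping a service might keep on
top of such credentials.  The window and threshold are numbers I made up
for illustration, and none of this touches how Chaum-style credentials
are actually issued or shown -- it is only the policy layer.

    from collections import defaultdict
    import time

    WINDOW = 90 * 24 * 3600    # look back 90 days (hypothetical)
    MAX_INCIDENTS = 2          # tolerate up to 2 incidents in the window

    incidents = defaultdict(list)   # credential id -> incident timestamps

    def record_incident(cred_id, when=None):
        # Log one reported instance of misbehavior for this credential.
        incidents[cred_id].append(when if when is not None else time.time())

    def in_good_standing(cred_id, now=None):
        # A credential stays in good standing unless it shows a pattern:
        # more than MAX_INCIDENTS within the look-back window.
        now = now if now is not None else time.time()
        recent = [t for t in incidents[cred_id] if now - t <= WINDOW]
        return len(recent) <= MAX_INCIDENTS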

In the context of limited-participation credentials, then, a reasonable
policy would be not to strictly cut off anyone who broke the contract,
but rather to consider extenuating circumstances.  For example, if a
credential is stolen and the owner is aware of it (say it is locked in
a hardened smart card), and he can prove that he made a good faith effort
to notify service providers that they should invalidate the credential,
then this would excuse misbehavior.  Even without this kind of evidence,
it may be appropriate to give people a limited number of second chances.
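
To illustrate the "extenuating circumstances" rule in the same style, a
verifier's decision might look something like the sketch below.  The
revocation-notice record and the size of the second-chance allowance are
assumptions of mine for the sake of the example, not part of any
existing protocol.

    SECOND_CHANCES = 1   # hypothetical allowance of unexcused incidents

    def excused(incident_time, notice_time):
        # An incident is excused if the holder had already filed a
        # good-faith revocation notice before the incident occurred.
        return notice_time is not None and notice_time <= incident_time

    def still_admitted(incident_times, notice_time):
        # Cut off access only once the unexcused incidents exceed the
        # small fixed allowance.
        unexcused = [t for t in incident_times
                     if not excused(t, notice_time)]
        return len(unexcused) <= SECOND_CHANCES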

True, this allows an unscrupulous person to violate the rules some
number of times and get away with it.  However, this may be tolerable
as a generally low level of background noise.  No society is perfect.
Being able to largely deter contract violations while still maintaining
privacy for the participants would in my opinion be superior to the
world we seem to be heading towards, one in which dataveillance, profiling,
and tracking are the norm.

Hal




