1995-11-22 - Re: towards a theory of reputation

Header Data

From: Wei Dai <weidai@eskimo.com>
To: Hal <hfinney@shell.portal.com>
Message Hash: 9f88c1ed71a0325270d50e6d588b2b8c2eb83092b763409629926ea19f2ac1d7
Message ID: <Pine.SUN.3.91.951121223454.2539A-100000@eskimo.com>
Reply To: <199511212332.PAA24563@jobe.shell.portal.com>
UTC Datetime: 1995-11-22 12:41:12 UTC
Raw Date: Wed, 22 Nov 1995 20:41:12 +0800

Raw message

From: Wei Dai <weidai@eskimo.com>
Date: Wed, 22 Nov 1995 20:41:12 +0800
To: Hal <hfinney@shell.portal.com>
Subject: Re: towards a theory of reputation
In-Reply-To: <199511212332.PAA24563@jobe.shell.portal.com>
Message-ID: <Pine.SUN.3.91.951121223454.2539A-100000@eskimo.com>
MIME-Version: 1.0
Content-Type: text/plain


On Tue, 21 Nov 1995, Hal wrote:

> This is an interesting approach.  However this seems to fold in issues of
> reliability with issues of quality and value.  If I have a choice of two
> vendors, one of whom produces a product which is twice as good, but there
> is a 50% chance that he will abscond with my money, I am not sure how to
> value him compared with the other.  It seems like the thrust of the
> analysis later is to determine whether people will in fact try to
> disappear.  But that is not well captured IMO by an analysis which just
> ranks people in terms of "utility" for the price.

Our intuitive notion of reputation combines the issues of reliability and
quality.  In your example, whether you choose the reliable vendor or the
unreliable one depends on whether you are risk-seeking or risk-averse.
You must prefer one or the other, or be indifferent.  In general, how you
make these choices depends on your values and your expectations of what
the vendors will do, which include both expectations of reliability and
expectations of quality.
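
To make the risk-attitude point concrete, here is a toy sketch in Python
(the 50/50 vendor example is yours; the power-utility form and the
numbers are just my illustration):

def expected_utility(outcomes, utility):
    """Expected utility of a list of (probability, payoff) pairs."""
    return sum(p * utility(x) for p, x in outcomes)

vendor_a = [(1.0, 1.0)]              # reliable: certain payoff of 1
vendor_b = [(0.5, 2.0), (0.5, 0.0)]  # twice as good, absconds half the time

# Power utility u(x) = x**a: a > 1 is risk-seeking, a < 1 is risk-averse,
# a == 1 is risk-neutral.
for a in (2.0, 1.0, 0.5):
    u = lambda x, a=a: x ** a
    ea, eb = expected_utility(vendor_a, u), expected_utility(vendor_b, u)
    choice = "A" if ea > eb else "B" if eb > ea else "either"
    print("a=%.1f: A=%.3f B=%.3f -> prefer %s" % (a, ea, eb, choice))

The risk-seeker picks the better-but-riskier vendor, the risk-averse
buyer picks the reliable one, and the risk-neutral buyer is exactly
indifferent, which is all I am claiming.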

Can you elaborate on why the analysis is inadequate?  (I know it
probably isn't adequate, but why do you think so?)

> I am not sure about this last point.  It seems to me that a good
> reputation is one which is most cost-effective for its owner.  Whether it
> is good for social stability is not relevant to the person who is
> deciding whether to use it.  ("But what if everyone behaved that way?
> How would you feel then?")  It may be nice for the analyst but not for
> the participant.

Right, I'm speaking from the point of view of the analyst when I say
"good", but it also applies to individual participants.  Each person does
what he thinks is in his best interest, but if this turns out to be
unstable for the reputation system as a whole, then the system won't last
very long, so there is little point in getting involved in the first
place.  In other words, I would not choose to participate in an unstable
reputation system.

> I don't really know what the first one means.  There are a lot of
> different ways I can behave, which will have impact on my reputation, but
> also on my productivity, income, etc.  There are other ways I can damage
> my reputation than by cheating, too.  I can be sloppy or careless or just
> not work very hard.  So the first two are really part of a continuum of
> various strategies I may apply in life.  The second is pretty clear but
> the first seems to cover too wide a range to give it a value.

You are right that there is a continuum of strategies, but I assume there
is a discontinuity between completely throwing away your reputation and
any other strategy.  So the operating value is the maximum profit you can
make by optimizing among all strategies other than disappearing.
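
As a rough sketch (the present-value formula and the discount rate below
are my own illustrative assumptions, not part of the original post), the
choice facing an actor reduces to comparing these two numbers:

def operating_value(profit_per_period, discount_rate):
    """Present value of the best honest strategy: a perpetual profit
    stream, discounted (profits arrive at the end of each period)."""
    return profit_per_period / discount_rate

def should_defect(throwaway_value, profit_per_period, discount_rate):
    """Discard the reputation only if the one-time gain beats the
    discounted value of all future honest profits."""
    return throwaway_value > operating_value(profit_per_period, discount_rate)

# 100/period at a 5% discount rate is worth 2000 up front, so a
# one-shot fraud must net more than that to be worth it.
print(should_defect(1500, 100, 0.05))   # False: keep the reputation
print(should_defect(2500, 100, 0.05))   # True: disappearing pays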

> It would be useful to make some of the assumptions a bit clearer here.
> Is this a system in which cheating is unpunishable other than by loss of
> reputation, our classic anonymous marketplace?  Even if so, there may be
> other considerations.  For example, cheating may have costs, such as
> timing the various frauds so that people don't find out and extricate
> themselves from vulnerable situations before they can get stung.  Also,
> as has been suggested here in the past, people may structure their
> interactions so that vulnerabilities to cheating are minimized, reducing
> the possible profits from that strategy.

When I wrote the original post I was thinking of the classic anonymous
marketplace, but I think the analysis applies to other types of markets
as well.  Cheating costs can easily be factored into the throw-away
value, and an important question for any theory of reputation to answer
is how to structure transactions so as to minimize this value.  Many more
assumptions need to be made to model a particular reputation system, but
I was trying to list some general properties that might apply to all
reputation systems.
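
For example (a toy model of my own, not a worked-out protocol), splitting
one large exchange into installments caps what a cheater can walk away
with, and therefore caps the throw-away value:

def max_at_risk(total, installments):
    """Largest sum the counterparty ever holds before delivering,
    if payment and delivery alternate in equal installments."""
    return total / installments

total = 1000.0
for n in (1, 4, 10):
    print("%2d installment(s): at most %7.2f at risk" % (n, max_at_risk(total, n)))
# With one installment the throw-away value is the whole sum; with
# ten, cheating nets at most a tenth of it.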

> It might be interesting to do something similar to Axelrod's Evolution
> of Cooperation, where (human-written) programs played the Prisoner's
> Dilemma against each other.  In that game, programs had reputations in
> a sense, in that each program when it interacted with another
> remembered all their previous interactions, and chose its behavior
> accordingly.  The PD is such a cut-throat game that it apparently
> didn't prove useful to try to create an elaborate reputation-updating
> model (at least in the first tournaments; I understand that in later
> versions some programs with slightly non-trivial complexity did well).

The tit-for-tat program that won both tournaments uses an extremely
simple reputation algorithm: it expects the next action of the other
player to be the same as the last action.  This is an example of what I
called a "good" reputation algorithm: it serves the self-interest of the
entities that use it, it is cheap to use, and when it is widely used the
system is stable.
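
In code, the whole algorithm is just this (a sketch of the standard
tit-for-tat strategy, not Axelrod's actual tournament entry):

COOPERATE, DEFECT = "C", "D"

def tit_for_tat(opponent_history):
    """Expect the opponent's next move to repeat their last one,
    and respond in kind."""
    if not opponent_history:
        return COOPERATE          # open by cooperating
    return opponent_history[-1]   # mirror their previous action

# Round by round against a fixed opponent sequence:
opponent = [COOPERATE, COOPERATE, DEFECT, COOPERATE]
history = []
for move in opponent:
    print(tit_for_tat(history), end=" ")   # prints: C C C D
    history.append(move)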

Wei Dai
