From: Scott Brickner <sjb@universe.digex.net>
Date: Tue, 5 Dec 95 14:22:26 PST
To: Hal <hfinney@shell.portal.com>
Subject: Re: towards a theory of reputation
In-Reply-To: <199512052029.MAA08717@jobe.shell.portal.com>
Message-ID: <199512052222.RAA07434@universe.digex.net>
MIME-Version: 1.0
Content-Type: text/plain

Hal writes:
>From: Scott Brickner <sjb@universe.digex.net>
>> Analytically, using an escrow agent doesn't change the utility
>> function. It replaces the trading partner's honesty reputation
>> estimate with the escrow agent's (which is presumably higher, or why
>> use them?). This is just a parameter substitution.
>>
>> Whence comes the intractability?
>
>By the "utility function" I was referring to Wei's model in which each
>person has an idea of how much "utility" (a general summation of
>personal value and usefulness) they would get from another person, as a
>function of cost. The utility function takes cost as input and returns
>"utiles" (or whatever) as output. So, with this model, using an escrow
>agent would change the utility function; for a given cost, the utility
>of a person to me would change (say, if the person involved were
>thought to be dishonest, then the presence of escrow agents would make
>him more useful to me). The utility function in Wei's model is a curve
>where the Y axis is utility and the X axis is cost. Changing the
>importance of honesty will change the position and shape of this
>curve.
>
>I think it would be more tractable to have a model in which honesty
>played an explicit part. We might even make assumptions about the
>mathematical relationship between honesty and overall utility - for
>example, that utility to me would be monotonically increasing with
>increased honesty of the other guy.
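
A minimal sketch of such an explicit model, in Python (the multiplicative
form and the particular base curve are placeholder assumptions; the
paragraph above only asks for monotonicity):

    # Sketch: honesty as an explicit, monotone factor in utility.
    def base_utility(cost):
        # Utiles vs. cost from traditional factors (productivity,
        # reliability); the concave shape here is arbitrary.
        return 10.0 * (1.0 - 2.0 ** (-cost))

    def utility(cost, honesty):
        # honesty in [0, 1]; utility is monotonically increasing
        # in the other party's honesty at any fixed cost.
        return honesty * base_utility(cost)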
I had in mind that the utility function was being used by some agent to
determine its course of action. Imagine the agent trying to determine
which of several services to use. It may reasonably be expected to
evaluate the utility function for each one, and choose the one with the
highest utility. "Reputation for honesty" is one parameter to the
function. Price, turnaround, and reputation for quality are others. A
smarter agent could consider "metaservices" which bundle the given
service with an escrow agent. The net effect is to permit the agent to
replace the service's honesty with the escrow agent's for the
evaluation --- regardless of the internals of the model.
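
Concretely, the selection might look something like this (the service
records, fee, and scoring rule are all placeholders, not part of any
model discussed here):

    # Sketch: an agent scores each service and each service bundled
    # with an escrow agent ("metaservice"), then picks the best.
    def score(price, honesty, quality):
        return honesty * quality / price  # placeholder utility

    services = [
        {"name": "svc-a", "price": 5.0, "honesty": 0.4, "quality": 0.9},
        {"name": "svc-b", "price": 7.0, "honesty": 0.9, "quality": 0.8},
    ]
    escrow = {"honesty": 0.99, "fee": 1.0}

    candidates = []
    for s in services:
        # The bare service:
        candidates.append((score(s["price"], s["honesty"], s["quality"]),
                           s["name"]))
        # The same service bundled with escrow: the escrow agent's
        # honesty is substituted for the service's, and the fee is
        # added to the price.  The utility model itself is untouched.
        candidates.append((score(s["price"] + escrow["fee"],
                                 escrow["honesty"], s["quality"]),
                           s["name"] + "+escrow"))

    print(max(candidates))  # the highest-utility choice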
>What I mean is something like this. Let t be the degree of trust
>necessary for a business relationship to be consummated. For t=0, no
>trust is needed, and the relationship is such that neither party takes
>any significant risk - a cash sale, perhaps. For t=1, in some sense
>total trust is needed, and a party can cheat the other with 100% safety.
>
>Now let h(t) be the honesty reputation of a person, so that the utility
>which people expect to receive from them gets multiplied by h(t). For a
>person with a reputation for honesty, h(t) is close to 1 for all t. For a
>person who seems dishonest, h(t) will go from 1 to 0 as t goes from 0 to
>1.
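
One hypothetical shape for h(t) consistent with this (the linear form is
an assumption; anything falling from 1 toward 0 as t rises would do):

    # Sketch: h(t) for trust level t in [0, 1], with a single
    # "dishonesty" parameter d in [0, 1].
    def h(t, d):
        # d near 0: h(t) stays close to 1 for all t (a reputation
        # for honesty).  d near 1: h falls from 1 at t = 0 to 0 at
        # t = 1 (a seemingly dishonest party).
        return 1.0 - d * t

    # Expected utility is then the trust-free utility times h(t, d).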
>
>This is all pretty hand-wavy, but the idea would be to come up with good
>strategies to estimate h(t) from a person's behavior, and good ways to
>choose what kind of behavior one should follow given the value(s) of t
>which are prevalent in the market. This kind of analysis would lead you
>to focus on the importance of the amount of trust needed in a transaction.
>The underlying utility function is based on such traditional factors as
>productivity and reliability. It won't change as we consider the
>variables of our analysis, because we have factored out the honesty and
>trust issues so that they are more explicit. That's the kind of
>direction I was suggesting.
The strategy for estimating h(t) should be wholly independent of the
utility model. Otherwise you'd be effectively unable to make efficient
use of rating services, which do such evaluations as their business.
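
For instance, a rating service could publish something like the estimate
below from observed behavior alone, and any consumer could plug it into
whatever utility model it prefers (the record format and the binning
scheme are placeholders):

    # Sketch: estimate h(t) from (trust_level, was_honest) records,
    # with no reference to any utility model.
    def estimate_h(records, bins=5):
        honest = [0] * bins
        total = [0] * bins
        for t, ok in records:
            i = min(int(t * bins), bins - 1)  # which trust bin
            total[i] += 1
            honest[i] += 1 if ok else 0
        # Fraction of honest outcomes per bin; None where no data.
        return [honest[i] / total[i] if total[i] else None
                for i in range(bins)]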