1997-04-08 - some arguments for privacy

Header Data

From: Wei Dai <weidai@eskimo.com>
To: Cypherpunks <cypherpunks@toad.com>
Message Hash: 57ce0381cb88661ded041390c8be4dca80c6688f6c4c1f06f3ba0f159af3c35f
Message ID: <Pine.SUN.3.96.970408020057.18485A-100000@eskimo.com>
Reply To: N/A
UTC Datetime: 1997-04-08 09:15:12 UTC
Raw Date: Tue, 8 Apr 1997 02:15:12 -0700 (PDT)

Raw message

From: Wei Dai <weidai@eskimo.com>
Date: Tue, 8 Apr 1997 02:15:12 -0700 (PDT)
To: Cypherpunks <cypherpunks@toad.com>
Subject: some arguments for privacy
Message-ID: <Pine.SUN.3.96.970408020057.18485A-100000@eskimo.com>
MIME-Version: 1.0
Content-Type: text/plain


A couple of weeks ago I asked for some arguments in favor of privacy.  I
pointed out that one person's increased privacy imposes negative
externalities on others by reducing their available information.  I wanted
to know why more privacy might be beneficial to society despite this
consideration.  I didn't really get the answers I wanted, probably because
I wasn't clear about the kind of arguments I had in mind.  Anyway, here I
give some arguments of my own, which I hope offer new perspectives on this
issue. 

Privacy as Insurance

Suppose you are looking for a job.  It seems reasonable to argue that if
you are a better-than-average worker, you would be able to get a better
offer if you had less privacy, because potential employers would be
better able to infer your abilities from your past history.  On the
other hand, less privacy would hurt you if you are a worse-than-average
worker.  If you don't yet know your own abilities, you would prefer more
privacy as insurance against your own potential deficiencies: being paid
as an average worker is a safer bet than gambling on where you actually
fall in the distribution.  If that doesn't seem realistic, consider how
the argument might apply to your children.  This line of reasoning also
explains why people are troubled by genetic screening.  Thus privacy
might increase social welfare by providing a sort of social insurance.
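
To see the intuition in numbers, here is a minimal sketch (the two
ability levels, the wage figures, and the square-root utility function
are my own illustrative assumptions, not part of the argument above).  A
risk-averse worker who doesn't yet know their own type compares being
paid their true productivity against being paid the pooled average:

    # Toy model of the "privacy as insurance" argument.
    # All numbers are hypothetical; u(w) = sqrt(w) is concave, which
    # is what makes the worker risk averse.
    from math import sqrt

    low, high = 40000, 80000     # hypothetical productivities/wages
    p_low = p_high = 0.5         # worker doesn't yet know which applies

    def u(wage):
        return sqrt(wage)        # concave utility => risk aversion

    # No privacy: employers learn your type and pay your productivity.
    eu_no_privacy = p_low * u(low) + p_high * u(high)

    # Privacy: employers can't tell types apart, so everyone gets the
    # average (pooled) wage.
    eu_privacy = u(p_low * low + p_high * high)

    print("E[u] without privacy: %.1f" % eu_no_privacy)   # ~241.4
    print("E[u] with privacy:    %.1f" % eu_privacy)      # ~244.9

By Jensen's inequality the pooled outcome always wins for a concave
utility function, which is exactly the insurance effect described above.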

Privacy as Restriction on Signaling

"Signaling" is a term used by game theorists to describe the use of
publicly observable actions to provide information to others about one's
private attributes.  The best example comes from biology, where peacocks
grow extravagant tails to signal their genetic fitness to females.
Clearly signals must be costly; otherwise they wouldn't be convincing.
They are often also wasteful, as in the peacock example.  (As a side
note, the deposit solution to the junk mail problem I talked about some
days ago is an example of non-wasteful signaling.)  Privacy reduces the
range of actions one can use as signals.  This would increase social
welfare if the wastefulness of the signals exceeds the benefit they
provide in the form of useful information.  Consider a possible future
where every room in every house is wired with a camera that continuously
broadcasts to the Internet.  Life would certainly be very uncomfortable
in this future, as every trivial action would have to be carefully
considered in order to preserve one's reputation.
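
A back-of-the-envelope comparison (every number below is a hypothetical
assumption of mine) shows when restricting signals is a net gain: add up
the resources the "high" types burn on the signal and compare that to
the value of the information the signal reveals.

    # Hypothetical welfare comparison for costly signaling.
    n_workers     = 1000
    share_high    = 0.3      # fraction who signal to separate themselves
    signal_cost   = 500.0    # resources each signaler burns on the signal
    matching_gain = 120.0    # per-worker value of employers knowing types

    total_waste = n_workers * share_high * signal_cost   # 150,000
    total_gain  = n_workers * matching_gain              # 120,000

    if total_waste > total_gain:
        print("restricting the signal (more privacy) raises net welfare")
    else:
        print("the information is worth the waste")

With these particular numbers the waste exceeds the informational
benefit, so a restriction on signaling (more privacy) would raise
welfare; with cheaper signals or more valuable information the
comparison flips.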

Possible Benefit of Non-Privacy Limited

This is more an argument for privacy technology than for privacy per se.
Suppose that privacy-invading technology becomes much cheaper than
privacy-enhancing technology.  Given the arguments above, it seems
inevitable that governments will pass laws to restrict the distribution
of certain kinds of information about individuals.  But of course this
will not keep the information out of the hands of those governments
themselves and other resourceful organizations.  Thus any possible
benefit of decreased privacy in the form of market efficiency would be
severely limited, since only a few market players would have improved
information.  This benefit would be easily outweighed by the harm in the
form of governments' increased coercive power.






