1998-01-16 - Re: mirroring services, web accounts for ecash

Header Data

From: Adam Back <aba@dcs.ex.ac.uk>
To: tcmay@got.net
Message Hash: ff0bf0c77028a58f18dc1336d53a79d4a2e3bd3e1e961a4a93e492da319a24c2
Message ID: <199801160153.BAA00672@server.eternity.org>
Reply To: <v03102805b0e09252f52e@[207.167.93.63]>
UTC Datetime: 1998-01-16 02:04:14 UTC
Raw Date: Fri, 16 Jan 1998 10:04:14 +0800

Raw message

From: Adam Back <aba@dcs.ex.ac.uk>
Date: Fri, 16 Jan 1998 10:04:14 +0800
To: tcmay@got.net
Subject: Re: mirroring services, web accounts for ecash
In-Reply-To: <v03102805b0e09252f52e@[207.167.93.63]>
Message-ID: <199801160153.BAA00672@server.eternity.org>
MIME-Version: 1.0
Content-Type: text/plain




Tim May <tcmay@got.net> writes:
> Here's a meta-question: Suppose one holds highly secret or sensitive data,
> for which one wants to use an Eternity service to ensure the information is
> not suppressed by some government or other actor.
> 
> Why centralize the data at all?
> 
> Why not just use the "pointer" to the data and offer to provide it?
> 
> Which is what Blacknet was all about. Instead of focussing on a data base,
> focus instead on an untraceable market mechanism.

I am seeing more and more similarities between BlackNet and Eternity
USENET.  The only real differences I can see are that for E-USENET I
have been talking about periodically broadcasting the data, to give
the reader very high security, and about keeping a local copy of the
documents, so that data is pre-fetched and accesses are fast.
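
To make the pre-fetch idea concrete, here is a minimal sketch of the
reader side in Python (the cache layout and function names are made up
for illustration, not taken from any existing implementation):

    import hashlib
    import os

    CACHE_DIR = "eternity-cache"   # local pre-fetched copy of the documents

    def store_broadcast(document_name, data):
        """Save a document as it arrives on the periodic broadcast."""
        os.makedirs(CACHE_DIR, exist_ok=True)
        key = hashlib.sha1(document_name.encode()).hexdigest()
        with open(os.path.join(CACHE_DIR, key), "wb") as f:
            f.write(data)

    def read_document(document_name):
        """Serve a read purely from the local cache.  The reader never
        sends a request onto the network, so an observer learns nothing
        about which documents are being read."""
        key = hashlib.sha1(document_name.encode()).hexdigest()
        with open(os.path.join(CACHE_DIR, key), "rb") as f:
            return f.read()

The security property comes from the broadcast itself: everyone
receives everything, so reading leaks nothing; the local cache just
makes the reads fast.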

Both designs rely heavily on the anonymity provided by remailers.  For
very high-risk traffic, even mixmaster remailers may be risky, given
the various active attacks which could be mounted by a well-resourced
attacker with the ability to selectively deny service.

An eternity service or blacknet information provider could frustrate
the active attacker by having many software agents with different
network connectivity and using these resources unpredictably.
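
A minimal sketch of that unpredictable use of agents (Python; the
agent names and the send_via callback are hypothetical, for
illustration only):

    import random

    # Pool of software agents, each with different network connectivity.
    AGENTS = ["agent-dialup-1", "agent-colo-2", "agent-university-3"]

    def deliver(document, send_via, max_attempts=3):
        """Push a document out via agents chosen at random, retrying
        through a different agent if one is blocked.  An attacker who
        can selectively deny service cannot predict which agent or
        path will be used next."""
        candidates = AGENTS[:]
        random.shuffle(candidates)
        for agent in candidates[:max_attempts]:
            if send_via(agent, document):
                return agent
        raise RuntimeError("all attempted agents failed")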

> (I admit that a system which can provide *A LOT* of data *VERY
> FAST*, and also untraceably or unstoppably, is an attractive goal.
> [...]  The catch is that I can't see how such a system will get
> built, who will run the nodes, how payment will be made to pay for
> the nodes and work, and how traffic analysis will be defeated.)

One big opportunity we have is to subvert the protocols of new
services.  A distributed web replacement with ecash payment for page
hits is, I think, plausible.  Web pages could migrate to meet demand.
You could get a hot-potato effect, where high-risk documents are not
kept in any one place for long -- bits move faster than government
agents and lawyers -- hot data could migrate every 10 minutes.
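
A sketch of such a migration loop, assuming ten-minute hops between a
fixed set of nodes (Python; the node names and the copy_to/delete_from
transfer functions are made up for illustration):

    import random
    import time

    NODES = ["node-a", "node-b", "node-c", "node-d"]   # illustrative only
    MIGRATION_INTERVAL = 10 * 60                       # ten minutes, in seconds

    def hot_potato(document, current_node, copy_to, delete_from):
        """Keep a high-risk document moving so that no node holds it
        for long -- the data outruns the agents and the lawyers."""
        while True:
            time.sleep(MIGRATION_INTERVAL)
            next_node = random.choice([n for n in NODES if n != current_node])
            copy_to(next_node, document)         # replicate to the new home
            delete_from(current_node, document)  # then drop the old copy
            current_node = next_node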

However, distributed web replacements are complex to design, and
whether such a system could be deployed widely is an open question.

> And I think implementing the slower-but-no-breakthroughs approach (Blacknet
> or variations) has some advantages. It may be many years before we need to
> be in the corner of the graph that is "large amounts of data--very fast
> retrieval--very secure."
>
> Most candidates for untraceable/secure storage and retrieval are NOT in
> this corner, yet. (Kiddie porn may be, but whistleblowing and scientific
> information are not.)

What about large-scale software piracy?  This could consume serious
amounts of bandwidth.  It seems to be intermediate risk: watch even 30
seconds of traffic in #warez and you will see plenty of commercial
software changing hands.

Perhaps the new draconian US software copyright law which the large
software corps purchased from the politicians will move software
piracy towards the higher risk end.

Would the world be better off without software copyright?  I tend to
think so.

Adam