From: Adam Back <aba@dcs.ex.ac.uk>
Date: Mon, 12 Jan 1998 07:17:05 +0800
To: elitetek@usa.net
Subject: Re: (eternity) mailing list and activity
In-Reply-To: <34B9327A.2AAEEF87@ix.netcom.com>
Message-ID: <199801112144.VAA00285@server.eternity.org>
MIME-Version: 1.0
Content-Type: text/plain
[I've Cc'ed this to cypherpunks from discussion on eternity@internexus.net]
Jeff Knox <trax@ix.netcom.com> writes:
> I do have a few questions. I am curious as to how I would go about
> creating a domain service. If I wanted to start a domain service to
> serve a domain I create like test.dom, how would I run a server to
> process the requests, and how would all the other servers around
> the world cooperate?
Firstly, note that there are three systems going by the name of
eternity service at present. Ross Anderson coined the term, and you
should be able to find his paper describing his eternity service
somewhere on:
http://www.cl.cam.ac.uk/users/rja14/
Next, my eternity prototype, which I put a bit of time into hammering
out last year in perl, borrows Ross's name of "eternity service" but
differs in design: it simplifies things by using USENET as the
distributed database / distribution mechanism rather than constructing
one with purpose-designed protocols:
http://www.dcs.ex.ac.uk/~aba/eternity/
And there is Ryan Lackey's Eternity DDS (where DDS stands for, I
presume, Distributed Data Store?). I am not sure of the details of his
design beyond that he is building a market for CPU and disk space, and
building on top of an existing database to create a distributed
database accessible as a virtual web space (amongst other `views'). He
gave this URL in his post earlier today:
http://sof.mit.edu/eternity/
To answer your question as applied to my eternity design, I will
describe how virtual domains are handled in my eternity server design,
which is based on USENET as a distribution medium. A prototype
implementation and several operational servers are pointed to at:
http://www.dcs.ex.ac.uk/~aba/eternity/
In this design the virtual eternity domains are not based on DNS.
They are based on hash functions.
Really, because it is a new mechanism for accessing information, URLs
should perhaps have the form:
eternity://test.dom/
Here a separate distributed domain name lookup database would be
defined, and a distributed service utilised to host the document space.
However, because it is more difficult to update browsers to cope with
new services (as far as I know; if anyone knows otherwise, I'd like
pointers on how to do this), I have represented URLs of this form
with:
http://test.dom.eternity/
This would enable you to implement a local or remote proxy which
fetches eternity web pages, because you can (with Netscape at least)
direct proxying on a per-domain basis. Using the non-existent top
level domain .eternity, you can therefore redirect all traffic for
that `domain' to a local (or remote) proxy which implements the
distributed database lookup based on the URL, and have normal web
access continue to function.
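
To illustrate the idea, here is a toy local proxy in python (just a
sketch, not part of the implementation; the port number, placeholder
response, and python itself are my choices for illustration):

  from http.server import BaseHTTPRequestHandler, HTTPServer
  from urllib.parse import urlparse

  class EternityProxy(BaseHTTPRequestHandler):
      # When a browser is configured to use a proxy it sends the
      # absolute URL in the request line, so self.path is e.g.
      # "http://test.dom.eternity/".
      def do_GET(self):
          host = urlparse(self.path).hostname or ""
          if host.endswith(".eternity"):
              # Here the distributed database lookup would reconstruct
              # the document; a placeholder reply is returned instead.
              body = b"(document reconstructed via eternity lookup)\n"
              self.send_response(200)
              self.send_header("Content-Type", "text/plain")
              self.end_headers()
              self.wfile.write(body)
          else:
              self.send_error(502, "sketch only handles *.eternity URLs")

  if __name__ == "__main__":
      HTTPServer(("localhost", 8080), EternityProxy).serve_forever()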
My current implementation is based on CGI binaries, so it does not work
directly as an eternity proxy. Rather, lookups are of the form:
http://foo.com/cgi-bin/eternity.cgi?url=http://test.dom.eternity/
The virtual eternity URLs (URLs of the form http://*.eternity/*) are
converted into database index values internally by computing the SHA1
hash of the URL, for example:
SHA1( http://test.dom.eternity/ ) = d7fa999054ba70e1ed28665938299061b519a4f7
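
For concreteness, a minimal sketch of the index computation (in python
rather than the perl of the prototype; the exact normalisation of the
URL before hashing is an assumption):

  import hashlib

  def eternity_index(url):
      # Database index = hex encoded SHA1 of the virtual URL.
      # Assumption: the URL is hashed exactly as written, with no
      # canonicalisation; the actual implementation may differ.
      return hashlib.sha1(url.encode("ascii")).hexdigest()

  print(eternity_index("http://test.dom.eternity/"))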
The database is USENET news, and a distributed archive of it; the
database index is stored in the Subject field of the news post.
The implementation optionally keeps a cache of news articles indexed
by this value on eternity servers. If the required article is not in
the cache, it is searched for in the news spool.
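
A rough sketch of that lookup order (the cache layout, spool path, and
Subject matching below are invented for illustration, not taken from
the implementation):

  import os

  CACHE_DIR = "/var/spool/eternity-cache"    # hypothetical layout
  NEWS_SPOOL = "/var/spool/news/articles"    # hypothetical spool path

  def lookup(index):
      # 1. Try the local cache, keyed by the SHA1 index of the URL.
      cached = os.path.join(CACHE_DIR, index)
      if os.path.exists(cached):
          with open(cached, "rb") as f:
              return f.read()
      # 2. Otherwise scan the news spool for an article whose Subject
      #    header carries the index (assumes the index appears directly
      #    after "Subject: "; the real header format may differ).
      for name in os.listdir(NEWS_SPOOL):
          with open(os.path.join(NEWS_SPOOL, name), "rb") as f:
              article = f.read()
          if ("Subject: " + index).encode() in article:
              return article
      return None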
Currently this is where it stops, so articles would have to be
reposted periodically to remain available if they are either not
being cached or are flushed from the cache. However, at present the
cache is never flushed, because the cache replacement policy is not
yet implemented.
An improvement over this would be to add a random cache flushing
policy and have servers serve the contents of their caches to each
other, forming a distributed USENET news article cache.
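
Something along these lines, say (purely illustrative; the size limit
and policy details are made up):

  import os, random

  def flush_randomly(cache_dir, limit_bytes):
      # Evict uniformly random cache entries until the total size of
      # the cache directory fits under the limit.
      entries = [os.path.join(cache_dir, f) for f in os.listdir(cache_dir)]
      total = sum(os.path.getsize(p) for p in entries)
      while total > limit_bytes and entries:
          victim = random.choice(entries)
          entries.remove(victim)
          total -= os.path.getsize(victim)
          os.remove(victim)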
Another option would be to search public access USENET archives such
as dejanews.com for the requested article.
The problem is really that we would prefer not to keep archives of
articles directly on an open access server, because the server's disks
could be seized, and the operator could be held liable for the
presence of controversial materials.
Wei Dai suggested that documents should be secret split in a redundant
fashion so that, say, 2 of 5 shares are required to reconstruct the
document. If the shares are distributed across different servers,
this ensures that no single server directly holds the information.
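
As a sketch of the kind of scheme meant, here is a Shamir style 2-of-5
split in python over a single integer (a document would be split chunk
by chunk); this is one possible realisation, not necessarily the one
Wei had in mind:

  import secrets

  PRIME = 2**127 - 1   # field modulus, large enough for small chunks

  def split(secret, k=2, n=5):
      # Shares are points on a random degree k-1 polynomial whose
      # constant term is the secret; any k points reconstruct it.
      coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
      return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
              for x in range(1, n + 1)]

  def combine(shares):
      # Lagrange interpolation at x = 0 recovers the constant term.
      total = 0
      for i, (xi, yi) in enumerate(shares):
          num, den = 1, 1
          for j, (xj, _) in enumerate(shares):
              if i != j:
                  num = (num * -xj) % PRIME
                  den = (den * (xi - xj)) % PRIME
          total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
      return total

  shares = split(123456789)
  assert combine(shares[:2]) == 123456789   # any 2 of the 5 shares suffice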
Ross describes ideas on how to ensure that a server would not know
which shares it is holding in his paper, which can be found on his web
pages, the URL being linked from:
http://www.cl.cam.ac.uk/users/rja14/
Adam