From: Loren James Rittle <rittle@comm.mot.com>
Date: Wed, 3 Jan 1996 17:33:08 +0800
To: cypherpunks@toad.com
Subject: Re: NYT's _Unmuzzling the Internet_
Message-ID: <9601030920.AA05852@supra.comm.mot.com>
MIME-Version: 1.0
Content-Type: text/plain
-----BEGIN PGP SIGNED MESSAGE-----
Jaron Lanier wrote in _The New York Times_, January 2, 1996, p. A15:
``The other day, I came up with a way to easily evade the
proposed American restrictions. My simple idea would be to
create a computer program, dubbed `Unmuzzle,' which would
deposit incomprehensible fragments of any forbidden
material in different foreign computers (though maybe not
Germany's). The contraband communication would only be
reassembled into a coherent whole when downloaded in the
home of the user back in the United States, where it would
become protected speech, as in any other medium.''
Is this the state to which the Internet must evolve to withstand
attack from the possible near-future legislation contained within the
current draft of the Telecommunications Reregulation Bill? The
Internet technology that was designed to withstand network outages by
routing around the problem must now, perhaps, also be designed to
allow information to be split for storage and transmission, so that it
can navigate around mere political insanity. I know: "Cypherpunks
write code!" But something seems amiss with the technical solution
proposed in the opinion editorial quoted above, and it raises many
questions in my mind.
At first, the Jaron proposal sounds to me like an interesting thought
experiment but a total waste of resources, both CPU and network
bandwidth. The unconstitutional Bill must be defeated in Congress, by
that Presidential veto pen Clinton has become so fond of using
recently, or in the court system, if absolutely necessary. If none of
that happens, then surely technology can be used to route around this
"political" problem. It just seems a shame to have to expend
technical effort and valuable network resources playing games to meet
the letter of a law that so clearly breaks the spirit of the
Constitution, should it be signed into Law and later found, in a
Supreme Court battle, to "pass constitutional muster," as they like
to say.
Under my model, which may be different from Jaron's, I assume the raw
data is useless without a recipe, or algorithm, if you prefer. Jaron
doesn't say how the ``incomprehensible fragments of any forbidden
material'' are known to be joinable or how they are to be joined, so
I invented this as the missing glue to discuss his idea in this forum.
I assume a recipe would be a new base item fetchable via a standard
URL. It would disclose the location of raw data sets, how they should
be joined and the resultant data-type of the information, if the
recipe were to be followed. In this way, it might be possible to work
a decoder directly into Mosaic/NetScape/HotJava/<name your favorite
WWW browser here>. (Perhaps a self-imposed rating could be included
within the recipe as additional information bits. Or perhaps the
recipe could be signed by one or more reviewers whom end-users may
choose to trust. These are mentioned only as side features; they do
not affect the basic operation of circumventing the letter of the
proposed Law.)
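To make the mechanics concrete, here is a rough sketch, in Python, of
what such a recipe and a browser-side decoder might look like. The
field names, the example warehouse URLs, and the choice of XOR as the
mixing method are purely my own assumptions for illustration, not
anything Jaron specified:

    # A minimal sketch of the "recipe" idea; field names, URLs, and the
    # XOR mixing method are my own assumptions for illustration.
    import urllib.request
    from functools import reduce

    recipe = {
        "content-type": "text/plain",   # data-type of the joined result
        "mixing-method": "xor",         # how the raw pieces are combined
        "data-sets": [                  # URL-style pointers to the raw pieces
            "http://warehouse-a.example/chunk1.bin",
            "http://warehouse-b.example/chunk2.bin",
            "http://warehouse-c.example/chunk3.bin",
        ],
    }

    def follow_recipe(recipe):
        """Fetch every raw data set named in the recipe and join them."""
        pieces = [urllib.request.urlopen(url).read()
                  for url in recipe["data-sets"]]
        if recipe["mixing-method"] == "xor":
            # XOR the equal-length pieces byte by byte to recover the result.
            return bytes(reduce(lambda a, b: a ^ b, group)
                         for group in zip(*pieces))
        raise ValueError("unknown mixing method")

Note that the recipe itself carries no forbidden material; it only
names where the pieces sit and how to fold them back together.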
Back to those questions, and some partial solutions.
For instance, if one provides, in a distributed fashion, data sets ---
which taken apart are not indecent in anyone's mind since they appear
completely random --- and a recipe to generate information from the
data sets --- which may construct something which might be considered
indecent --- does anyone violate any portion of the insane Indecent
Bill, if passed by Congress and signed into Indecent Law by the
President? Does the person who set up the information split get in
trouble? Do the people pulling in recipes and various pieces of
random-looking data sets get in trouble? Do the data set warehousers
get in trouble, even if they had no direct way to know that the raw
pieces of data they stored belonged to something eventually seen to
be indecent once a recipe was followed? Do the recipe warehousers get
in trouble, since they could have known what might be created if all
data sets were obtained and joined as prescribed by the recipe?
What if end-user client software were taught to do all the steps
required to follow a recipe automatically? What if, as in the last
question, the user were explicitly asked before any recipe was
followed to completion? I think the Court would be hard-pressed to
find a difference between distribution of something indecent and
distribution of a recipe known to create something indecent from raw
data. But what if recipes were used for everything, not just items
thought to range from borderline indecent to totally obscene? Under
this assumption, if it could be shown that recipe and raw data
warehousers had no knowledge of each other's contents, they could do
no self-policing. It appears
that raw data warehousers have "no knowledge" of recipe warehousers
as long as the raw data contains no reference to the recipe. The
recipe warehousers appear to have no such luck, since the recipes
they store contain URLs pointing to the raw data chunks required to
form coherent information. Recipe warehousers could follow a recipe
to "check"
content.
Finally, on a different tangent, why do the raw data pieces have to be
stored on different machines in different countries, if by themselves
they are unreadable? Since I believe it is the recipe, not the
contributing raw data, that presents the problem, it seems that the
recipe must be the piece stored outside the U.S.
For example, only the recipe need be stored abroad in a nice little
computer in the Netherlands. Assuming the recipe included only
URL-style pointers to the data sets' distributed locations and the
mixing method, a recipe should be quite small. Imagine the Government
trying to explain to a jury that random-looking transmissions, taken
together in some exotic manner --- as described by a file fetched from
outside the U.S. --- equal some filthy text or image or some other
unpopular political speech. Using these rules, I could probably find
three passages of text in the hundreds of thousands of pages composing
the U.S. Code that, when XOR'd together, generate something obscene.
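To see why each stored piece is unreadable on its own, here is a
rough sketch of the splitting step, again assuming XOR as the mixing
method; all of the specifics are my own invention:

    import os

    def xor_all(chunks):
        """XOR a list of equal-length byte strings together."""
        out = bytes(len(chunks[0]))
        for c in chunks:
            out = bytes(a ^ b for a, b in zip(out, c))
        return out

    def split(message, n=3):
        """Split a message into n data sets; the first n-1 are pure random
        noise, and the last is the message XOR'd with all of them, so it
        too looks like noise without the others."""
        shares = [os.urandom(len(message)) for _ in range(n - 1)]
        last = bytes(m ^ x for m, x in zip(message, xor_all(shares)))
        return shares + [last]

XOR'ing all n pieces back together with xor_all() recovers the
original message; any strict subset of them is statistically
indistinguishable from random bytes.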
To make the Government's job even harder before a jury, what if the
recipe fetched from the foreign country always generated the First
Amendment text when followed directly? Imagine the Government's
surprise when the Defense shows that a recipe involving the exact
same information sets yields, perhaps, the text of the First
Amendment, the Indecent Bill itself, or another interesting
historical document. What if certain implementations of the software
that decodes these recipes could infer another recipe, implicitly
encoded within the data sets fetched to follow the explicitly given
one? Since the information required to regenerate the First Amendment
text will always have been pulled, in its entirety, an external
observer must conclude that the receiver might simply have followed
the directions in the recipe leading to its generation rather than
any hidden, inferred recipe for the questionably indecent text or
image.
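A rough sketch of how such a dual-recipe arrangement could be built,
again assuming XOR mixing over three data sets; the construction and
all names here are mine, not Jaron's:

    import os

    def make_data_sets(cover, hidden):
        """Produce three data sets such that
             set0 ^ set1 ^ set2 == cover   (the explicit, published recipe)
             set0 ^ set1        == hidden  (a recipe a decoder could infer)
        Fetching everything the explicit recipe names therefore also
        fetches everything the inferred recipe needs."""
        assert len(cover) == len(hidden)
        set0 = os.urandom(len(cover))                      # pure noise
        set1 = bytes(a ^ b for a, b in zip(set0, hidden))  # noise relative to set0
        set2 = bytes(a ^ b ^ c for a, b, c in zip(set0, set1, cover))
        return [set0, set1, set2]

An observer who logs the fetches sees exactly the traffic that
following the First Amendment recipe would produce; whether the
receiver also applied the inferred recipe is invisible on the wire.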
That sounds like reasonable doubt to me, regardless of the facts of
the case. The Defense can always argue that the client was just
trying to express the First Amendment in a novel manner, which
happens to be true in more ways than one in this case. :-)
The Jaron proposal does have some major benefits, at least as I have
framed the idea. These need to be mentioned explicitly, in case the
important side goal was too subtly expressed above. I reverse the
location of the bulk of the data required to store the real
information. The recipe, which is assumed to be small with respect to
the size of the raw data, is stored in any Internet-friendly location
(i.e. most of the world except the U.S. after the CDA passes) and
pulled into the U.S. as required. The raw data is stored within the
U.S., randomly spread among data set servers. When arranged in this
manner, the bulk of the data continues to be stored as it would have
been before stupid U.S. regulations took effect.
This final analysis might sound U.S. centric. It was not meant to be.
I assume that any information replication scheme that might have been
used could continue to be used. For example, one recipe might exist
for each regional replication that existed. Hopefully, the recipes
themselves would be replicated in many Internet-friendly locations.
I welcome informed legal comments on this modified proposal.
Regards,
Loren
-----BEGIN PGP SIGNATURE-----
Version: 2.6.2
iQCVAwUBMOpKUv8de8m5izJJAQH+cgP+MDO6TK5s1MkkiWcvSKP9wwoVn0VqMM+U
hPRGQJ2MjL3s7r9mPTqlbnPOllI4FO6rBQt5vqmzMnemFG1k94REvmGHuSMxZ7xV
zoqYcvZzxdG2KwKBiLWiilirA0IrDV1MQJ4i7xMYYdOoOoeN1VnUbgHW9iWquwKT
tIpWzbFFGO0=
=m0bM
-----END PGP SIGNATURE-----