1995-09-01 - Re: SSL search attack

Header Data

From: rkw@dataplex.net (Richard Wackerbarth)
To: Piete Brooks <Piete.Brooks@cl.cam.ac.uk>
Message Hash: e03e060697f6846296d9e779bcd9275b72679a648f2b07ffc0e198315f0544e4
Message ID: <v02130502ac6cddb40cbe@[199.183.109.242]>
Reply To: N/A
UTC Datetime: 1995-09-01 19:47:20 UTC
Raw Date: Fri, 1 Sep 95 12:47:20 PDT

Raw message

From: rkw@dataplex.net (Richard Wackerbarth)
Date: Fri, 1 Sep 95 12:47:20 PDT
To: Piete Brooks <Piete.Brooks@cl.cam.ac.uk>
Subject: Re: SSL search attack
Message-ID: <v02130502ac6cddb40cbe@[199.183.109.242]>
MIME-Version: 1.0
Content-Type: text/plain


I wrote:
>>> Assuming that you are multi-threaded--- Simply run two "workers" on the
>>> same machine. If there are delays in getting keys assigned, the two will
>>> soon get out of phase and keep the cpu busy.
>> I kind of like that idea...
>
To which Piete Brooks replied:
>I thought of that, but:
>1) for the same server load, it doubles the number of unACKed segments
>2) if process A is lagging process B, then when process B finishes and is idle
>   waiting for the server, process A will run faster and thus reduce the lag.
>   This will make the processes drift into phase.
>   I'm not convinced one way or the other.

But you forgot that when process A finishes, process B will run faster and
re-establish its lead. The two effects are symmetric, so the workers should
stay out of phase on average.
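
To make the idea concrete, here is a rough sketch (Python purely for
illustration; fetch_assignment(), search_segment(), and the timings are
hypothetical stand-ins for whatever the real client does): while one worker
blocks on the server, its twin keeps the CPU busy.

    import threading, time

    def fetch_assignment(name):
        time.sleep(0.5)              # stands in for the server round trip
        return "segment-for-" + name

    def search_segment(seg):
        time.sleep(2.0)              # stands in for the CPU-bound key search
        return None                  # key not found in this segment

    def worker(name):
        for _ in range(3):
            seg = fetch_assignment(name)   # may block on the network
            result = search_segment(seg)   # runs while the twin is blocked
            print(name, seg, result)

    threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
    for t in threads: t.start()
    for t in threads: t.join()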

The real question is: what parameter do we need to minimize?

Assuming that the key is uniformly distributed over the keyspace, what we
need to minimize is the expected amount of the space that gets searched by
two or more workers.
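
To put a rough number on that (my illustration, not anything measured): if
w workers each picked one of n segments independently at random instead of
being handed disjoint ones, the expected fraction of duplicated effort is
easy to work out.

    # E[distinct segments] = n * (1 - (1 - 1/n)**w); anything a worker
    # searches beyond that is repeated effort. Coordinated, disjoint
    # assignment keeps this at zero until the space is exhausted.
    def wasted_fraction(n, w):
        distinct = n * (1.0 - (1.0 - 1.0 / n) ** w)
        return 1.0 - distinct / w

    print(wasted_fraction(n=2**20, w=1000))   # ~0.0005 while n >> w
    print(wasted_fraction(n=2**10, w=1000))   # ~0.36 once w approaches n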

As long as we never reach the point where all of the keys have been
distributed, it does not matter how many are assigned at a time or by what
method. (Ignoring "fairness" with respect to the prize.)

The assignments only become important as we exhaust the space and must
prepare to make another pass. Note that we never got to that point on
challenge 2: the block containing the key was processed on its first pass
and the key was found.

In that regard, it is probably "best" to first attempt to identify those
space assignments that have been lost. If we associate with each
assignment, either explicitly or by inference, an expected completion time,
the segments which are most overdue are certainly good candidates for
having been lost.
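
That bookkeeping is simple enough to sketch (names illustrative only):
record when each segment was handed out and how long it should take, then
rank the outstanding ones by lateness.

    import time

    outstanding = {}   # segment id -> (issued_at, expected_seconds)

    def issue(seg_id, expected_seconds):
        # expected_seconds may be explicit, or inferred from the
        # worker's past rate on earlier segments
        outstanding[seg_id] = (time.time(), expected_seconds)

    def completed(seg_id):
        outstanding.pop(seg_id, None)

    def reissue_candidates(now=None):
        # most-overdue segments first: the best bets for "lost"
        now = time.time() if now is None else now
        overdue = [(now - issued - expected, seg)
                   for seg, (issued, expected) in outstanding.items()
                   if now - issued > expected]
        return [seg for lateness, seg in sorted(overdue, reverse=True)]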

Based on our previous try, and assuming that we would not have extremely
different resources available, the master allocator would not NEED to get
reports back for, say, the first 12 hours.
That is not to imply that reports should be delayed that long, only that
there is considerable opportunity to have a hierarchy of intermediate
collectors with plenty of time to adjust their allocation algorithms to
match the ability of their workers. Later, more rapid response would be
needed. When the required turnaround becomes too short for the "little
guys", they could be sloughed off onto the next problem, leaving the "big
boys" to clean up the last pieces.
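
One way an intermediate collector could adapt (again a hypothetical sketch,
not the actual allocator): size each hand-out so that one segment is a
roughly fixed amount of wall-clock work, so slow workers get small pieces,
fast workers get large ones, and reports flow upstream in big consolidated
chunks.

    TARGET_SECONDS = 600.0    # aim for ~10 minutes of work per segment

    rates = {}                # worker id -> measured keys/second

    def record_finish(worker_id, keys_searched, elapsed_seconds):
        rates[worker_id] = keys_searched / elapsed_seconds

    def next_segment_size(worker_id, default_rate=10000.0):
        rate = rates.get(worker_id, default_rate)
        return max(1, int(rate * TARGET_SECONDS))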

Of course, the "next" problem might be re-solving the same one, because the
correct answer was incorrectly reported as not found.

As I see it, except perhaps for the fastest machines, there is little
reason to allocate more than one segment at a time to the workers. Their
supervisor can quickly respond to requests for work and consolidate the
results to be passed up the chain. The only reason I can see to separate
the acks from the assignments is to allow "memory-less" nodes, and that is
certainly unnecessary if there is a web of supervisor servers.
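
To spell out that last point (a hypothetical handler, not the real
protocol): a supervisor that remembers what it issued can fold the ack for
a finished segment into the same round trip that hands out the next one;
only a "memory-less" node would need them as separate messages.

    pending = set()                   # issued but not yet reported
    unassigned = list(range(1000))    # hypothetical segment ids

    def record_result(seg, result):
        pass                          # stub: batch results for upstream

    def handle_request(worker_id, finished=None, result=None):
        if finished is not None:
            pending.discard(finished)         # the implicit ack
            record_result(finished, result)   # consolidate for upstream
        if not unassigned:
            return None                       # space exhausted; next pass
        seg = unassigned.pop()
        pending.add(seg)
        return seg                            # new assignment, same reply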

I have a lot more thoughts that I will defer to the next missive.

Gotta' run...

----
Richard Wackerbarth
rkw@dataplex.net






