From: Scott Brickner <sjb@austin.ibm.com>
To: don@cs.byu.edu
Message Hash: 4a675293882eed3d65318cbfc391e5fb0b2ac5b49dbe9585dbe7833122a9fd3d
Message ID: <9508300101.AA11637@ozymandias.austin.ibm.com>
Reply To: <199508292102.PAA01897@wero>
UTC Datetime: 1995-08-30 01:01:35 UTC
Raw Date: Tue, 29 Aug 95 18:01:35 PDT
From: Scott Brickner <sjb@austin.ibm.com>
Date: Tue, 29 Aug 95 18:01:35 PDT
To: don@cs.byu.edu
Subject: Re: SSL search attacks
In-Reply-To: <199508292102.PAA01897@wero>
Message-ID: <9508300101.AA11637@ozymandias.austin.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain
don@cs.byu.edu writes:
>From: Scott Brickner <sjb@austin.ibm.com>
>
>>The problems in the current system were to be expected of a first
>>attempt. In the future: Only the server assigns segments, only the
>>assignee may report the status of a segment, and after all segments are
>>NAKed we know condition 1 has occurred, at which time we start over,
>>but never assign the same segment to the same searcher. Limit the
>>number of segments which may be outstanding with one searcher at one
>>time as a function of work rate. Deploy redundant servers.
>
>BEAAAT STATE! Push 'em back.. WAAAAAAY BAAAACK.
>(relevant comments follow)
I suppose this does seem like a "statist" protocol, but let's look at
the purpose. The whole idea of the central server was to permit a
*coordinated* attack on the key. We've established that there is a 1/e
cost factor in removing the central server. I just threw out these
items as specific changes which could defend against the identified
attack modes *without* losing the benefit of the central coordination.
In order for the coordinator to be successful, there must be a
mechanism to ensure that someone who knows the key can't break the
system by just reporting "I searched this segment and didn't find it."
This means that the server should consider such statements as
irrelevant, unless it was the *server* who suggested that the user
search the space. This makes the likelihood of the key's segment being
assigned to a "bad guy" pretty low.
The server *could* take unsolicited NAKs "under advisement", and hand
them out at a slower rate than unACKed segments, but this still allows
the "result hoarder" to slow down the attack.
>In regard to both messages, I think that with sequentially allocated
>keyspace an attacker who knows the real key would have trouble getting the
>right segment unless s/he grabbed a big enough piece. If the search is
>restarted, we know something's up. Ensuring that nobody gets to search
>keyspace they searched before would be one improvement.
Hence the prohibition against (as Tim put it) "J. Random User claiming
keyspace".
>A random (instead
>of sequential) allocation _by the keyserver_ (out of unallocated
>piecemeal segments) would also take some work to implement.
I don't think it would really be that hard, if one were willing to go
with less than "cryptographic" strength in the PRNG, which I don't think
is really necessary here.
The trouble is that it's irrelevant to the problem. Random allocation
at the server is equivalent to simply "shuffling" the segments before
assignment, which doesn't affect the rate at which the space is searched.
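A toy simulation (numbers pulled out of the air) makes the point:
whether the server hands out segments in order or shuffled, the
expected number handed out before the key turns up is the same.

import random

def expected_assignments(num_segments, shuffle, trials=20000):
    total = 0
    for _ in range(trials):
        key_segment = random.randrange(num_segments)
        order = list(range(num_segments))
        if shuffle:
            random.shuffle(order)
        total += order.index(key_segment) + 1
    return total / trials

print(expected_assignments(64, shuffle=False))   # ~32.5
print(expected_assignments(64, shuffle=True))    # ~32.5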
>From: tcmay@got.net (Timothy C. May)
>>On the negative side, ostracize or punish those who bite off more than they
>>can chew. This approach is fraught with dangers.
>
>If the search wraps around to catch the UNACK'ed pieces, this type of
>oversight will only slow down the actual discovery of the key. Failure
>to report a found key, though, is a bit different. I would not be opposed
>to having my program report possible hits, with the server being what
>discovers if I've found it or not.
I'm not sure I follow you here. The search wraps around on the unACKed
segments because the work was assigned, but not (as far as the server
knows) completed. This doesn't slow down the discovery of the key,
it just reflects the *real* composite key testing rate as opposed to
the *apparent* rate (which is based on the rate at which the segments
are assigned). The server doesn't consider a segment "done" until it
gets an ACK or NAK.
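For illustration, the wrap-around step might look something like this
(a standalone sketch with made-up names, not the actual server code):
once the fresh pool is empty, an outstanding segment goes to a
different searcher, and the original holder's late report is then
treated as unsolicited.

def recycle_unacked(assigned, searcher):
    # assigned: dict of segment -> searcher currently holding it
    for segment, holder in assigned.items():
        if holder != searcher:
            assigned[segment] = searcher   # reassign to the new searcher
            return segment
    return None   # everything outstanding is already with this searcher

pending = {7: "alice", 9: "bob"}
print(recycle_unacked(pending, "carol"))   # 7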
>>On the positive side, let everyone simply attack the keyspace as they see
>>fit, picking random parts to attack. This should not be "worse" than a
>>factor of several from a "perfectly coordinated" attack. (I haven't spent
>>time calculating this, but my intuition is that a random attack, with
>>overlapping keyspace, is not a lot less efficiently attacked than
>>attempting to arrange for no overlaps...just based on my mental picture of
>>dropping line segments randomly on some interval and figuring coverage of
>>the line segment.)
NB: Elsewhere, Tim provides an argument showing the efficiency of the
random attack to be 1/e worse than the coordinated attack (about 37%).
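A quick back-of-the-envelope check of that figure: if N searchers each
pick one of N segments independently at random, the expected fraction
of the keyspace covered is 1 - (1 - 1/N)^N, which tends to 1 - 1/e,
about 63%; the other ~37% of the effort lands on segments someone has
already searched. A two-line simulation agrees:

import random

N = 100000
covered = {random.randrange(N) for _ in range(N)}
print(len(covered) / N)       # ~0.632
print(1 - (1 - 1.0/N) ** N)   # 0.632..., i.e. 1 - 1/e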
>Why not have a random backup-mode, in case someone does mount a denial of
>service attack. Or employ a combination of the two modes. The machines
>running brloop can search sequentially (out of the middle 50%?) and the
>machines not connected search randomly (out of the outside 50%?). Or,
>venturing further into the I-wonder-who's-gonna-code-this world, log the
>random searches for possible conversion to an exhaustive search later.
>
>It would be nice to be able to hit the emergency button and switch to
>random mode, but currently I don't think there's a need to actually
>use it.
I still don't see how the server can use unsolicited NAKs as anything
other than a nominal reduction in the probability that the key is in
the NAKed segment. Perhaps this does give an idea of a server strategy
to do *just* that, though.
The server maintains a list of the unique users who have reported an
unsolicited NAK for each segment. Requests for work are filled by
randomly selecting segments, with the highest weight going to the
segments with the fewest unsolicited NAKs; the only segments excluded
from selection are those with *solicited* NAKs and those assigned but
not yet answered.
If the weight were inversely proportional to the square of the number
of unsolicited NAKs (plus one), then segments which have a lot of NAKs
would be unlikely to be assigned until near the end of the job.
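A minimal sketch of that selection rule (hypothetical names; the
1/(n+1)^2 weight is just the one suggested above):

import random

def pick_segment(candidates, unsolicited_naks):
    # candidates: segments with no solicited NAK and not currently assigned
    # unsolicited_naks: segment -> number of distinct unsolicited NAKs
    weights = [1.0 / (unsolicited_naks.get(s, 0) + 1) ** 2
               for s in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# A segment with two unsolicited NAKs gets 1/9 the weight of a clean one.
print(pick_segment([1, 2, 3], {3: 2}))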
When a segment with unsolicited NAKs is assigned and the assignee's
solicited report confirms them, further weight might be given to those
users' unsolicited NAKs in the future, reflecting an improvement in
their reputation.
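One crude way to fold that back in (the bump factor is arbitrary, just
for illustration):

reputation = {}   # user -> multiplier for their future unsolicited NAKs

def confirm_unsolicited_nak(user):
    # Called when a solicited report agrees with an earlier unsolicited NAK.
    reputation[user] = reputation.get(user, 1.0) * 1.5

confirm_unsolicited_nak("alice")
print(reputation["alice"])   # 1.5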
The biggest problem with this scenario is that it requires a
potentially *huge* amount of storage on the server.
Another alternative that comes to mind is to hand out segments with
unsolicited NAKs to some of the slower machines. Since their
contribution to the overall search rate is small, there's less of a hit
taken by assigning them potentially redundant work. As they provide
verification of the data reported as unsolicited NAKs, the server's
reputation data is improved, and the search can concentrate even more
on the unACKed segments.
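Roughly, the dispatch decision would be (again just a sketch, with a
made-up threshold):

def choose_pool(searcher_rate, median_rate, fresh, nak_flagged):
    # Fast machines get fresh keyspace; slow machines re-verify segments
    # that carry only unsolicited NAKs, so duplicated work costs little.
    if searcher_rate >= median_rate or not nak_flagged:
        return fresh
    return nak_flagged

# A 2,000 key/s machine vs. a 40,000 key/s median gets the NAK-flagged pool.
print(choose_pool(2000, 40000, fresh=["segment A"], nak_flagged=["segment B"]))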