1995-08-28 - Re: SSL trouble

Header Data

From: Damien.Doligez@inria.fr (Damien Doligez)
To: cypherpunks@toad.com
Message Hash: a8037b18037c853bf93ae1917fbae3f8ae0735e4a74da2e198dc189713c422e5
Message ID: <9508281310.AA21354@couchey.inria.fr>
Reply To: N/A
UTC Datetime: 1995-08-28 13:11:46 UTC
Raw Date: Mon, 28 Aug 95 06:11:46 PDT

Raw message

From: Damien.Doligez@inria.fr (Damien Doligez)
Date: Mon, 28 Aug 95 06:11:46 PDT
To: cypherpunks@toad.com
Subject: Re: SSL trouble
Message-ID: <9508281310.AA21354@couchey.inria.fr>
MIME-Version: 1.0
Content-Type: text/plain


>From: Piete Brooks <Piete.Brooks@cl.cam.ac.uk>

>We were using ALPHA code when we started ....

I didn't realise that.


>(4) is still applicable isn't it ?
>What tells people to stop, or do they go on for ever ?

A message in a newsgroup, on a mailing list, or on a web page.  Even if
someone mounts a denial-of-service attack against that announcement, it
will only make people continue the search uselessly; it won't prevent
the key from being found.


>>The main drawback of the random search is that the expected running "time"

>where "expected" is some loose average .....

Nope.  It's what I get when I do the math (basic probability theory)
to find the expected running time.  But I could be wrong.  I'll try to
write it in TeX and put it on my web page.
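Until then, here is a rough sketch of the argument, in my own notation,
assuming every guess is an independent, uniform pick from a keyspace of
size N:

  P(\text{first hit on guess } k) = (1 - 1/N)^{k-1} \cdot (1/N)
  E[\text{guesses}] = \sum_{k \ge 1} k \, (1 - 1/N)^{k-1} (1/N) = N

whereas a single exhaustive sequential search, with the key equally
likely to sit at any of the N positions, takes

  E[\text{guesses}] = \sum_{k=1}^{N} k/N = (N+1)/2 \approx N/2

which is where the factor of two below comes from.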


>>I suspect that sequential searching from a random starting point would be
>>much worse in the case of many independent searchers.

>Convince me (please) ....

That would be hard because I've been thinking about it, and I'm less
and less convinced myself.


>> In conclusion, I think random searching is the way to go.
>It has its advantages -- yes. Did you use it for Hal1 ?  :-))

No, but I had few machines and fast connections (and even then, I did
have some network problems).  But if you think sequential searching
can work, let's do it.  I don't think we have to worry about
deliberate attacks for the moment, and the factor of two is
significant.  My previous message was based on the assumption that it
would be hard to get rid of the server overload.
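To make that factor of two concrete, here is a toy simulation (Python,
over a small keyspace instead of the real 40-bit one; the names are
mine, not anything from the actual client code):

import random

KEYSPACE = 2 ** 16   # toy keyspace; the real search is over 2**40 keys
TRIALS = 200         # number of simulated searches to average over

def random_search(target):
    # Guess keys independently and uniformly until the target is hit.
    guesses = 0
    while True:
        guesses += 1
        if random.randrange(KEYSPACE) == target:
            return guesses

def sequential_search(target):
    # Scan the keyspace in order from 0.  A server handing out disjoint
    # chunks to clients behaves like this in aggregate, so the scan
    # finds a uniformly placed key after target+1 guesses.
    return target + 1

def average(search):
    total = 0
    for _ in range(TRIALS):
        target = random.randrange(KEYSPACE)
        total += search(target)
    return total / TRIALS

print("random search, average guesses:    ", average(random_search))
print("sequential search, average guesses:", average(sequential_search))
print("keyspace size N:                   ", KEYSPACE)
# Expect roughly N for random search and N/2 for sequential search.

The averages should come out near N and N/2 respectively.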

Maybe we should use random searching as a fallback mode in case of
network problems.  It cannot hurt, except that it makes the programs
more complex.

-- Damien
