1996-04-15 - Re: RSA-130 Falls to NFS - Lenstra Posting to sci.crypt.research

Header Data

From: "Vladimir Z. Nuri" <vznuri@netcom.com>
To: Bill Stewart <stewarts@ix.netcom.com>
Message Hash: 76a272f1c40bdfd9077820c266ca5b196e2b9d6c3fc7c1a2a715ac457d8fc4af
Message ID: <199604151824.LAA07600@netcom6.netcom.com>
Reply To: <199604150519.WAA09619@toad.com>
UTC Datetime: 1996-04-15 23:27:01 UTC
Raw Date: Tue, 16 Apr 1996 07:27:01 +0800

Raw message

From: "Vladimir Z. Nuri" <vznuri@netcom.com>
Date: Tue, 16 Apr 1996 07:27:01 +0800
To: Bill Stewart <stewarts@ix.netcom.com>
Subject: Re: RSA-130 Falls to NFS - Lenstra Posting to sci.crypt.research
In-Reply-To: <199604150519.WAA09619@toad.com>
Message-ID: <199604151824.LAA07600@netcom6.netcom.com>
MIME-Version: 1.0
Content-Type: text/plain



Regarding these collaborative, "open" factorization and cracking
projects:

I have been wondering about malicious hackers getting into these
pools. Would it be possible for them to contribute false data that
corrupts the end results? Or are such anomalies easily detected and
discarded by the final processing stages?

There is a reduction step in the NFS (the number field sieve, the
technique used to factor large numbers) in which all the collected
data is combined. How sensitive is this step to spurious data? That
is, if a little bit of bad data got into the computation, would it
ruin the result completely, or is the process robust against this
kind of problem?
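
To make the question concrete, here is a toy sketch (made-up numbers
in quadratic-sieve style rather than the real NFS machinery; n = 1649
= 17 * 97 is a textbook example) of what a dependency from the
combination step looks like, and how a single falsified relation
shows up when the final congruence is tested:

from math import gcd, isqrt, prod

n = 1649                                   # toy modulus, 17 * 97

def try_dependency(relations):
    # relations: list of (x, claimed value of x^2 mod n)
    x = prod(r[0] for r in relations) % n
    rhs = prod(r[1] for r in relations)
    y = isqrt(rhs)
    if y * y != rhs:
        return "claimed product is not a square -- bad data somewhere"
    if (x * x - y * y) % n != 0:
        return "x^2 != y^2 (mod n) -- some reported relation was false"
    f = gcd(x - y, n)
    return f if 1 < f < n else "trivial factor; try another dependency"

honest   = [(41, 32), (43, 200)]           # 41^2 = 32 and 43^2 = 200 (mod 1649)
poisoned = [(41, 32), (43, 288)]           # attacker lies about 43's residue

print(try_dependency(honest))              # 17 -- a genuine factor of n
print(try_dependency(poisoned))            # congruence fails, so the lie is caught

At least in this toy picture, a bad relation does not silently
produce a wrong factor; it spoils whatever dependency it lands in,
and that failure is detectable, so other dependencies can still be
tried.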

It seems to me that in many cases these collaborative projects
virtually cannot check the validity of the supplied data without
repeating the computational effort, although there may be cheap
tests that tend to screen out "most" of the bad data.
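
For what it's worth, here is a sketch of the cheapest test I can
imagine (the factor base, relation format, and numbers are made up
for illustration): the collecting server re-verifies each reported
relation by trial division over the agreed factor base, which costs
almost nothing compared to the sieving that found the relation.

def verify_relation(n, x, claimed_factors, factor_base):
    # claimed_factors maps prime -> exponent for the residue x^2 mod n
    residue = (x * x) % n
    product = 1
    for p, e in claimed_factors.items():
        if p not in factor_base:           # prime outside the agreed base
            return False
        product *= p ** e
    return product == residue              # claimed factorization must be exact

factor_base = {2, 3, 5, 7, 11, 13, 17, 19, 23}
n = 1649                                   # same toy modulus as above
print(verify_relation(n, 41, {2: 5}, factor_base))        # True:  41^2 = 2^5 (mod n)
print(verify_relation(n, 43, {2: 5, 3: 2}, factor_base))  # False: a faked relation

A check like this screens out fabricated relations without redoing
any of the sieving work, though it says nothing about a participant
who simply withholds results or submits duplicates.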

Future implementors of these programs might amuse themselves by
trying to build in such safeguards, or at least by anticipating such
"attacks," which become more significant as the processes grow more
distributed.






Thread