1995-10-31 - Re: /dev/random for FreeBSD [was: Re: /dev/random for Linux]

Header Data

From: "Theodore Ts'o" <tytso@MIT.EDU>
To: Mark Murray <mark@grondar.za>
Message Hash: 3855c8da3708a427f5304a505d3ef8e21e9f3bb9e2fc989af98c5c9826a59d83
Message ID: <9510312008.AA27100@dcl.MIT.EDU>
Reply To: <199510311715.TAA05821@grumble.grondar.za>
UTC Datetime: 1995-10-31 21:48:18 UTC
Raw Date: Wed, 1 Nov 1995 05:48:18 +0800

Raw message

From: "Theodore Ts'o" <tytso@MIT.EDU>
Date: Wed, 1 Nov 1995 05:48:18 +0800
To: Mark Murray <mark@grondar.za>
Subject: Re: /dev/random for FreeBSD [was: Re: /dev/random for Linux]
In-Reply-To: <199510311715.TAA05821@grumble.grondar.za>
Message-ID: <9510312008.AA27100@dcl.MIT.EDU>
MIME-Version: 1.0
Content-Type: text/plain


   Date: Tue, 31 Oct 1995 19:15:35 +0200
   From: Mark Murray <mark@grondar.za>

   >    Is this millisecond accuracy quantifiable in terms of bits of entropy?
   >    If so, the Ethernet is surely safe?
   > 
   > Well, no.  If the only timing source you're using is the 100 Hz clock,
   > the adversary will have a better timebase than you do.  So you may be
   > adding few or even no bits of entropy which can't be deduced by the
   > adversary.

   In a 386 or a 486 (under FreeBSD at least) there is a 1 MHz clock
   available.  How would _this_ be?  On the Pentium there is the <whatsit?>
   register which will give the board's oscillator (or 90 MHz), I believe.

What's HZ set at for FreeBSD?  Most of the x86 Unixes have generally
used HZ set at 100, because interrupt overhead on an x86 isn't cheap,
and so you want to limit the number of clock interrupts.

You can sample the timing clock, but it turns out to be rather expensive
to do so: it takes several I/O instructions, each of which is delayed if
it has to go through your 8 MHz ISA bus.  We've moved away from using
the hardware clock on the 386 because of these overhead concerns.  On the
Pentium, we use the clock cycle counter.
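
For concreteness, here is a rough sketch of what reading the Pentium
cycle counter looks like from C with GCC inline assembly; the function
name and the little test harness are purely illustrative, not the actual
driver code:

    #include <stdio.h>

    /* Read the Pentium's 64-bit cycle counter with the RDTSC instruction
     * (the count comes back in EDX:EAX).  One instruction, no ISA-bus
     * I/O cycles. */
    static unsigned long long read_cycle_counter(void)
    {
        unsigned int lo, hi;

        __asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long) hi << 32) | lo;
    }

    int main(void)
    {
        /* The low-order bits change millions of times between 100 Hz
         * ticks, which is what makes this a useful interrupt timestamp. */
        printf("cycles: %llu\n", read_cycle_counter());
        return 0;
    }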

   What then is a body to do? Preserve all _verifiable_ randomness like gold?
   Dish it out under some quota? A denial of service attack would be

Well, verifiable randomness really is like gold.  It's a valuable
resource.  On a time-sharing system, where you really want to share
*all* system resources equitably, perhaps there should be a quota system
limiting the rate at which a user is allowed to "consume" randomness.
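
As a toy illustration of what such a quota might look like ---- a
per-user token bucket that refills at a fixed rate ---- here is a sketch
in C.  The structure, names, and numbers are all hypothetical; nothing
like this exists in the current driver:

    /* Hypothetical per-user entropy quota: a token bucket that refills
     * at a fixed number of bytes per second. */
    #include <stddef.h>
    #include <time.h>

    #define QUOTA_BYTES_PER_SEC 16    /* refill rate */
    #define QUOTA_BURST        256    /* maximum accumulated credit */

    struct entropy_quota {
        double credit;          /* bytes the user may still read */
        time_t last_refill;
    };

    /* Return how many of the requested bytes this user may read now. */
    static size_t quota_allow(struct entropy_quota *q, size_t want)
    {
        time_t now = time(NULL);

        q->credit += (double) (now - q->last_refill) * QUOTA_BYTES_PER_SEC;
        if (q->credit > QUOTA_BURST)
            q->credit = QUOTA_BURST;
        q->last_refill = now;

        if (want > q->credit)
            want = (size_t) q->credit;
        q->credit -= want;
        return want;            /* may be 0: caller blocks or returns EAGAIN */
    }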

On the other hand, most Unix systems *aren't* great at doing this sort
of resource allocation, and there are enough other ways of launching
denial of service attacks.  "while (1) fork();" will generally bring
most systems to their knees, even with per-user limits on the number of
processes.  Most Unix systems don't protect against one user
grabbing all available virtual memory.  And so on....

   while :; do
	   cat /dev/random > /dev/null
   done

   Severely limiting most decent folk's chance at getting PGP to work.

If you have such a "bad user" on your system, and the PGP /dev/random
code is written correctly, it will only be a denial of service attack.
But it'll be possible to identify who the bad user is on your system,
and that person can then be dealt with, just as you would deal with some
user who used up all of the virtual memory on the system trying to
invert a 24x24 matrix, or some such ---- in both scenarios, the ability
of other users to run PGP is severely limited.

There's nothing special about /dev/random in this sense; it's just
another system resource which can be abused by a malicious user, just
like virtual memory or process table slots.

   > So I like to be conservative and use limits which are imposed by the
   > laws of physics, as opposed to the current level of technology.  Hence,
   > if the packet arrival time can be observed by an outsider, you are at
   > real risk in using the network interrupts as a source of entropy.
   > Perhaps it requires building a very complicated model of how your Unix
   > scheduler works, and how much time it takes to process network packets,
   > etc.  ---- but I have to assume that an adversary can very precisely
   > model that, if they were to work hard enough at it.

   This is a strong argument for some form of specialised noise source.
   I have read of methods of getting this from turbulent air flow in a hard
   drive (an RFC, I believe).

Yes, ultimately what you need is a good hardware random number generator.
There are many good choices: radioactive decay, noise diodes, etc.

I'm not entirely comfortable with the proposal of using air flow
turbulence from a hard drive, myself, because the person who suggested
this still hasn't come up with a decent physical model which tells us
how many bits of true entropy this system really provides.  What Don
Davis did was to develop more and more sophisticated models, and
demonstrate that his more sophisticated models weren't able to explain
the "randomness" that he observed in the time that it took to complete
certain disk requests.  However, that doesn't prove that the
"randomness" is really there; it's just that he couldn't explain it
away.  It might be that the NSA has a much better model than Don Davis
was able to come up with, for example, and the amount of randomness from
air turbulence really is a lot less than one might expect at first
glance.
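
To put the point another way: measurements alone only give you an upper
bound.  A naive first-order estimate over timing samples, like the
sketch below (my own illustration, not Don Davis's methodology), reports
the entropy that this particular model fails to explain; a better model
could always drive the true figure lower.

    /* First-order Shannon estimate over the low byte of timing deltas.
     * Whatever structure a better model could find would only lower the
     * true entropy below this figure; the estimate can never prove that
     * entropy is really there.  Purely illustrative. */
    #include <math.h>
    #include <stddef.h>

    double shannon_bits_per_sample(const unsigned char *samples, size_t n)
    {
        unsigned long count[256] = { 0 };
        double bits = 0.0;
        size_t i;

        for (i = 0; i < n; i++)
            count[samples[i]]++;

        for (i = 0; i < 256; i++) {
            if (count[i] == 0)
                continue;
            double p = (double) count[i] / (double) n;
            bits -= p * log2(p);
        }
        return bits;    /* upper bound on entropy per sample, in bits */
    }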

Short of good hardware sources, the other really good choice is
unobservable inputs.  Hence, the Linux driver is hooked into the
keyboard driver and the various bus mouse drivers.  Those are really
wonderful sources of randomness, since they're generally not observable
by an adversary, and humans tend to be inherently random.  :-)
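
As a greatly simplified sketch of the idea ---- not the actual Linux
driver code, which uses a larger pool, a twisted feedback shift register
for mixing, and entropy estimation from timing deltas ---- each keystroke
event mixes its scancode and a timestamp into a pool and credits a small,
conservative amount of entropy.  All names and constants here are
illustrative:

    #include <stdint.h>

    #define POOL_WORDS 128

    struct entropy_pool {
        uint32_t pool[POOL_WORDS];
        unsigned int pos;
        unsigned int entropy_count;     /* in bits */
    };

    static void mix_word(struct entropy_pool *p, uint32_t word)
    {
        /* Stir the new word in with a rotate, an xor, and a second tap;
         * the genuine driver uses a CRC-like twist over several taps. */
        uint32_t w = (word << 7) | (word >> 25);

        p->pool[p->pos] ^= w ^ p->pool[(p->pos + 103) % POOL_WORDS];
        p->pos = (p->pos + 1) % POOL_WORDS;
    }

    void add_keyboard_event(struct entropy_pool *p, uint8_t scancode,
                            uint32_t timestamp)
    {
        mix_word(p, scancode);
        mix_word(p, timestamp);
        if (p->entropy_count < POOL_WORDS * 32)
            p->entropy_count += 2;      /* conservative credit per event */
    }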

						- Ted

Thread