From: jim bell <jimbell@pacifier.com>
Date: Thu, 25 Apr 1996 16:12:50 -0700 (PDT)
To: cypherpunks@toad.com
Subject: Re: trusting the processor chip
Message-ID: <m0uCa4r-00094aC@pacifier.com>
MIME-Version: 1.0
Content-Type: text/plain

At 01:53 PM 4/25/96 -0400, Jeffrey C. Flynn wrote:
>I received several responses to this question. My favorite was as follows...
>
>>This is probably science fiction, particularly at the VHDL level.
>>Maybe someone could make a crime of opportunity out of a microcode
>>flaw, but there's a risk of it being found out during testing.
>>
>>To do it right would require collusion of the design and test teams.
>>They need to ensure the back door stays closed, isn't tickled by
>>"normal" testing and only opens when really requested. So a lot of
>>people are in on the secret even before it gets exploited for
>>nefarious purposes.
>>
>>And what nefarious purposes would pay for the risks and costs of this?
>>If the secret got out, the design team, product line, and company
>>would be dead in the marketplace and probably spend the rest of their
>>lives responding to lawsuits. What could you use this for that is
>>worth the risk?
This analysis seems to assume that the entire production run of a standard
product would be subverted. More likely, I think, an organization like the
NSA might build a pin-compatible version of an existing, commonly used
product, such as a keyboard encoder chip, designed to broadcast whatever is
typed at the keyboard as RFI signals. It's simple, it's hard to detect, and
it gets them what they want.
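
To make the idea concrete, here is a minimal illustrative sketch in C of
the sort of firmware logic such a trojan encoder might run. Everything in
it is my own invention for illustration (the scan codes, the on-off-keyed
framing, the carrier timing); a real chip would radiate by toggling an I/O
pad at RF rates, which is simulated here with printed output so the sketch
actually runs on an ordinary machine:

/* Hypothetical sketch: a trojan keyboard encoder leaking each scan code
 * over a weak RF side channel using on-off keying (OOK). All names,
 * timings, and the framing scheme are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

#define CARRIER_CYCLES_PER_BIT 100  /* pin toggles per "1" bit period */

/* Stand-in for toggling an I/O pin fast enough to radiate. On real
 * silicon this would be a tight loop banging a pad driver for
 * CARRIER_CYCLES_PER_BIT cycles; here we just log the activity. */
static void emit_carrier_burst(void) {
    putchar('#');   /* carrier on for one bit period */
}

static void emit_silence(void) {
    putchar('.');   /* carrier off for one bit period */
}

/* Leak one scan code: start bit, 8 data bits LSB-first, stop bit.
 * A nearby receiver tuned to the carrier recovers the byte by
 * envelope-detecting the bursts. */
static void leak_scancode(uint8_t code) {
    emit_carrier_burst();               /* start bit */
    for (int bit = 0; bit < 8; bit++) {
        if (code & (1u << bit)) emit_carrier_burst();
        else                    emit_silence();
    }
    emit_silence();                     /* stop bit */
    putchar('\n');
}

int main(void) {
    /* Simulate a few keystrokes passing through the encoder. */
    uint8_t scancodes[] = { 0x1C, 0x32, 0x21 };  /* 'A', 'B', 'C' */
    for (unsigned i = 0; i < sizeof scancodes / sizeof scancodes[0]; i++)
        leak_scancode(scancodes[i]);
    return 0;
}

A receiver some distance away, tuned to the carrier, would envelope-detect
the bursts and recover each byte, with no change to the keyboard's visible
behavior.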
Jim Bell
jimbell@pacifier.com