1997-01-02 - subliminal channels and software failure modes

Header Data

From: David Molnar <bigdaddy@shell.skylink.net>
To: cypherpunks@toad.com
Message Hash: b45475a8b15e1aa721725d2d225088a252118fb7704085b95861f7bf7bacc1a3
Message ID: <Pine.SUN.3.91.970102013835.2511A-100000@shell.skylink.net>
Reply To: N/A
UTC Datetime: 1997-01-02 10:05:54 UTC
Raw Date: Thu, 2 Jan 1997 02:05:54 -0800 (PST)

Raw message

From: David Molnar <bigdaddy@shell.skylink.net>
Date: Thu, 2 Jan 1997 02:05:54 -0800 (PST)
To: cypherpunks@toad.com
Subject: subliminal channels and software failure modes
Message-ID: <Pine.SUN.3.91.970102013835.2511A-100000@shell.skylink.net>
MIME-Version: 1.0
Content-Type: text/plain


Just a thought I had the other day. Probably unconsciously plagiarized, but 
bear with me.

What about using subliminal channels as a tool to signal software failure? 

That is, suppose we define some kind of condition under which the software 
could continue to work, but should not. In addition, simple cessation of 
function is not possible, or not advisable. The only example that comes to 
mind off the top of my head is "stolen" software...though perhaps one 
might also use subliminal channels in diagnostic equipment, if competitors 
are assumed to be listening in? 

When such a condition is met, the software modifies its output (which 
should be signed with something that has a nice, big subliminal 
channel...DSA, say, whose nonce can carry hidden bits) to signal the 
condition and the particulars. After modifying itself to produce the 
altered output, it deletes the code responsible for the modification. 
Unless caught in the act, or compared against a legitimate copy, the 
application appears no different from before.
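The mechanism above can be sketched with the classic narrowband subliminal channel in a DSA-style signature: the signer retries the random nonce until the signature's r component encodes the flag bit in its parity. Everyone can verify the signature as usual; only a party who knows the convention reads the bit. All parameters below are toy values chosen purely for illustration, not a real deployment:

```python
import hashlib
import random

# Toy DSA-style parameters: q | p-1, and g has order q mod p.
p, q, g = 607, 101, 64
x = 57                # private key (illustrative value)
y = pow(g, x, p)      # public key

def H(m):
    # Hash the message into Z_q.
    return int.from_bytes(hashlib.sha256(m).digest(), 'big') % q

def sign_with_bit(m, bit):
    # Narrowband subliminal channel: retry the nonce k until the
    # parity of r equals the hidden bit we want to signal.
    while True:
        k = random.randrange(1, q)
        r = pow(g, k, p) % q
        if r == 0 or (r & 1) != bit:
            continue
        s = (pow(k, -1, q) * (H(m) + x * r)) % q
        if s != 0:
            return r, s

def verify(m, r, s):
    # Standard DSA-style verification; the hidden bit is invisible here.
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    u1, u2 = (H(m) * w) % q, (r * w) % q
    return (pow(g, u1, p) * pow(y, u2, p) % p) % q == r

def extract_bit(r):
    # Only someone who knows the convention interprets r's parity.
    return r & 1
```

The signatures remain valid to any verifier; the "stolen" flag rides along unnoticed unless you know to look at the parity of r.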

I was thinking in terms of crypto (or other) software that attempts to 
personalize itself to a particular machine. If someone steals the HD or 
grabs the keys and program, their output will be 'tainted', alerting 
legitimate users to the theft. Hardware disconnected from its normal 
environment might use such a channel to indicate its 'stolen' or 
'temporarily down - come fix' status. 
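The personalization step might look something like this minimal sketch; the fingerprint source (the MAC address via `uuid.getnode`) is just an assumption for illustration, and a real scheme would mix in more identifiers:

```python
import hashlib
import uuid

def machine_fingerprint():
    # Hypothetical personalization: hash a hardware identifier.
    # uuid.getnode() returns the host's MAC address as an integer.
    return hashlib.sha256(str(uuid.getnode()).encode()).hexdigest()

# Recorded once, at install time, on the legitimate machine.
EXPECTED = machine_fingerprint()

def tainted():
    # The failure condition: the software is running somewhere else,
    # so its signed output should start carrying the subliminal flag.
    return machine_fingerprint() != EXPECTED
```

On the original machine `tainted()` stays false; on a copied disk or stolen key material it flips, and the output-signing code can switch on the hidden channel.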

This is security through obscurity...its chances of working are about 
the chances that no one notices the change or finds the code responsible. 
I suppose the software industry (and the pirates) will be only too happy 
to provide examples of many attempts at such schemes. For this reason, I 
would ask only whether it makes sense for limited distributions of 
software or hardware products. Is this kind of system already in use? 

Any ideas on making it more applicable to general distribution, or has 
this already been tried and discarded?

David Molnar
