1994-03-04 - Security through Obscurity

Header Data

From: Hal <hfinney@shell.portal.com>
To: cypherpunks@toad.com
Message Hash: a53440fb146ce0adc560b66bb6f737c5c6117623a1ac1c9186b884b2cc63ccb2
Message ID: <199403040657.WAA02068@jobe.shell.portal.com>
Reply To: N/A
UTC Datetime: 1994-03-04 06:56:55 UTC
Raw Date: Thu, 3 Mar 94 22:56:55 PST

Raw message

From: Hal <hfinney@shell.portal.com>
Date: Thu, 3 Mar 94 22:56:55 PST
To: cypherpunks@toad.com
Subject: Security through Obscurity
Message-ID: <199403040657.WAA02068@jobe.shell.portal.com>
MIME-Version: 1.0
Content-Type: text/plain


Security through Obscurity

Here's my view of the problems with the security through obscurity
approach.  First I'll discuss encryption, then steganography.  I use
StO to mean "Security through Obscurity".

It's true that obscurity can't hurt and might help.  If you can keep
not only your key secret, but your algorithm as well, then the attacker
will have a much harder time breaking your encryption.  And traditionally
this has been done.  I understand that much of the work in breaking the
codes during WWII went into discovering the algorithm; once that was done,
finding the keys was a considerably smaller problem.

I think the "No StO" maxim refers to a design methodology for
the creation of cryptographic algorithms.  In this technique, you
divide the algorithm into those parts which must be kept secret, and
those which don't have to be.  The parts you keep secret you call the
key, and you accept that you will have to take extreme measures to
protect those secrets.  The other parts are less protected.

In other words, you conceptually draw a line between those parts which
have to be protected at all costs, and those which don't.  You then
analyze the algorithm's strength on the assumption that the secret
parts are kept secret.  You also carry out the analysis on the assumption
that the non-secret parts fall into enemy hands.  In the end, an algorithm
is judged on this basis.
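
As a toy illustration (a sketch of my own in Python; the construction
and names are made up for illustration, not a vetted design), here is
where that line might fall for a simple keyed cipher.  Everything in the
code is assumed to be in the attacker's hands; only the value of `key`
sits on the secret side.

    import hashlib

    def keystream(key, nonce, length):
        # Public part: how the keystream is generated is assumed known.
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce +
                                  counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def encrypt(key, nonce, plaintext):
        # Public part: XOR combining is assumed known as well.
        ks = keystream(key, nonce, len(plaintext))
        return bytes(p ^ k for p, k in zip(plaintext, ks))

    # Secret part: only the bytes of `key`.  The strength analysis is
    # done assuming the attacker has all of the code above and the nonce.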

In the context of this design technique, StO would refer to the hope that
the non-secret parts are also kept from enemy hands.  While this may be
desirable and beneficial, it breaks the rules of the method.

The advantage of this method is that it allows you to do a clean cost
versus benefit analysis.  You calculate the cost in terms of what it takes
to keep the keys secret, and you calculate the benefits in terms of how
much security you gain if you keep the keys, and only the keys, secret.

To also give credit for the additional security of keeping the non-key
portions secret, you would also need to calculate the costs of keeping
those parts secret.  Since historically it has been very difficult to keep
all parts of a cryptographic method secret, one has to consider these costs
to be very high.  Avoiding StO means avoiding falling into the trap of
counting the benefits of keeping the non-key parts secret without counting
the costs.

In this light, there is no inherent violation of the NoStO principle in
a cryptographic system which keeps the algorithm secret.  It simply means
that the algorithm has to be considered as secret as the key, and protected
just as securely as the key is protected.  In many circumstances this would
be excessively costly but in some limited situations it may be practical.
As long as you fully recognize that this line between the secret and the
non-secret portions is drawn to put the algorithm on the "secret" side,
you are properly avoiding StO.

In the context of commercial or public-domain cryptographic algorithms,
it is basically impossible to keep algorithms secret.  That is why any
cryptosystem of this nature which relies on a secret algorithm is scorned
as violating the NoStO principle.  It is generally not practical to expect
to keep a secret which is made widely available.

To sum up, obscurity is not bad.  What is bad is to confuse obscurity
with security.

Now, in the context of steganography, we should make clear what problem
we are trying to solve.  There are several components to this problem,
but I will focus just on the last step: hiding one bit pattern in
another.  Generally we do this by replacing some of the bits in the
target data with bits from the data we are hiding.
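
In the simplest version of this, the bits being replaced are the least
significant bits of the cover data.  A rough sketch in Python
(illustrative only):

    def embed(cover, payload_bits):
        # Replace the low bit of each cover byte with one payload bit.
        out = bytearray(cover)
        if len(payload_bits) > len(out):
            raise ValueError("payload too large for cover")
        for i, bit in enumerate(payload_bits):
            out[i] = (out[i] & 0xFE) | bit
        return bytes(out)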

In encryption, the opponent's desire is to find out the original message.
What is the opponent's desire in steganography?  I feel it is to be able
to prove or determine with some degree of certainty that there is a
hidden message.  We use steganography in a context where sending such a
message openly is for some reason undesirable.  Hence our goal is to
prevent the opponent from knowing that a message exists.

A test, then, for the success of a steganographic technique is this:
given some sampling of data items, half of which have embedded hidden
messages, can the opponent guess which ones have such messages with
better than 50% accuracy?  If not, the steganography is fully successful.
If he can do slightly better than 50%, it may still be useful depending
on the situation.  If he can guess with 100% accuracy, the steganography
has failed and is totally worthless.
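
That test can be pictured as an experiment: hand the opponent's best
detector a pile of items, half of which carry hidden messages, and score
its guesses.  A sketch (the detector is whatever the opponent can come
up with):

    import random

    def distinguisher_score(make_cover, embed_message, detect, trials=1000):
        # Score a detector on a 50/50 mix of plain covers and covers that
        # carry hidden messages.  A score near 0.5 means the steganography
        # is holding up; a score near 1.0 means it has failed completely.
        correct = 0
        for _ in range(trials):
            cover = make_cover()
            has_message = random.random() < 0.5
            item = embed_message(cover) if has_message else cover
            if detect(item) == has_message:
                correct += 1
        return correct / trials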

Now, how does the NoStO maxim guide our attempts to evaluate steganographic
algorithms?  Again, the basic principle would be a need to separate that
which would be kept secret from that which would be publicly known.  Any
system which relies on keeping secret some information which must be
widely disseminated is not correctly accounting for costs when it touts
its benefits.

In the systems we have been discussing for a layered approach to
steganography, the actual embedding step has no secret component.  Rather,
the message is first encrypted and possibly transformed in such a way
that it is statistically identical to the bits which it is replacing.
The actual steganographic step simply does the replacement.
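
In outline (a sketch only; `encrypt` stands for whatever keyed cipher is
used, and `embed` is the plain replacement step sketched earlier):

    def bytes_to_bits(data):
        return [(byte >> i) & 1 for byte in data for i in range(8)]

    def send(key, plaintext, cover):
        # Keyed layer: the ciphertext is what must be statistically
        # identical to the bits it will replace.
        ciphertext = encrypt(key, plaintext)
        # Steganographic layer: nothing but replacement, no secrets here.
        return embed(cover, bytes_to_bits(ciphertext))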

In this layered approach, there is no provision for key information to be
used in steganography.  Rather, the receiver of the message has only
publicly available data.  This means that when we "draw our line" we
exclude nothing from the knowledge of our opponent.  In counting the
benefits of the steganographic algorithm we assume that the opponent
will use exactly the same technique to de-steganize the message as our
intended recipient will.
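
Put another way, the extraction step is as available to the opponent as
it is to the recipient, because there is nothing secret in it:

    def extract(data, nbits):
        # Read back the low bits.  Friend and foe run exactly the same code.
        return [data[i] & 1 for i in range(nbits)]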

Therefore, we are forced to assume that the opponent can successfully
extract the hidden message.  Now, the question he must still answer is:
is this in fact a message, or is it just random noise?  In order to meet
the goal above of making such a guess impossible with better than 50-50
chances, it follows that the message must appear identical to random
noise.  Any pattern in the message, such as a plaintext header, will make
the steganography useless.
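
For instance, if the hidden data began with a recognizable plaintext
header ("MAGIC" here is just a made-up marker), the opponent's test is
trivial:

    def bits_to_bytes(bits):
        return bytes(sum(bits[i + j] << j for j in range(8))
                     for i in range(0, len(bits) - 7, 8))

    def looks_like_stego(data):
        # Extract as the recipient would; any fixed header gives it away.
        return bits_to_bytes(extract(data, 8 * 5)) == b"MAGIC"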

This is also why proposals to scramble or permute the bits as they go
into the data, or to use a special offset instead of the beginning of
the data (then wrapping the bits around when we come to the end) do not
fundamentally help the situation.  By the basic premise above, we assume
that the opponent will be able to undo such artifices just as the
intended recipient will.  This way, again, we count our costs and benefits
on fair grounds.

Now, it is true that this is assuming that there is no "key" information
used in the steganography.  The NoStO principle would lead us to
investigate keyed steganography, where the receiver has specific secret
information which the opponent would not have.  But if we are going to
do this, we have to accept the costs.  That key must be kept just as
secret as the keys in an encryption system.  We can't just let it be
something obscure like a checksum based on a public key, information which
the opponent will have as well.  It has to be *secret*.  That is what
NoStO tells us.  If we want the benefit of a key, we have to pay the cost.
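
One way such a key might be used (only a sketch of the idea, not a
recommendation) is to let a genuinely secret key select which positions
in the cover carry the payload bits:

    import hmac, hashlib, random

    def keyed_positions(key, cover_len, nbits):
        # Derive the embedding positions from the secret key.  This key
        # must be protected exactly as an encryption key would be.
        seed = hmac.new(key, b"stego-positions", hashlib.sha256).digest()
        return random.Random(seed).sample(range(cover_len), nbits)

    def keyed_embed(key, cover, payload_bits):
        out = bytearray(cover)
        positions = keyed_positions(key, len(out), len(payload_bits))
        for pos, bit in zip(positions, payload_bits):
            out[pos] = (out[pos] & 0xFE) | bit
        return bytes(out)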

It's not clear whether keyed steganography has any benefits over the
unkeyed system discussed above which is used as part of a chain which
includes (presumably keyed!) encryption.  It would seem that the stego
would still have to match the statistics of the bits being replaced,
and if you can do that then the unkeyed approach would work.  But perhaps
there are useful solutions along these lines.

The important point, again, is that if you want a secret, you have to
keep it secret.  Looking at the advantages of a system which benefits if
some information is withheld from the opponent without calculating the
costs of actually keeping that information secret is the foolhardy
behavior which the NoStO principle warns against.

Hal Finney
hfinney@shell.portal.com




