1996-05-08 - Security Scruffies vs Neats, revisited

Header Data

From: smith@sctc.com (Rick Smith)
To: cypherpunks@toad.com
Message Hash: 5dbb0a05a6ed1a222e44aabd8b07cba456a9e288a08908c7a97e5227348eeb4c
Message ID: <v01540b02adb5a95e54e1@[172.17.1.61]>
Reply To: N/A
UTC Datetime: 1996-05-08 05:07:33 UTC
Raw Date: Wed, 8 May 1996 13:07:33 +0800

Raw message

From: smith@sctc.com (Rick Smith)
Date: Wed, 8 May 1996 13:07:33 +0800
To: cypherpunks@toad.com
Subject: Security Scruffies vs Neats, revisited
Message-ID: <v01540b02adb5a95e54e1@[172.17.1.61]>
MIME-Version: 1.0
Content-Type: text/plain


This is an attempt to restart the discussion in a slightly different direction.

I've been giving the topic some thought since Tim's truncated essay
appeared. But when I re-read it just now, I realized that I had been reading
my own interpretation of "scruffy" and "neat" into it.

IMHO, the critical property of AI scruffies is that they believe in the
value of some notion of emergent behavior -- if you build it right, it'll
surprise you and do something clever and unexpected to fulfill its
objectives. The "neats" have to know exactly why the behavior emerged, but
the scruffy methodology almost never allows such a detailed analysis to
succeed.

Intuitively, I tend to think of scruffies as trying to build biological
processes or concepts into computers. The goal seeking built into IP
packets, for instance. The Internet is an impossible artifact, if you view
distributed computing with '70s blinders. Nobody would want to cede so much
control to largely autonomous routers. Once you drop an IP packet into the
"system" it generally gets to its destination or dies of old age trying.

When I try to apply this style of thinking to security, I find myself going
towards layered defenses. These goal-seeking, semi-biological processes are
somewhat failure prone, so you probably need a set of them to make things
"safe." Falling back to biology, we see "security" in the various defensive
mechanisms developed in plants and animals.

But now things start to break down. "Security" these days means more than
defense -- it means access control. "Let me in" as well as "Keep them out."
How do you "tune" or "train" a semi-biological mechanism to exert such fine
control? It's not clear to me that you can. When I read Kevin Kelly's book
"Out of Control" I kept wondering who wanted to live with his semi-biological
toasters and heating plants, tolerating burned toast and frozen bathrooms
until the devices finally "learned" how to behave. (But I shouldn't get
started on that book -- I once wrote 20 pages of notes about how bogus I
thought it was.)

In other words, the problem may be with the concept of security itself.
Defense seems to be a biological concept, but security is not. It's too
artificial, involving the reflection of some abstract and arbitrary human
intent. Constructing a subsumption device to collect pop cans is one thing,
but building one to construct a cuckoo clock (or play doorman) is something
else.

Rick.

Thread