1996-12-13 - Re: In Defense of Anecdotal Evidence

Header Data

From: nobody@huge.cajones.com (Huge Cajones Remailer)
To: cypherpunks@toad.com
Message Hash: 3fbfb48e2c56c6d6cb871d166099ae79907d6dbeaa13dda169bdf915d2a8f538
Message ID: <199612132226.OAA13677@mailmasher.com>
Reply To: N/A
UTC Datetime: 1996-12-13 22:26:22 UTC
Raw Date: Fri, 13 Dec 1996 14:26:22 -0800 (PST)

Raw message

From: nobody@huge.cajones.com (Huge Cajones Remailer)
Date: Fri, 13 Dec 1996 14:26:22 -0800 (PST)
To: cypherpunks@toad.com
Subject: Re: In Defense of Anecdotal Evidence
Message-ID: <199612132226.OAA13677@mailmasher.com>
MIME-Version: 1.0
Content-Type: text/plain



Hats off to Rob Carlson for a great article!

At 9:54 AM 12/13/1996, Rob Carlson wrote:
>On Thu, 12 Dec 1996 14:12:23 -0800, Huge Cajones Remailer wrote:
>>Statistics are a useful tool, but they have their problems.  Their
>>accuracy is often in doubt.  Most scientific data comes with an
>>error analysis so you can tell what the figure means.  For some
>>reason statisticians never do this, so we cannot tell whether their
>>numbers are accurate to within 0.1%, 1.0%, 10%, or even worse.
>>
>>There are many other problems.  For instance, users of statistics
>>assume they have a random sample, even in cases where that is far
>>from clear.
>
>[ List of other problems deleted ]
>
>Of course, anecdotal evidence also suffers from all of these
>problems, and in greater magnitude, since it is a special case of
>statistical evidence: a non-random sample of one with no controls
>for observer bias.

Excellent point.

On the other hand, if everybody acts correctly on the basis of their
own (non-random) experience, then in general the right outcomes will
occur.

This is rational if the third-hand statistical evidence is
unreliable.  What you are implicitly assuming is that the studies one
reads were done honestly and competently.  Yet the chain of evidence
is rather weak.  Typically, we don't even know the people who did the
study.

Given the many contradictory results obtained by social statisticians,
we have substantial evidence that there is something wrong with their
methods or their application.
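
One way this can happen, sketched below with invented numbers: if
studies are small and report only whether an effect clears some
threshold, sampling noise alone will produce contradictory conclusions
about the same underlying truth.  (The effect size, noise level, and
threshold here are all made up for illustration; this is a sketch, not
a model of any actual literature.)

    # Toy simulation: 100 small studies of one true effect, each
    # reporting "positive", "negative", or "no effect" depending on
    # whether its sample mean clears a two-sided threshold.  All
    # numbers are invented for illustration.
    import random

    random.seed(1)
    true_effect = 0.2     # assumed true effect size
    noise_sd    = 1.0     # per-subject noise
    n_subjects  = 25      # a small study
    threshold   = 0.4     # arbitrary reporting threshold

    verdicts = {"positive": 0, "negative": 0, "no effect": 0}
    for study in range(100):
        mean = sum(random.gauss(true_effect, noise_sd)
                   for _ in range(n_subjects)) / n_subjects
        if mean > threshold:
            verdicts["positive"] += 1
        elif mean < -threshold:
            verdicts["negative"] += 1
        else:
            verdicts["no effect"] += 1
    print(verdicts)

Run it with different seeds and the mix of verdicts shifts, even
though the underlying effect never changes.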

Another way to judge the experts is through selected in-depth studies
of their work.  I have not found the results to be encouraging.

>>The advantage of anecdotal evidence (in the sense we have been using
>>it) is that the person who is telling you the anecdote was there and
>>saw it.  You can cross-examine them and get a full understanding of
>>the evidence provided.
>
>The reporting of evidence involves different issues. If you want to
>believe that women are actually cut in two or that politicians are
>telling the truth anytime their lips are moving, that's one thing. If
>you want to tell me it's true because you personally observed it,
>that's quite another.
>
>Given the failures of humans as observational tools, your story is
>unverifiable by me.  Perhaps through effective cross-examination I
>can prove you wrong, but I can never prove you right with such a
>technique.  That will require other evidence outside the control of
>the observer (statistical is just one available).

Occasionally we may teach somebody else about something they observed
and change their mind.  This is not the same thing as proof, but it
is worthwhile.

In other cases, we may have beliefs about the integrity of our
observer.  We may believe that they will not intentionally lie.  If
so, we can separate out their interpretations, which may be mistaken,
from the exact details they can recall, which we can trust.

Even if the observer does not accept our interpretation, that does
not mean we can learn nothing of interest.

>This doesn't make studies or statistical evidence true, just more
>reliable than anecdotal evidence.

I should make clear that I have not ruled out statistical evidence as
a tool; we simply have to be aware of its limitations.

>Humans who will lie about their observations will also produce flawed
>studies. Again, the former (anecdotal) is unverifiable, but I can
>check the latter (statistical) independently.

Actually, it is cheaper and easier to develop an understanding of the
reliability of anecdotal evidence.  Often we may have known the
observer for some time and be able to form theories about their
character and ability to accurately interpret what they have seen.

We can ask them what they saw on different occasions and see if we get
about the same story back.  We can think about whether the person has
intentionally lied in the past and, if so, under what circumstances.
We can ask ourselves what motivation the person might have to lie.
Is there any benefit to giving a particular story?

>Relying on anecdotal evidence makes you susceptible to the magicians
>of the world. The honest ones use mirrors and their need is to
>entertain you enough to get your money. The rest use anecdotal
>evidence and emotional arguments (verbal misdirection?). Their needs
>are left as a test of the reader's naivete.

What needs do the white-cloaked priests of social science satisfy
through their work?  Very few of these people are independent.  They
are often paid by people whose interest in the truth is in question.
That means that the temptation to fudge (and humans find ways to
rationalize such actions) is very powerful.  The people preparing the
statistical studies are often chasing the fame associated with career
advancement.  None of this is conducive to the search for truth.

>Evidence that can be verified independently by many observers
>increases the reliability. Experiments and polls can be done by me,
>thus eliminating your bias.  Independent verification can also check
>for errors and establish the parameters under which the evidence is
>true.  Studies are done with certain assumptions and controls. The
>evidence loses its reliability when removed from this context.

I have serious doubts about these methods.  I am not a statistician,
so there is a possibility that I am simply ignorant.

However, I would expect that if these methods of determining the
accuracy of statistics were effective, it would be possible to provide
some sort of error analysis.

When we get a figure for GDP, we know that it is highly unlikely to
be exactly correct.  What is the probability that it is low by 10%?  I
fail to see how the figure can even be useful without this
information.
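
To make this concrete, here is a minimal sketch, assuming (purely for
illustration) that the figure's relative error is normally distributed
with a 3% standard error.  Neither the model nor the number comes from
any real statistical agency.

    # Hypothetical error analysis for a reported aggregate figure.
    # Assumption (illustrative only): the relative error is normally
    # distributed with a standard error of 3%.
    import math

    def normal_cdf(x):
        # Standard normal CDF via the error function.
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    rel_std_error = 0.03   # assumed 3% relative standard error

    # Probability the reported figure is low by 10% or more, i.e.
    # the relative error is at or below -0.10.
    p_low_by_10 = normal_cdf(-0.10 / rel_std_error)
    print("P(low by >= 10%%): %.5f" % p_low_by_10)

With the assumed 3% standard error the answer is well under one in a
thousand; with a 10% standard error it is about one in six.  Without
the error analysis we cannot even begin the calculation.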

In the case of a complex study involving the measurement of biases to
adjust the final conclusions, it would be most useful to discuss the
probability that each bias was not measured correctly.  This must
surely affect our final results, especially when many biases and other
measurements are combined.
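
A rough sketch of the compounding, assuming (again with invented
numbers) that the adjustments are independent and multiplicative, so
their relative standard errors combine approximately in quadrature:

    # Illustration: uncertainty compounds as bias adjustments are
    # applied.  The factors and uncertainties below are invented;
    # for independent multiplicative corrections, relative standard
    # errors combine roughly in quadrature.
    import math

    raw_estimate = 100.0   # hypothetical raw figure
    raw_rel_err  = 0.03    # its assumed relative standard error

    # (adjustment factor, relative standard error of that factor)
    corrections = [(1.08, 0.05),   # e.g. a non-response adjustment
                   (0.95, 0.04),   # e.g. a sampling-frame adjustment
                   (1.02, 0.06)]   # e.g. a measurement adjustment

    adjusted  = raw_estimate
    total_var = raw_rel_err ** 2
    for factor, rel_err in corrections:
        adjusted  *= factor
        total_var += rel_err ** 2

    print("adjusted estimate: %.1f" % adjusted)
    print("combined relative error: %.1f%%"
          % (100 * math.sqrt(total_var)))

Three modest corrections, each uncertain, turn a 3% error into roughly
a 9% one.  Every bias that is measured imperfectly widens the error
band of the final number.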

Red Rackham