1996-05-10 - Re: Mandatory Voluntary Self-Ratings

Header Data

From: "Vladimir Z. Nuri" <vznuri@netcom.com>
To: "Joseph M. Reagle Jr." <reagle@MIT.EDU>
Message Hash: f716fbca51a320ec609ef977cb7af00d559f030cb8d5797d7659f488f025d634
Message ID: <199605091846.LAA28295@netcom12.netcom.com>
Reply To: <9605091517.AA09590@rpcp.mit.edu>
UTC Datetime: 1996-05-10 11:34:31 UTC
Raw Date: Fri, 10 May 1996 19:34:31 +0800

Raw message

From: "Vladimir Z. Nuri" <vznuri@netcom.com>
Date: Fri, 10 May 1996 19:34:31 +0800
To: "Joseph M. Reagle Jr." <reagle@MIT.EDU>
Subject: Re: Mandatory Voluntary Self-Ratings
In-Reply-To: <9605091517.AA09590@rpcp.mit.edu>
Message-ID: <199605091846.LAA28295@netcom12.netcom.com>
MIME-Version: 1.0
Content-Type: text/plain


JR:

>        I've figured out where my differences between myself and others
>lay. The _only_ system and service that I am aware of that is distributing
>PICS labels is RSAC. (http://www.rsac.org) They are what one could call an
>objective and non-arbitrary content rating system rather than an
>"appropriateness" system. 

I don't like the use of the term "objective" here. (I object!!)
this is the point that I brought up in an earlier post: some people
seem to think that a label like "sex: moderate" is in fact an
"objective" label. but it is a subjective judgement. perhaps
a judgement like "child approved" is more subjective than "sex: moderate",
but they are both value judgements.

"objective" is a pretty important term to apply to anything, including
ratings. I'd like to see it reserved for systems that require no
human judgement whatsoever, i.e. are automated. for example, I would
say that a engine that creates ratings based on keywords found in
a document would be "objective". but anything that involves a human
decision cannot be called "objective" in my view.
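
to make concrete what I mean by a purely algorithmic rater, here is a
toy sketch in python. the keyword lists and thresholds are invented by
me for illustration; the point is only that no human judgement occurs
anywhere in the loop.

# toy "objective" rater: ratings come from keyword counts alone.
# the vocabulary and thresholds here are made up for illustration.
KEYWORDS = {
    "violence": {"gun", "blood", "kill", "fight"},
    "sex": {"nude", "erotic", "xxx"},
}

def rate(text):
    """return a category -> level rating based only on keyword counts."""
    words = text.lower().split()
    ratings = {}
    for category, vocab in KEYWORDS.items():
        hits = sum(1 for w in words if w in vocab)
        if hits == 0:
            ratings[category] = "none"
        elif hits < 3:
            ratings[category] = "mild"
        else:
            ratings[category] = "heavy"
    return ratings

print(rate("a fight broke out and there was blood everywhere"))
# -> {'violence': 'mild', 'sex': 'none'}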

the RSAC system seems fairly reasonable to me. it appears to predate
PICS somewhat and to have picked up on it once it was available.

this is from the web site you mention, in the press releases section:

>   The RSACi rating system is a fully-automated, paperless system that     
>   relies on a quick, easy-to-use questionnaire that the Web master
>   completes at RSAC's homepage for free. The questionnaire runs through
>   a series of highly specific questions about the level, nature and
>   intensity of the sex, nudity, violence, offensive language (vulgar or
>   hate-motivated) found within the Web master's site.                   
>   
>   Once completed, the questionnaire is then submitted electronically to
>   the RSAC Web Server, which tabulates the results and produces the html
>   advisory tags that the Web master then places on their Web site/page.
>
>   A standard Internet browser, or blocking device that has been
>   configured to read the RSACi system can recognize these tags, enabling
>   parents who use the browser to either allow or restrict their    
>   children's access to any single rating or combination of ratings.  

now, it seems the author might as well put the tags in his material
himself instead of going through this submission process. furthermore,
I again object to this being called an "objective" system. first, the
author of the page has to answer the questionnaire honestly. secondly,
we are talking about the author himself, not an impartial third party.
even if the rating party were not the author, I would hesitate to
call it "objective". (unfortunately "objective" is a term applied to
things like newspapers that have detectable slants. my guess is
that we have an objective-subjective continuum, and imho only purely
computational, algorithmic processes are truly "objective".)

also, above we have the claim that the system is "fully automated". what???
it sounds to me like the page designer has to submit a special form
to this service and then go grab the tags to paste into his own
page by hand. this is "fully automated"???

I'm glad that RSAC is doing what they are doing, but the above system
is not objective, and neither is it a "market rating" in the sense
I described-- a third-party rating by someone other than the creator or 
author of the document.

also, JR, you say the system does not determine "appropriateness".
but in my view it does, indirectly. an author can "falsify" his submission
to say that his page has no sex or violence. (who is to say he is
wrong? the internet ratings police?) this will implicitly determine
the "appropriateness" of his page for people whose browsers screen
on the affected categories.
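
here is the screening step as I understand it, sketched in python. I
am assuming the browser has already parsed the label into category
levels, and the parental limits are invented; the point is that a
falsified "s 0" sails straight through the thresholds:

# parent's per-category limits (RSACi-style 0-4 levels; values invented)
MAX_ALLOWED = {"n": 0, "s": 0, "v": 1, "l": 2}

def allowed(reported):
    """true iff every self-reported level is within the parent's limit."""
    return all(reported.get(cat, 0) <= limit
               for cat, limit in MAX_ALLOWED.items())

honest    = {"n": 0, "s": 3, "v": 0, "l": 0}
falsified = {"n": 0, "s": 0, "v": 0, "l": 0}  # same page, lying author

print(allowed(honest))     # False -- blocked
print(allowed(falsified))  # True  -- the lie decides "appropriateness"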

in general, I think all the examples I have seen so far show the
superiority of a third-party market-rating system over self-ratings.

self-ratings can be corrupted and falsified by creators. third-party
ratings are more useful imho because you have a third party with their
own agenda, and you implicitly agree to that agenda. you don't
know the agenda of the author of the document, but you do, roughly,
know that of the rating service. (e.g. they might be the "Christian
Coalition", "Atheist Zealots", or whatever)

self-ratings have the problem that people are going to pressure page
writers to include certain kinds of tags. third-party ratings have
no such deficiency. in fact the system is invisible to the page 
creator, as it should be. (in my view ratings and the content should
be made as independent from each other as possible in the sense
that ratings are not tied up in the content itself)
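
here is the decoupling I have in mind, as a python sketch. the
"bureau" dict stands in for a hypothetical third-party label server
keyed by URL; nothing below is part of PICS or RSAC as published, it
is just the shape of the idea:

# hypothetical third-party label bureau: ratings live with the rater,
# keyed by URL, so the page itself never carries (or tampers with) them.
BUREAU = {
    "http://www.example.com/page.html": {"n": 0, "s": 3, "v": 1, "l": 0},
}

def third_party_label(url):
    """ask the bureau we trust, not the page author, for the rating."""
    return BUREAU.get(url)

print(third_party_label("http://www.example.com/page.html"))
# -> {'n': 0, 's': 3, 'v': 1, 'l': 0}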

if the press release excerpt is any measure, RSAC's announcements are
awfully misleading in their use of terminology, and I hope they get
their act together in this regard.

if there is a market-driven RSAC rating service that isn't described
in the above article, I'd like to see it. but the excerpt quoted above
does not describe a market-driven system.




