1993-02-09 - Re: Compressed/Encrypted Voice using Modems

Header Data

From: Phiber Optik <phiber@eff.org>
To: jim@tadpole.com (Jim Thompson)
Message Hash: fb7ae7e16ca0b2f8d155bb1b14b862e5c3db935f8350ac099ad62294b5296831
Message ID: <199302090915.AA07775@eff.org>
Reply To: <9302081750.AA02290@ono-sendai>
UTC Datetime: 1993-02-09 09:17:08 UTC
Raw Date: Tue, 9 Feb 93 01:17:08 PST

Raw message

From: Phiber Optik <phiber@eff.org>
Date: Tue, 9 Feb 93 01:17:08 PST
To: jim@tadpole.com (Jim Thompson)
Subject: Re:  Compressed/Encrypted Voice using Modems
In-Reply-To: <9302081750.AA02290@ono-sendai>
Message-ID: <199302090915.AA07775@eff.org>
MIME-Version: 1.0
Content-Type: text/plain


> 
> So, what is a 'typical' S/N ratio for a POTS call?
> 
> 

Good question.  By the way, I think I may have slipped and reversed my units
in signal to noise ratios.  A minor typo.  The signal and noise are in dBm
(decibels referenced to one milliwatt), and the resultant S/N ratio is in
plain dB (decibels).
A little background info: the ideal voice frequency channel has a FLAT
amplitude/frequency response, that is, it's uniform over the pass-band
(approx. 0.3 to 3kHz).  In reality, this isn't the case, but we want it to
be as close as possible.  In North America, we test signal level at 1kHz
(precisely 1004Hz).  If we input a signal at -10dBm, we want -10dBm at the
output.  The common type of test-line in the phone system for this purpose
is nicknamed the "milliwatt test", and is a continuous 1004Hz tone.
Depending on the nature of the channel being tested, there are acceptable
guidelines that would have to be met.  For example, a typical S/N value
might be 40dB, based on customer satisfaction of line quality.
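
To put numbers on the units, here's a quick Python sketch (the noise figure
is made up, just to show the arithmetic):

    # Signal and noise are absolute levels in dBm (dB referenced to 1 milliwatt);
    # their difference is the S/N ratio in plain dB.  Values below are hypothetical.

    def dbm_to_mw(level_dbm):
        """Convert an absolute level in dBm to milliwatts."""
        return 10 ** (level_dbm / 10.0)

    def snr_db(signal_dbm, noise_dbm):
        """S/N in dB is just the difference of the two absolute levels."""
        return signal_dbm - noise_dbm

    signal = -10.0   # the 1004Hz test tone, sent and (ideally) received at -10dBm
    noise = -50.0    # hypothetical channel noise level
    print(dbm_to_mw(signal))      # 0.1 (milliwatts)
    print(snr_db(signal, noise))  # 40.0, the "customer satisfaction" figure above
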
Another common test-line in wide use is the type-105 ATMS (Automatic
Transmission Measurement System), nicknamed "responders".  Signalled with
Multi-Frequency (MF) tones, it can send test signal at 404, 1004, and 2804Hz
at two different levels (for comparative S/N ratios at the low, middle, and
top portions of the passband, a major improvement on the older "sweep tone"
method), and measure two types of noise (again, if I remember right, at two
levels), the most common being C-message noise (dBrnC).
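To illustrate the comparative part, here's a rough Python sketch of what you
do with the three received levels (all the numbers below are made up):

    # Compare the received level at the low, middle, and top of the passband
    # against the 1004Hz reference.  All levels below are hypothetical.
    sent_dbm = -16.0                                       # same transmit level each time
    received_dbm = {404: -17.1, 1004: -16.0, 2804: -18.4}  # made-up measurements

    for freq_hz, level_dbm in received_dbm.items():
        loss_db = sent_dbm - level_dbm             # loss at this frequency
        slope_db = level_dbm - received_dbm[1004]  # deviation from the 1004Hz reference
        print(f"{freq_hz}Hz: loss {loss_db:.1f}dB, {slope_db:+.1f}dB vs. 1004Hz")
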
C-message weighting is the modern means of measuring signal and noise
amplitude/frequency response, based on today's telephone handsets (there were
two previous major Western Electric weightings, 144 and F1A, both now
obsolete).  The standard zero reference was established by picking the level,
at the reference frequency (1004Hz), where a tone is JUST discernible to the
human ear; that level falls between -85 and -90dBm, which is why the derived
noise units come out positive.  We interpret a noise measurement knowing that
zero reference (ideally 0dB difference at the reference frequency) and the
weighting characteristics of the standard C-message telephone handset.
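If you take the -90dBm end of that range as the zero reference (the usual
definition of dBrn), converting a dBrnC noise reading back into the same
units as the signal is just arithmetic; a sketch:

    # Assuming a zero reference of -90dBm for noise, dBrn is "dB above
    # reference noise", so the conversion is a simple offset.
    REF_NOISE_DBM = -90.0

    def dbrnc_to_dbm(noise_dbrnc):
        """A noise reading in dBrnC expressed as an absolute level in dBm."""
        return noise_dbrnc + REF_NOISE_DBM

    def snr_db(signal_dbm, noise_dbrnc):
        """S/N in dB from a signal in dBm and a noise reading in dBrnC."""
        return signal_dbm - dbrnc_to_dbm(noise_dbrnc)

    print(dbrnc_to_dbm(30.0))   # 30dBrnC of noise sits at -60dBm
    print(snr_db(-10.0, 30.0))  # a -10dBm signal over that noise gives 50dB S/N
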
Unfortunately, I can't draw you a chart, but there is a characteristic
frequency-response curve for C-message weighting of channel noise, and noise
measurement instruments have artificial filters that simulate the response of
the modern handset.
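To give a feel for what that artificial filter is doing, here's a toy Python
sketch; the curve points are illustrative placeholders, not the published
C-message table:

    import math

    # Weight each band's noise by the handset-response curve, then power-sum.
    # Both tables below are made-up placeholder values, NOT real measurements.
    weighting_db = {300: -10.0, 600: -3.0, 1000: 0.0, 2000: -1.0, 3000: -5.0}
    noise_dbm = {300: -65.0, 600: -63.0, 1000: -62.0, 2000: -64.0, 3000: -66.0}

    def weighted_noise_dbm(noise, weights):
        """Apply the weighting per band and sum the resulting noise powers."""
        total_mw = sum(10 ** ((noise[f] + weights[f]) / 10.0) for f in noise)
        return 10.0 * math.log10(total_mw)

    print(f"weighted noise: {weighted_noise_dbm(noise_dbm, weighting_db):.1f}dBm")
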
Am I making any sense?  I hope I am.  What I'm getting at is that the
acceptable guidelines for signal and noise levels are simply based upon a
chosen standard handset sensitivity.  Got it?  C'mon!  It's easy!
