1994-04-26 - Re: Synthetic Apertures to Increase Resolution

Header Data

From: dat@spock.ebt.com (David Taffs)
To: tcmay@netcom.com
Message Hash: b63d9fe32c761baa7c6936414dd7b9c2816e07729f8c2b84fd2d6aaeb5c46244
Message ID: <9404262341.AA01385@helpmann.ebt.com>
Reply To: <199404262213.PAA21727@netcom.com>
UTC Datetime: 1994-04-26 23:42:21 UTC
Raw Date: Tue, 26 Apr 94 16:42:21 PDT

Raw message

From: dat@spock.ebt.com (David Taffs)
Date: Tue, 26 Apr 94 16:42:21 PDT
To: tcmay@netcom.com
Subject: Re: Synthetic Apertures to Increase Resolution
In-Reply-To: <199404262213.PAA21727@netcom.com>
Message-ID: <9404262341.AA01385@helpmann.ebt.com>
MIME-Version: 1.0
Content-Type: text/plain



   From: tcmay@netcom.com (Timothy C. May)

   > 
   > Could the same effect (as a segmented mirror) be achieved by taking multiple
   > pictures (from the same mirror) and processing them together? E.g. does
   > synthetic aperture radar actually produce higher resolution than achievable
   > from a single "snapshot"? If so, then this might work (at least for slow-moving
   > targets :-)...

   > dat@ebt.com (David Taffs)

   Yes, but the positional accuracy required (on the order of the
   wavelength) would be prohibitive to achieve. (Such things may be
   possible for the NRO's DSP (more acronym overloading: DSP stands for
   Defense Support Program) satellites to implement. I haven't heard any
   speculations that this is actually being done.)

   Synthetic Aperture Radar is feasible because the wavelengths are so
   much larger.

   The new Keck Telescope will eventually use a second telescope, now
   under construction, located some distance away, for very long baseline
   interferometry...I have no idea if it can be made to work as an actual
   synthetic aperture. Jay Freeman may know.

I wasn't thinking so much of interferometry techniques (although my
reference to synthetic aperture radar certainly implies them), but rather
of something on the order of a filter which might work (independent of the
wavelength of light) as follows:

Take, for example, the square-box pixelation (is this the right word
here?) used to blot out people's faces on TV sometimes. Put a long
(preferably continuous) series of images into the computer, and build
a model of the movement of the person's head (the camera isn't
perfectly still; assume, however, that the person does stay
still). Use the data about how adjacent pixels change over time to
improve the model of what the person's face really looks like.

This is independent of the wavelength of light -- it does of course
depend on the resolution of the square pixels used to blot out the person's
face, but not particularly on the wavelength or resolution of the
camera (assuming the camera is much better than the square blotches).

I first noticed this effect watching Court TV's coverage of the
William Kennedy Smith rape trial (I was home sick at the time), while
the victim testified. I felt that as the person (and camera) moved
around, I could gradually form a better impression of what the person
looked like than the square blotches alone provided, by noting when
and how the (macro-)pixels changed.

Of course, just filtering a single frame would be better than looking
at the sharp-edged squares. I'm talking about averaging all these
filtered images over time, compensating for movement of the camera and
subject. It seems to me that over a long enough time, given enough
movement to provide the resolution, and perhaps using more
sophisticated mathematics than just averaging (although plain averaging
seems like the right basic operation here), you could eventually
recover a real photographic-quality image of the person.
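
To make the averaging step concrete, here is a minimal modern sketch,
assuming a static subject, a blotch grid fixed in camera coordinates, and
exactly known integer camera shifts per frame. The names (pixelate,
reconstruct, BLOCK) and the use of Python/numpy are mine for illustration,
not anything Court TV actually does. Note that plain averaging only
recovers detail up to roughly a box blur one blotch wide; getting all the
way to a sharp image would take the "more sophisticated mathematics"
mentioned above, e.g. deconvolution.

    import numpy as np

    BLOCK = 8  # side of one square macro-pixel ("blotch"), in real pixels

    def pixelate(img, block=BLOCK):
        """Blot out an image: replace each block x block tile with its mean."""
        out = img.astype(float).copy()
        h, w = img.shape
        for y in range(0, h, block):
            for x in range(0, w, block):
                out[y:y+block, x:x+block] = img[y:y+block, x:x+block].mean()
        return out

    def reconstruct(face, shifts):
        """Simulate blotched frames seen by a moving camera, then average
        them back in the subject's frame of reference.  'shifts' is a list
        of integer (dy, dx) camera offsets, one per frame; np.roll stands
        in for real motion-compensated registration."""
        acc = np.zeros_like(face, dtype=float)
        for dy, dx in shifts:
            frame = pixelate(np.roll(face, (dy, dx), axis=(0, 1)))
            acc += np.roll(frame, (-dy, -dx), axis=(0, 1))
        return acc / len(shifts)

    # Usage: with shifts covering many sub-block offsets, the average shows
    # far more detail than any single blotched frame.
    #   shifts = [(dy, dx) for dy in range(BLOCK) for dx in range(BLOCK)]
    #   estimate = reconstruct(face, shifts)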

This process might be similar to CAT scans, where a lot of
low-resolution "pictures" are combined to create a high-resolution
image, except the distribution would be temporal rather than spatial.

ObCryptoJustification:

I think this is relevant to c'punks, because it involves decryption of
an encrypted signal (recovering the face of a person when it has been
intentionally distorted). Does this mean that if people like Court
TV really want to obscure people's faces, they need to add crypto-secure
noise instead of just averaging the micro-pixels into macro-pixels?
I think so!
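
As a hedged sketch of that alternative (the function name and block size
are mine, purely illustrative): instead of publishing the average of the
covered pixels, which leaks a little of the face into every frame, fill
each macro-pixel with a value drawn from a cryptographically strong RNG.
The broadcast blocks then carry no information about the face at all, so
temporal averaging converges to flat noise rather than to the person.

    import secrets
    import numpy as np

    def blotch_random(img, block=8):
        """Replace each block x block tile with a fresh random gray level
        from the OS CSPRNG; the output is independent of the covered
        pixels, so no amount of frame averaging recovers the face."""
        out = img.astype(float).copy()
        h, w = img.shape
        for y in range(0, h, block):
            for x in range(0, w, block):
                out[y:y+block, x:x+block] = secrets.randbelow(256)
        return out

Merely adding a small amount of noise on top of the averaged value would
not be enough, since averaging many frames would wash the noise back out.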

ObRandomOtherThreadWithMarginalCryptoJustificationButInReplyToOtherCpunksMsgs:
and also
ObAdditionalMetaDiscussionAboutWhatIsAppropriateForThisList:

I also thought the license plate joke was definitely relevant to
c'punks, because it was actually a code, where the cleartext domain
was conceptual rather than textual, just as this mail talks about a
domain of 2-space (or 3-space) images rather than text. Also, the
fact that the "plaintext" was actually a pun involving multiple coding
schemes makes it relevant to this list IMHO. Finally, I think short
humor is appropriate for any list, at least if it is both funny and
computer-related, though I admit that may be stretching it for some here.

I assume that coding (as distinguished from ciphering) is indeed relevant to
this list...

-- dat@ebt.com (David Taffs)



