About 30 years ago [1],
I encountered aerial photographs that were captivating, rich and strange. The
photographs were acquired with a camera geometry similar to that of a flash-attached
pinhole camera. The light flashed, reflected off the surface of the Earth, and
then returned to the camera, all via straight-line paths. But such an image
didn’t look at all like it came from a pinhole camera. Most of the spatial
features were familiar, but long black shadows appeared between them. We don’t usually
see cast shadows in flash-attached-camera images, because any object producing
such a shadow hides it from sight. That’s why the contrasts are so low and
unappealing in photographs from an old-style flash camera. But the new image
broke that rule, exhibiting shadows as if they were cast by a setting sun in
the evening: a romantic image, as it were.
What were these
strange cameras? To answer the “what” question, I must first answer “why,” and
that will break the romantic thrall. In the last century, interest in viewing the
Earth from space was beset by the problem that most of the Earth is having a
cloudy day just now (for any now). To see through the clouds, you need to use
light with long wavelengths. Microwaves worked, and they became the basis of
imaging radar systems.
Could you make a
pinhole camera system with a microwave light source? No: at such long wavelengths you couldn’t focus the beam through a small aperture, or tell which direction it was coming from when it returned. The designers of this
camera had to give up on the conventional idea of capturing on a flat film the
direction of a viewed object on the Earth (called a world point). Instead,
they did a clever thing. They flashed a complicated microwave pulse (called a
chirp) in all directions, and then captured reflected returns. They sorted the light-intensity
returns according to their time delay from the source (proportional to the range of the world point) and also according to their time scaling (proportional to the Doppler shift). Oh yes, I must mention that the new camera had to be moving relative to the Earth, and its velocity had to be known, whereupon this second piece of information became proportional to the cosine of the angle between the vehicle’s direction of motion and the direction to the world point. If you know the range of a world
point, then that locates it on a sphere centered on the camera. If you know the
vehicle direction, then you know the angle between the velocity and the
direction to the world point, and that places the world point on a cone with
its vertex at the camera. Knowing the world point’s range sphere and Doppler
cone means that you have identified a circle in space on which the world point
must lie. Such circles are called projection circles.
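For readers who like to see the geometry spelled out, here is a small sketch in Python (my own illustration, not anything taken from Ref. 2) of the two quantities the SAR measures for a world point: the range, recovered from time delay, and the cosine of the cone angle, recovered from the Doppler shift.

    import numpy as np

    def sar_measurements(X, s, v):
        """Range-sphere radius and Doppler-cone angle cosine for a world point X,
        as seen by a sensor at position s moving with velocity v."""
        d = np.asarray(X, float) - np.asarray(s, float)      # sensor-to-point vector
        r = np.linalg.norm(d)                                # range, from time delay
        cos_theta = np.dot(v, d) / (np.linalg.norm(v) * r)   # from the Doppler shift
        return r, cos_theta

    # Example: sensor at the origin flying along +x at 7 km/s, world point 10 km away.
    r, c = sar_measurements(X=[6e3, 8e3, 0.0], s=[0.0, 0.0, 0.0], v=[7e3, 0.0, 0.0])
    print(r, c)   # 10000.0 0.6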
Now let’s return to a
comparison of our new camera with a pinhole camera. If you look at a world point
along a line of sight through a pinhole camera, you can tell what line the
world point is on (identified by direction), but you can’t tell how far along
the line the world point resides. If you look at a world point with the new
camera, you know which projection circle the world point is on, but you can’t
tell which point on the circle is occupied by the world point. Somehow in this
imaging system, even though the light still travels in straight lines, the part
of the 3D world point location that is inaccessible on a 2D image is a circle
and not one of those straight lines. The romantic thrall has ended, but for me
the mathematical thrall has begun!
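To make that ambiguity tangible, here is a companion sketch (again my own, using the same assumed coordinates as before) that walks around a single projection circle; every point it returns reproduces exactly the same range and cone-angle cosine, so a lone SAR image cannot tell them apart.

    import numpy as np

    def projection_circle(s, v, r, cos_theta, n=8):
        """Sample n points where the range sphere meets the Doppler cone."""
        s = np.asarray(s, float)
        u = np.asarray(v, float) / np.linalg.norm(v)     # unit velocity direction
        e1 = np.cross(u, [0.0, 0.0, 1.0])                # a vector perpendicular to u
        if np.linalg.norm(e1) < 1e-12:                   # u happened to be vertical
            e1 = np.cross(u, [0.0, 1.0, 0.0])
        e1 /= np.linalg.norm(e1)
        e2 = np.cross(u, e1)                             # completes the orthonormal frame
        sin_theta = np.sqrt(1.0 - cos_theta**2)
        phis = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        return [s + r * (cos_theta * u + sin_theta * (np.cos(p) * e1 + np.sin(p) * e2))
                for p in phis]

    # Feeding any of these points back into sar_measurements gives the same (r, cos_theta).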
The new camera, by
the way, is called a synthetic-aperture-radar (SAR) system [2]. And that brings
me to another comparison with a conventional camera. Instead of ending up in a
light-sensitive medium such as film, the SAR’s rays enter a localized receiver
and are mathematically sorted to provide coordinate locations (range and Doppler) in a structure called a synthetic aperture. The resulting image plane does not correspond to a physical object, but is a mathematical construct in 3D. It’s not so strange, really. That kind of structure is common in
holography, hence the term “quasi-holographic” that is used to describe the SAR
technology.
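The sorting itself can be pictured as plain bookkeeping: each echo is filed by its time delay and its Doppler shift. Real SAR processors do this with matched filtering of the chirp, but the toy version below (my own simplification) conveys the idea of building up the range-Doppler plane.

    import numpy as np

    def range_doppler_image(delays, dopplers, intensities, delay_edges, doppler_edges):
        """Accumulate echo intensities into bins of (time delay, Doppler shift)."""
        img, _, _ = np.histogram2d(delays, dopplers,
                                   bins=[delay_edges, doppler_edges],
                                   weights=intensities)
        return img   # rows index range (delay), columns index Doppler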
Of course, you will need
to know how the conventional and new cameras work together to reconstruct the
three dimensions of a world point. The answer is: quite well. It is common
[3] to solve for a 3D point using a camera image and a SAR image (see Fig. 1). The
process is similar to triangulation as used by pairs of conventional cameras.
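To see how naturally the two geometries mesh, here is one last sketch, a simplified stand-in for (not a reproduction of) the direct-linear-transformation solution of Ref. 1: the camera contributes a ray, the SAR contributes a projection circle, and the world point falls where they meet.

    import numpy as np

    def triangulate(c, d, s, v, r, cos_theta):
        """Intersect the camera ray X = c + t*d with the SAR projection circle,
        which lies in the plane u.(X - s) = r*cos_theta of the sphere |X - s| = r."""
        c, d, s = (np.asarray(a, float) for a in (c, d, s))
        d = d / np.linalg.norm(d)                          # camera line-of-sight direction
        u = np.asarray(v, float) / np.linalg.norm(v)       # SAR velocity direction
        t = (r * cos_theta - np.dot(u, c - s)) / np.dot(u, d)   # ray meets the circle's plane
        X = c + t * d
        assert abs(np.linalg.norm(X - s) - r) < 1e-6 * r   # and must sit on the range sphere
        return X

    # Example: SAR at the origin flying along +x; a camera 5 km overhead looks toward
    # the world point from the first sketch.
    X = triangulate(c=[0.0, 0.0, 5e3], d=[6e3, 8e3, -5e3], s=[0.0, 0.0, 0.0],
                    v=[7e3, 0.0, 0.0], r=10e3, cos_theta=0.6)
    print(X)   # approximately [6000. 8000. 0.]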
Now some of you may wonder
where the shadows enter all of this. The straight-line light propagation
certainly leaves cast shadows, but these shadows occupy noticeable area in a
SAR image (e.g., search the web for “SAR image example”). The pixels there are dark because no reflected intensity is assigned to them by the image-formation algorithm. In SAR parlance these dark regions are simply called radar shadow; a related quirk, layover, displaces the tops of tall objects toward the sensor. It’s ho-hum and official. Yet somehow, I sense we are “into
something rich and strange,” to offer an Ariel perspective.
This brings me to my final point. I don’t believe artists
have yet explored SAR technology as a medium for expression. So, following the
lead of Anish Kapoor as described in Carl Jennings’s essay in this issue, I hereby
deny anybody but me the right to use SAR in art. So there, IP attorneys!
Note: This essay is dedicated to Dr. Eamon B. Barrett, my
long-time friend and collaborator in imaging mathematics, who passed away March
30 after a long illness. MHB
[1] Brill M, Triangulating from optical and SAR images using direct linear transformations, Photogram. Eng. & Rem. Sensing 53 (1987), 1097-1102.
[2] Ausherman DA et al., Developments in radar imaging, IEEE Trans. Aerospace & Electronic Sys. AES-20 (1984), 363-400.
[3] Qiu C, Schmitt M, Zhu X, Towards automatic SAR-optical stereogrammetry over urban areas using very high resolution imagery, ISPRS J. Photogram. & Remote Sens. 138 (2018), 218-231.
Michael H. Brill
Datacolor
Fig. 1. Triangulation of a world point X
as the intersection of camera line-of-sight L and SAR projection circle C. The quantity
w2 is the velocity of the SAR sensor, and image points Y1
and Y2 are the camera and SAR images of X. [adapted from
Ref. 1]