Monday, May 10, 2021

Into Something Rich and Strange


About 30 years ago [1] I encountered aerial photographs that were captivating, rich and strange. The photographs were acquired with a camera geometry similar to that of a flash-attached pinhole camera. The light flashed, reflected off the surface of the Earth, and then returned to the camera, all via straight-line paths. But such an image didn’t look at all like it came from a pinhole camera. Most of the spatial features were familiar, but long black shadows appeared between them. We don’t usually see cast shadows in flash-attached-camera images, because any object producing such a shadow hides it from sight. That’s why the contrast is so flat and unappealing in photographs from an old-style flash camera. But the new image broke that rule, exhibiting shadows as if cast by a setting sun: a romantic image, as it were.


What were these strange cameras? To answer the “what” question, I must first answer “why,” and that will break the romantic thrall. In the last century, interest in viewing the Earth from space was beset by the problem that most of the Earth is having a cloudy day just now (for any now). To see through the clouds, you need to use light with long wavelengths. Microwaves worked, and they became the basis of imaging radar systems.


Could you make a pinhole camera system with a microwave light source? No—you couldn’t focus the beam or tell where it was coming from when it returned. The designers of this camera had to give up on the conventional idea of capturing on a flat film the direction of a viewed object on the Earth (called a world point). Instead, they did a clever thing. They flashed a complicated microwave pulse (called a chirp) in all directions, and then captured the reflected returns. They sorted the light-intensity returns according to their time delay from the source (proportional to the range of the world point) and also according to their time scaling (the Doppler shift). Oh yes, I must mention that the new camera had to be moving relative to the Earth, and its velocity had to be known, whereupon this second piece of information became proportional to the cosine of the angle between the vehicle’s direction of motion and the direction to the world point. If you know the range of a world point, that locates it on a sphere centered on the camera. If you know the Doppler shift and the vehicle velocity, you know the angle between the velocity and the direction to the world point, and that places the world point on a cone with its vertex at the camera. Knowing the world point’s range sphere and Doppler cone means that you have identified a circle in space on which the world point must lie. Such circles are called projection circles.
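The geometry above can be sketched in a few lines of code. Here is a minimal Python illustration (the function name, coordinates, and units are my own, not from the column): the projection circle is the intersection of the range sphere with the Doppler cone, so every point on it sits at the same range from the sensor and at the same angle from the velocity.

```python
import numpy as np

def projection_circle(p, v, R, cos_theta, n=100):
    """Sample points on a SAR projection circle: the intersection of the
    range sphere |X - p| = R with the Doppler cone (X - p).v_hat = R*cos_theta,
    for sensor position p and velocity v.  (Illustrative parameterization.)"""
    v_hat = v / np.linalg.norm(v)
    # The circle's center lies along the velocity axis; its radius
    # follows from Pythagoras on the range sphere.
    center = p + R * cos_theta * v_hat
    r = R * np.sqrt(1.0 - cos_theta**2)
    # Build an orthonormal basis (e1, e2) for the plane perpendicular to v_hat.
    a = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(a, v_hat)) > 0.9:
        a = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(v_hat, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(v_hat, e1)
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return center + r * (np.outer(np.cos(t), e1) + np.outer(np.sin(t), e2))

# Every sampled point shares one range and one Doppler cosine:
p = np.array([0.0, 0.0, 700.0])   # hypothetical sensor position (km)
v = np.array([1.0, 0.0, 0.0])     # hypothetical sensor velocity direction
X = projection_circle(p, v, R=900.0, cos_theta=0.3)
ranges = np.linalg.norm(X - p, axis=1)
cosines = (X - p) @ (v / np.linalg.norm(v)) / ranges
```

Every point on the sampled circle returns the same range and the same Doppler cosine, which is exactly why a single SAR measurement cannot single out the world point on its circle.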


Now let’s return to a comparison of our new camera with a pinhole camera. If you look at a world point along a line of sight through a pinhole camera, you can tell what line the world point is on (identified by direction), but you can’t tell how far along the line the world point resides. If you look at a world point with the new camera, you know which projection circle the world point is on, but you can’t tell which point on the circle the world point occupies. Somehow in this imaging system, even though the light still travels in straight lines, the locus of 3D locations consistent with a single 2D image point is a circle, not one of those straight lines. The romantic thrall has ended, but for me the mathematical thrall has begun!


The new camera, by the way, is called a synthetic-aperture-radar (SAR) system [2]. And that brings me to another comparison with a conventional camera. Instead of ending up in a light-sensitive medium such as film, the SAR’s returns enter a localized receiver and are mathematically sorted by their coordinates (range and Doppler) into a structure called a synthetic aperture. That aperture does not correspond to any physical object; it is a mathematical construct in 3D. It’s not so strange, really. That kind of structure is common in holography, hence the term “quasi-holographic” that is used to describe SAR technology.


Of course, you will need to know how the conventional and new cameras work together to reconstruct the three dimensions of a world point. The answer is: quite well. It is common [3] to solve for a 3D point using a camera image and a SAR image (see Fig. 1). The process is similar to triangulation as used by pairs of conventional cameras.
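As a rough sketch of that triangulation (not the actual direct-linear-transformation algorithm of Ref. [1], which is more general), one can intersect the camera’s line of sight with the SAR range sphere and keep the intersection consistent with the Doppler cone. All names, coordinates, and tolerances below are illustrative assumptions of mine:

```python
import numpy as np

def triangulate(c, d, p, v, R, cos_theta, tol=1e-6):
    """Locate a world point from a camera ray X = c + t*d (t >= 0) and a SAR
    measurement: range sphere |X - p| = R plus Doppler cone
    (X - p).v_hat = R*cos_theta.  A hypothetical sketch, not Ref. [1]'s method."""
    d = d / np.linalg.norm(d)
    v_hat = v / np.linalg.norm(v)
    # Quadratic in t from |c + t*d - p|^2 = R^2.
    w = c - p
    b = np.dot(d, w)
    disc = b * b - (np.dot(w, w) - R * R)
    if disc < 0:
        return None                       # ray misses the range sphere
    for t in (-b - np.sqrt(disc), -b + np.sqrt(disc)):
        if t < 0:
            continue
        X = c + t * d
        # Keep the sphere intersection that also lies on the Doppler cone.
        if abs(np.dot(X - p, v_hat) - R * cos_theta) < tol * R:
            return X
    return None

# Round trip: synthesize range and Doppler from a known point, then recover it.
p = np.array([0.0, 0.0, 700.0])       # SAR sensor position
v = np.array([1.0, 0.0, 0.0])         # SAR sensor velocity direction
X_true = np.array([300.0, 200.0, 0.0])
R = np.linalg.norm(X_true - p)
cos_theta = np.dot(X_true - p, v) / R
c = np.array([-100.0, -100.0, 500.0]) # camera center
d = X_true - c                        # camera line of sight
X = triangulate(c, d, p, v, R, cos_theta)
```

The round trip recovers the world point, mimicking how a camera line and a SAR projection circle pin down a single 3D location.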


Now some of you may wonder where the shadows enter all of this. The straight-line light propagation certainly leaves cast shadows, and these shadows occupy noticeable area in a SAR image (e.g., do a Web search for “SAR image example”). The pixels there are dark because the reconstruction algorithm directs no reflected intensity to them. In SAR parlance these dark regions are simply called shadow (not to be confused with layover, a different geometric distortion in which the tops of tall objects are displaced toward the sensor). It’s ho-hum and official. Yet somehow, I sense we are “into something rich and strange,” to offer an Ariel perspective.


This brings me to my final point. I don’t believe artists have yet explored SAR technology as a medium for expression. So, following the lead of Anish Kapoor as described by Carl Jennings’s essay in this issue, I hereby deny anybody but me the right to use SAR in art. So there, IP attorneys!


Note: This essay is dedicated to Dr. Eamon B. Barrett, my long-time friend and collaborator in imaging mathematics, who passed away March 30 after a long illness. MHB



[1] Brill M, Triangulating from optical and SAR images using direct linear transformations, Photogram. Eng. & Rem. Sensing 53 (1987), 1097-1102.

[2] Ausherman, DA, et al., Developments in radar imaging, IEEE Trans. Aerospace & Electronic Sys. AES-20 (1984), 363-400.

[3] Qiu C, Schmitt M, Zhu X, Towards automatic SAR-optical stereogrammetry over urban areas using very high resolution imagery, ISPRS J. Photogram. & Remote Sens. 138 (2018), 218-231.


Michael H. Brill




Fig. 1. Triangulation of a world point X as the intersection of camera line-of-sight L and SAR projection circle C. The quantity w2 is the velocity of the SAR sensor, and image points Y1 and Y2 are camera and SAR images of X. [adapted from Ref. 1]

Tuesday, February 16, 2021

Ruminations on Eating Photons

Michael H. Brill, Datacolor

(Send contributions to )

Plants see photons.

People see photons.

Plants eat photons.

Do people eat photons? I suspect not. It would be too light a diet. 

The above was my first reaction to Carl Jennings’s latest column, “Eating Color: Color Perception in Plants” [ISCC News # 492 (2020), pp. 5-8]. Carl wrote from the viewpoint of an artist who embodied the title metaphor in his works. I, of course, tend to pursue more technical implications—starting with a joke. And, unlike all the trite photon jokes I had seen on the Internet, this one seemed to have a serious teaching point.

Let’s start with the seeing of photons. The chemistry of vision involves amplifying a rather weak photon signal (weak because it must be divided up in space, time, and spectrum), and the agent of the amplification is the discharge of a battery. When the battery is discharged, it must be recharged (using a lot of metabolic energy) before it can be used again. (Sometimes, as with retinal rods, the battery gets discarded and replaced, not recharged.) Seeing, either by plants or by animals, involves treating the photon as a signal and amplifying that signal chemically. Any vision system, plant or animal, uses energy by combining oxygen with other elements; hence respiration is a prerequisite for seeing.

Now let’s proceed to the eating of photons. Photosynthesis also has a battery similar to vision’s, but the energy flows the other way. Not only the photo-active material but the whole organism increases in mass and energy as a result of the incident photon energy: carbon adds to the mass of the organism, and oxygen is released.

I’ve just described the eating of photons by plants. Do animals eat photons in the same way? No, and I think the reason is that animals cannot use photons as a direct energy source. They have to eat in other ways, which are familiar to us. Photons are too light a diet to sustain animals directly. [One must note a small exception to this rule: the creation of Vitamin D by the Sun’s UV radiation on skin.]

I published a little about this subject in ISCC News # 427 (2007), p.7: “Power to the Pupil” (not a Hue Angles column). There my main focus was the creation of batteries using rather large amounts of visual pigment from animals, and also the design of solar cells using principles very similar to those used in certain cameras.

Some of you might complain at this point about my colloquialisms: “seeing photons” in place of “information-processing an electromagnetic signal,” and “eating photons” instead of “transmuting electromagnetic power into stored energy.” In anticipation of such a complaint, I can only say that less colloquial language might deny me immortality in the immense archive of photon jokes that persists on the Internet, ready to be found by a simple search. Enjoy.


[Note: I recently learned that Ronald Penrod passed away August 12, 2020 at the age of 80. Ron was a pioneer in digital colorant formulation. His color-science career (1965-1991) was at Uniroyal Inc. (previously U.S. Rubber), where he added commentary and software to Ray Winey’s original 1962 report on the colorant-formulation method. This contribution was discussed in Bill Longley’s Hue Angles essay, “Color Creeks Ray Winey has Found People Up.” MHB]