Thursday, November 4, 2021

If CIECAM is the answer, what was the question?

 (Send contributions to mbrill@datacolor.com )

 

Imagine that the ghost of Alex Trebek, studying in preparation for his next life as a color scientist, visits us in his former role and announces his truly Final Jeopardy answer:

“CIECAM.”  

 

The contestants blink and Trebek explains:

“If CIECAM is the answer, what was the question?”

And the contestants answer:

 

Contestant 1: “What model predicts symmetric color matches?” WRONG: That was CIEXYZ.

Contestant 2: “What model predicts asymmetric color matches?” WRONG: That was CIECAT.

Contestant 3: “What model predicts color difference?” WRONG: That was CIECAM-UCS.

Contestant 4: “What model allows a stimulus, in given viewing conditions, to be numerically described with correlates of perceptual attributes such as brightness, lightness, colorfulness, chroma, and hue?” [1] CORRECT: although the CIE’s color-appearance models, the CIECAMs, are not the only possible models.

 

Contestant 2: “That’s not fair! I’ve seen CIECAMs tested by asymmetric matches, but never by the elusive ‘numerically described perceptual attributes.’”

 

Contestant 3: “Well, come to think of it, Luo et al. [2] describe experiments to test people’s ability to use particular perceptual attributes: ‘For the memory matching method, observers are first trained using the Munsell colour order system (or some other suitable system) until they are very familiar with these scales (i.e., Munsell Value, Chroma, and Hue) … In the magnitude estimation method, observers are asked to make estimates of the magnitudes of some perceptual attributes (e.g., lightness, colourfulness, and hue). It is essential that each observer clearly understands the perceptual attributes being scaled.’”

 

Contestant 2: “It sounds as if those experiments tested the memorability and amenability to scaling of particular coordinates of a particular color-order system. They cannot make a statement about color appearance independent of the color-order coordinates chosen for training the subjects. How do you know one CAM is better than another if the subjects’ training has such a bias? And I understand the precision of these tests is pretty low. I still think there is no match-free way to test a CAM—or for that matter, to use a CAM for color management. Alex is wrong and we should have a recount.”

 

Trebek: “Well, it’s time for me to go now. This discussion is turning into a quagmire, and it looks like real color-management systems rely on asymmetric match predictions anyway. So let’s ask a professional organization like the ISCC to sort it out. Meanwhile, I’ll have to tell my game-show successor that the right question for CIECAM is ‘What color-management model is not out of Jeopardy?’”

 

[1] M. D. Fairchild and L. Reniff, A pictorial review of color appearance models, 1997 SID/IS&T Color Imaging Conference, first paragraph of Introduction.

[2] M. R. Luo et al., Quantifying colour appearance part I. LUTCHI colour appearance data. Color Res Appl 16: 168-180 (1991).

 

Michael H. Brill

Datacolor


Tuesday, August 17, 2021

Color-Coding the Pandemic

 

Michael H. Brill, Datacolor

(Send contributions to mbrill@datacolor.com )

Each of us has a different life story through the pandemic. My story does not include the uneasy “new normal” experienced by students in school. Part of that “new normal” requires students to attend school on staggered part-time schedules. How did kids react to this complication? Out of curiosity, I Googled my old high-school newspaper, the Brentwood Pow Wow. (Yes, the Native American name remains.) Immediately a web page appeared with an article from their April Fools’ edition: “Satire: Crayola Box Plan to Replace Original Tri-color Hybrid Plan”; author, Lilian Velasquez; dateline, 24 March 2021. This was going to be about color coding, about the resilience of young people, and maybe more.

The school had seen fit to illuminate the monthly calendar with color-coded parts to clarify three alternative student schedules. That was the original Tri-color plan. Ms. Velasquez started with a calendar illustrating the Tri-color plan (using the first three entries on the list below), and then “sprinkled in” the rest:

Teal: Fully remote students  
Gold: Hybrid students attend school on Tuesdays and Fridays, and alternating Wednesdays.
Green: Hybrid students attend school on Mondays, Thursdays, and alternating Wednesdays.
Chocolate: Attend school 9 times a year, on the first Monday of each month.
Cherry: Attend school every day for only 4 hours each day from 9 a.m. to 1 p.m.
Magenta: Attend school only on Fridays for 16 hours.
Indigo: Attend school on the weekends from 7 a.m. to 2 p.m. (Saturday and Sunday)
Silver: Attend school twice a month on the 7th and on the 21st.

Velasquez then showed a typical one-month calendar annotated with a delightfully confusing panoply of font colors: a scheme that might give new meaning to the term “drop-out colors.” It’s the kind of gentle extrapolation one expects from high-school students in an April Fools’ satire. I remember reading such extrapolations and writing them. The genre was grounded in acceptance of the normal. Now it is the “new normal.”

Before I went to Russia in 2008 to teach English as a Second Language (ESL), I heard that Russians would characteristically respond to a story of complaint and indignation by declaring, “It is normal.” My trip confirmed that assertion. I think that every time we reset the condition that we consider normal, we rewrite the past to conform. It is a coping mechanism, and it is helped along by writers.

In the same vein, Jorge Luis Borges said: “Every writer ‘creates’ his own precursors. His work modifies our conception of the past, as it will modify the future.” The better the writer, the more responsibility this incurs.

Still, the past is not easily erased. Brentwood High School retains Native American metaphors. Our media preserve other metaphors, as does our collective memory—sometimes unconsciously. Borges himself, with his quote, immortalizes his own present (and our recent past) by using “his” instead of “their” in describing a hypothetical writer.

It is a delicately balanced narrative that Ms. Velasquez entered as she wrote, extrapolating a color code for the “new normal.” She writes well, and her underlying optimism can encourage us all. I wish her the best as she extrapolates further—we hope from a better “new normal.”

And perhaps her new color code foretells a career as an artist or color scientist!
 

Michael H. Brill
BHS Class of 1965

Monday, May 10, 2021

Into Something Rich and Strange

 

About 30 years ago [1] I encountered aerial photographs that were captivating, rich and strange. The photographs were acquired with a camera geometry similar to that of a flash-attached pinhole camera. The light flashed, reflected off the surface of the Earth, and then returned to the camera, all via straight-line paths. But such an image didn’t look at all like it came from a pinhole camera. Most of the spatial features were familiar, but long black shadows appeared between them. We don’t usually see cast shadows in flash-attached-camera images, because any object producing such a shadow hides it from sight. That’s why the contrasts are so low and unappealing in photographs from an old-style flash camera. But the new image broke that rule, exhibiting shadows as if they were cast by a setting sun in the evening: a romantic image, as it were.

 

What were these strange cameras? To answer the “what” question, I must first answer “why,” and that will break the romantic thrall. In the last century, interest in viewing the Earth from space was beset by the problem that most of the Earth is having a cloudy day just now (for any now). To see through the clouds, you need to use light with long wavelengths. Microwaves worked, and they became the basis of imaging radar systems.

 

Could you make a pinhole camera system with a microwave light source? No—you couldn’t focus the beam or tell where it was coming from when it returned. The designers of this camera had to give up on the conventional idea of capturing on flat film the direction of a viewed object on the Earth (called a world point). Instead, they did a clever thing. They flashed a complicated microwave pulse (called a chirp) in all directions, and then captured the reflected returns. They sorted the light-intensity returns according to their time delay from the source (proportional to the range of the world point) and also according to their time scaling (that is, their Doppler shift). Oh yes, I must mention that the new camera had to be moving relative to the Earth, and its velocity had to be known, whereupon this second piece of information became proportional to the cosine of the angle between the vehicle’s direction of motion and the direction to the world point. If you know the range of a world point, then that locates it on a sphere centered on the camera. If you know the cosine of the angle between the velocity and the direction to the world point, then that places the world point on a cone with its vertex at the camera. Knowing the world point’s range sphere and Doppler cone means that you have identified a circle in space on which the world point must lie. Such circles are called projection circles.
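
For readers who want to see that geometry in symbols rather than words, here is a minimal numerical sketch in Python with NumPy. All of the positions, velocities, and function names are invented for illustration; this is not the processing chain of any real SAR.

import numpy as np

def sar_measurements(world_pt, sensor_pos, sensor_vel):
    # The two quantities the SAR records for a world point: its range (from the
    # pulse's time delay) and the cosine of the angle between the sensor velocity
    # and the line of sight (from the Doppler scaling of the return).
    los = world_pt - sensor_pos
    rng = np.linalg.norm(los)                          # range --> sphere about the sensor
    cos_angle = np.dot(los, sensor_vel) / (rng * np.linalg.norm(sensor_vel))
    return rng, cos_angle                              # cos_angle --> cone about the velocity

def projection_circle_point(sensor_pos, sensor_vel, rng, cos_angle, phi):
    # A point (parameterized by angle phi) on the projection circle:
    # the intersection of the range sphere and the Doppler cone.
    v_hat = sensor_vel / np.linalg.norm(sensor_vel)
    a = np.cross(v_hat, [0.0, 0.0, 1.0])               # assumes the velocity is not vertical
    a = a / np.linalg.norm(a)
    b = np.cross(v_hat, a)                             # a and b span the plane of the circle
    center = sensor_pos + rng * cos_angle * v_hat
    radius = rng * np.sqrt(1.0 - cos_angle ** 2)
    return center + radius * (np.cos(phi) * a + np.sin(phi) * b)

# Invented example: a moving sensor and one world point on the ground.
sensor_pos = np.array([0.0, 0.0, 5000.0])              # meters
sensor_vel = np.array([200.0, 0.0, 0.0])               # meters per second
world_pt = np.array([3000.0, 1000.0, 0.0])

rng, cos_angle = sar_measurements(world_pt, sensor_pos, sensor_vel)
for phi in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False):
    candidate = projection_circle_point(sensor_pos, sensor_vel, rng, cos_angle, phi)
    print(np.allclose(sar_measurements(candidate, sensor_pos, sensor_vel), (rng, cos_angle)))

The loop at the end confirms that every point on the projection circle produces the identical (range, cosine) pair, which is exactly the ambiguity described above.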

 

Now let’s return to a comparison of our new camera with a pinhole camera. If you view a world point along a line of sight through a pinhole camera, you can tell what line the world point is on (identified by its direction), but you can’t tell how far along that line the world point resides. If you view a world point with the new camera, you know which projection circle the world point is on, but you can’t tell which point on the circle the world point occupies. Somehow in this imaging system, even though the light still travels in straight lines, the part of the 3D world-point location that is left undetermined by the 2D image is a circle and not one of those straight lines. The romantic thrall has ended, but for me the mathematical thrall has begun!

 

The new camera, by the way, is called a synthetic-aperture-radar (SAR) system [2]. And that brings me to another comparison with a conventional camera. Instead of ending up in a light-sensitive medium such as film, the SAR’s rays enter a localized receiver and are mathematically sorted to provide coordinate locations (range and Doppler) in a mathematically defined structure called a synthetic aperture. That aperture does not correspond to a physical surface, but is a mathematical construct in 3D. It’s not so strange, really. That kind of structure is common in holography, hence the term “quasi-holographic” that is used to describe SAR technology.
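
If you are curious what that mathematical sorting looks like in the range dimension, here is a toy sketch (again Python/NumPy, again with invented numbers): a chirp is transmitted, a weak delayed echo comes back, and correlating the echo against the transmitted chirp (matched filtering) recovers the delay and hence the range. Real SAR processing is far more elaborate; this only illustrates the principle.

import numpy as np

fs = 1.0e6                                   # sample rate, Hz
T = 1.0e-3                                   # chirp duration, s
t = np.arange(0.0, T, 1.0 / fs)
chirp = np.cos(2.0 * np.pi * (1.0e4 * t + 0.5 * 2.0e8 * t ** 2))   # linear-FM "chirp" pulse

true_delay = 2.5e-4                          # round-trip delay of the echo, s
n_delay = int(round(true_delay * fs))
echo = np.zeros(3 * len(chirp))
echo[n_delay:n_delay + len(chirp)] += 0.3 * chirp                  # weak, delayed return

matched = np.correlate(echo, chirp, mode='valid')                  # matched filter
est_delay = np.argmax(matched) / fs                                # correlation peak --> delay
c = 3.0e8                                    # speed of light, m/s
print(est_delay, 0.5 * c * est_delay)        # recovered delay and the corresponding range

The Doppler (azimuth) dimension gets an analogous sorting over many pulses as the sensor moves, which is what makes the aperture "synthetic."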

 

Of course, you will want to know how well the conventional and new cameras work together to reconstruct the three dimensions of a world point. The answer is: quite well. It is common [3] to solve for a 3D point using a camera image and a SAR image together (see Fig. 1). The process is similar to the triangulation used by pairs of conventional cameras.
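
As a toy illustration of that joint solution (not the direct-linear-transformation method of Ref. 1, and with invented positions), one can intersect the camera’s line of sight with the SAR range sphere and let the Doppler cone pick the correct intersection:

import numpy as np

# Invented geometry: a SAR and a conventional camera both observe the same world point.
sar_pos = np.array([0.0, 0.0, 5000.0])       # SAR position, meters
sar_vel = np.array([200.0, 0.0, 0.0])        # SAR velocity, meters per second
cam_pos = np.array([-2000.0, 4000.0, 3000.0])
true_X = np.array([3000.0, 1000.0, 0.0])     # used only to simulate the measurements

# What each sensor "measures":
cam_dir = (true_X - cam_pos) / np.linalg.norm(true_X - cam_pos)    # camera line of sight L
v_hat = sar_vel / np.linalg.norm(sar_vel)
los = true_X - sar_pos
rng = np.linalg.norm(los)                    # SAR range (sphere)
cos_ang = np.dot(los, v_hat) / rng           # SAR Doppler (cone)

# Points on the camera ray are cam_pos + t*cam_dir; substituting into the
# range-sphere equation |X - sar_pos| = rng gives a quadratic in t.
d = cam_pos - sar_pos
roots = np.roots([1.0, 2.0 * np.dot(cam_dir, d), np.dot(d, d) - rng ** 2])

# The Doppler-cone condition, (X - sar_pos).v_hat = rng*cos_ang, selects the right root.
best_t = min(roots, key=lambda tt: abs(np.dot(cam_pos + tt * cam_dir - sar_pos, v_hat) - rng * cos_ang))
X_hat = cam_pos + best_t * cam_dir
print(X_hat, np.allclose(X_hat, true_X))     # the reconstructed point matches the true one

With real, noisy measurements the solution must of course be done more carefully, as in Refs. 1 and 3; this sketch only shows the geometric idea.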

 

Now some of you may wonder where the shadows enter all of this. The straight-line light propagation certainly leaves cast shadows, but these shadows occupy noticeable area in a SAR image (search online, e.g., for “SAR image example”). The pixels there are dark because no return intensity is assigned to them by the image-formation algorithm. In the SAR literature these dark regions are called shadow (distinct from the related distortion called layover). It’s ho-hum and official. Yet somehow, I sense we are “into something rich and strange,” to offer an Ariel perspective.

 

This brings me to my final point. I don’t believe artists have yet explored SAR technology as a medium for expression. So, following the lead of Anish Kapoor as described in Carl Jennings’s essay in this issue, I hereby deny anybody but me the right to use SAR in art. So there, IP attorneys!

 

Note: This essay is dedicated to Dr. Eamon B. Barrett, my long-time friend and collaborator in imaging mathematics, who passed away March 30 after a long illness. MHB

 


[1] Brill M, Triangulating from optical and SAR images using direct linear transformations, Photogram. Eng. & Rem. Sensing 53 (1987), 1097-1102.

[2] Ausherman, DA, et al., Developments in radar imaging, IEEE Trans. Aerospace & Electronic Sys. AES-20 (1984), 363-400.

[3] Qiu C, Schmitt M, Zhu XX, Towards automatic SAR-optical stereogrammetry over urban areas using very high resolution imagery. ISPRS J. Photogramm. Remote Sens. 138 (2018), 218-231.

 

Michael H. Brill

Datacolor

 

 

Fig. 1. Triangulation of a world point X as the intersection of camera line-of-sight L and SAR projection circle C. The quantity w2 is the velocity of the SAR sensor, and image points Y1 and Y2 are the camera and SAR images of X. [adapted from Ref. 1]

Tuesday, February 16, 2021

Ruminations on Eating Photons

Michael H. Brill, Datacolor

(Send contributions to mbrill@datacolor.com )

Plants see photons.

People see photons.

Plants eat photons.

Do people eat photons? I suspect not. It would be too light a diet. 

The above was my first reaction to Carl Jennings’s latest column, “Eating Color: Color Perception in Plants” [ISCC News # 492 (2020), pp. 5-8]. Carl wrote from the viewpoint of an artist who embodied the title metaphor in his works. I, of course, tend to pursue more technical implications—starting with a joke. And, unlike all the trite photon jokes I had seen on the Internet, this one seemed to have a serious teaching point.

Let’s start with the seeing of photons. The chemistry of vision involves amplifying a rather weak photon signal (weak because it must be divided up in space, time, and spectrum), and the agent of the amplification is the discharge of a battery. When the battery is discharged, it must be recharged (using a lot of metabolic energy) before it can be used again. (Sometimes, as with retinal rods, the battery gets discarded and replaced, not recharged.) Seeing, either by plants or by animals, involves treating the photon as a signal and amplifying that signal chemically. Any vision system, plant or animal, uses energy by combining oxygen with other elements; hence respiration is a prerequisite for seeing.

Now let’s proceed to the eating of photons. Photosynthesis also has a battery that is similar to vision’s battery, but the energy goes the other way. Not only the photo-active material, but the whole organism increases in mass and energy as a result of the incident photon energy. Carbon adds to the mass of the organism and oxygen is released. 

I’ve just described the eating of photons by plants. Do animals eat photons in the same way? No, and I think the reason is that animals are not able to use the photons as a direct energy source. They have to eat in other ways, which are familiar to us. Photons are too light a diet to sustain animals directly. [One must note a small exception to this rule: the creation of vitamin D via the Sun’s UV radiation on skin.]

I published a little about this subject in ISCC News # 427 (2007), p.7: “Power to the Pupil” (not a Hue Angles column). There my main focus was the creation of batteries using rather large amounts of visual pigment from animals, and also the design of solar cells using principles very similar to those used in certain cameras.

Some of you might complain at this point about my colloquialism of “seeing photons” in place of “information-processing an electromagnetic signal” and “eating photons” instead of “transmuting electromagnetic power into stored energy.” In anticipation of such a complaint, I can only say that less colloquial language might deny me immortality in the immense archives of photon jokes that persist ready for simple Internet search. Enjoy. 

_______

[Note: I recently learned that Ronald Penrod passed away August 12, 2020 at the age of 80. Ron was a pioneer in digital colorant formulation. His color-science career (1965-1991) was at Uniroyal Inc. (previously U.S. Rubber), where he added commentary and software to Ray Winey’s original 1962 report on the colorant-formulation method. This contribution was discussed in Bill Longley’s Hue Angles essay, "Color Creeks Ray Winey has Found People Up" (http://hueangles.blogspot.com/2009/). MHB]