Monday, October 26, 2020

The Revolving Door between Color Science and the English Department

Past Hue Angles columns have featured examples of career changes from color science to other areas. (See Issue #250 [2011] on Terry Benzschawel’s move to Wall Street quantitative finance and Issue #475 [2016] on Mike Stokes’s move to data privacy.)

In this article, I describe Suguru Ishizaki’s transition from color science to an English department. Such experiences can inspire hope for successful career transitions in the field of color science even in the current job crisis.

Ishizaki’s contribution to color science is heralded by his 1994 Color Imaging Conference paper [1], later extended in a second paper [2]. He undertook the prodigious task of coloring the sub-areas of a color-coded map or chart so that each sub-area, subject to spatial induction from its neighbors, would match its intended color in the key to the chart. The task is hard because every time you change one sub-area’s color, you must also change the neighboring areas to preserve all the color matches with the key. The process is iterative and multi-dimensional. To my knowledge, Ishizaki’s is the first and only attempt to capture and control such complicated and interdependent conditions for asymmetric matches. (Usually investigators look at only a center field as influenced by a single surround, and do not ask the matching question.)
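To give a flavor of the kind of computation involved (what follows is my own toy sketch in Python, not Ishizaki’s actual model), suppose that the induced shift in a sub-area’s appearance is a simple weighted pull away from the average color of its neighbors. One can then iterate, re-solving for each displayed color until every sub-area’s appearance matches its intended key color:

    import numpy as np

    def compensate(targets, neighbors, weight=0.2, iters=50):
        """Toy fixed-point iteration with a hypothetical induction model:
        appearance = displayed + weight * (displayed - mean of neighbors).
        Choose displayed colors so that appearance equals the key color."""
        displayed = {r: np.array(c, dtype=float) for r, c in targets.items()}
        for _ in range(iters):
            for r, key_color in targets.items():
                surround = np.mean([displayed[n] for n in neighbors[r]], axis=0)
                # Solve appearance == key_color for this region's displayed color.
                displayed[r] = (np.array(key_color) + weight * surround) / (1 + weight)
        return displayed

    # Example: three mutually adjacent regions with intended RGB key colors.
    targets = {"A": (0.8, 0.2, 0.2), "B": (0.2, 0.8, 0.2), "C": (0.2, 0.2, 0.8)}
    neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
    print(compensate(targets, neighbors))

Of course the real induction model is far more complicated than this linear caricature, which is exactly why the problem was so hard.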

Starting with this work (which led to his Ph.D. at the MIT Media Lab), Ishizaki built a career, alternately in academia and industry, based on the broader, over-arching theme of human communication through design. He started at the School of Design at Carnegie Mellon University (CMU), then worked at Qualcomm on early mobile applications, and ended up in CMU’s English Department, where he is now an Associate Professor. Dr. Ishizaki’s current research areas are technology-enhanced learning for writing and computer-assisted rhetorical analysis [3].

Several people I know started as English majors and ended up in color science. Bob Karpowicz, who became a product manager at Datacolor, was an undergraduate English major. Mike Tinker (who became an expert in color digital cinema at Sarnoff) started with a B.A. in English literature; then, as a graduate student in English, he wrote a computer program that recognized writers by their word patterns. That wasn’t accepted as a thesis topic, so Tinker pursued another topic to a Ph.D. in English with a minor in computer science.

And I myself was an undergraduate English major, though my diploma does not acknowledge it, owing to a binary choice I was given on graduation day. (How English departments have changed since then!)

But whereas in all these cases the door of the English department was marked “Exit,” Dr. Ishizaki found a door marked “Enter.” I hope someday that he returns to color science to continue the career he started and that nobody else can match. Or perhaps someone else will continue his pivotal work.

[1] Ishizaki, S. Adjusting simultaneous contrast for dynamic information display. Proceedings of IS&T and SID’s Color Imaging Conference, Scottsdale, 1994, pp. 137-140.

[2] Ishizaki, S. Color adaptive graphics: what you see in your color palette isn’t what you get! CHI ’95: Conference Companion on Human Factors in Computing Systems. May 1995, pp. 300-301.

[3] https://design.cmu.edu/people/courtesy-appointment/suguru-ishizaki


Michael H. Brill

Datacolor


Monday, August 24, 2020

Why Colors Show Up as Icons in Mathematics

In eerie resonance with Euclid’s definition of a point as “that which has no part,” J. Lettvin’s Colors of Colored Things begins with the following: “Judgment of color (including brightness) seems not to depend on extension [… Redness] is like nothing else but itself, it cannot be decomposed or described, but only exhibited; it is a simple.” [1] Lettvin goes on to discuss (in his unique way) the familiar complications of how vision transforms stimuli into color, but he retains the view that the judgment of color is a simple. I will now look at some implications of this idea.

Color as a simple is readily added to a geometrical object, and the color icons enrich the meaning. Examples range from traffic signals to the stylized footprints in an Arthur Murray dance studio. But mathematics offers some particularly interesting morsels. Three come to mind. One of these, the four-color map problem, has been described in an earlier Hue Angles [2]. Another shows up in the title of Arthur Loeb’s book Color and Symmetry [3], in which permutations of color coding in a pattern enrich the geometric symmetries generated by such operations as glides and reflections. Now I want to introduce you to a third, perhaps less familiar example: the road-coloring problem.


The road-coloring problem involves a network with directed paths between pairs of vertices. Under surprisingly general conditions (the graph must be strongly connected and aperiodic, with the same number of outgoing edges at every vertex), it is possible to color-code the paths so that, given a destination vertex, a single set of instructions in the form of a sequence of color choices will bring you from any source vertex to that destination. The Wikipedia article on the road-coloring problem sets the context: “In the real world, this phenomenon would be as if you called a friend to ask for directions to his house, and he gave you a set of directions that worked no matter where you started from.” You start with a graph with numbered vertices and colored arrows between the vertices. The arrows are like one-way streets: the instructions (a sequence of path colors) assume you are always going in the direction of the arrow you’re on. To convince yourself that this behavior is possible, try the exercise based on the eight-vertex graph in Ref. [4].
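A small Python sketch may make the setup concrete (this is my own toy encoding, not taken from any of the references): represent the graph as a dictionary giving, for each vertex, the vertex reached by each colored out-edge, and check an instruction word by following it from every possible starting point.

    def follow(graph, start, word):
        """Follow a sequence of edge colors from 'start'; return the final vertex."""
        v = start
        for color in word:
            v = graph[v][color]
        return v

    def synchronizes(graph, word, destination):
        """True if the color word delivers every vertex to 'destination'."""
        return all(follow(graph, v, word) == destination for v in graph)

With this helper, the friend’s “directions that worked no matter where you started from” are simply a word for which synchronizes() returns True.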


The road-coloring problem started as a conjecture of Roy Adler and Benjamin Weiss in 1970, but it took 38 years to prove. The proof came from Avraham Trahtman, a 63-year-old Israeli former security guard (who had been a mathematician in his earlier life in the USSR) [5]. Trahtman [6] proved not only that every eligible graph admits a coloring with the desired property, but also that one’s mathematical life can peak long after one’s teens and twenties.


Encouraged by checking the eight-vertex graph in Ref. [4], I wondered whether I could make a simpler graph, with only three vertices, that had the same property. In the figures I show here, three vertices support two possible solutions, but I had to allow the possibility of paths from a vertex to itself.



Drawing of two three-vertex road-coloring solutions (author, 2013).  The medium is felt marker on flip-chart paper, photographed in a cool-white-fluorescent-lit office.  Not surprisingly, the “red” looks very orange. My apologies, but I hope the idea is clear.


In the case of my first graph, if you live at vertex 1, all you have to tell your visitor is “take the red arrow from where you are to the next vertex (in the direction indicated by the arrow), and that will be vertex 1.” That’s what I mean by the instruction R1 from anywhere (i.e., from vertex 1, 2, or 3). Similarly, if you live at vertex 2, your instruction is “take the green path one step from wherever you are.” If you live at vertex 3, your instruction is “take the blue path.” Because the arrows are like one-way streets, you must always go in the direction of the arrow you choose.
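In the toy Python encoding introduced above, this first graph has, leaving every vertex, a red edge to vertex 1, a green edge to vertex 2, and a blue edge to vertex 3, and the three one-step instructions check out immediately (the little follow() helper is repeated so the snippet stands alone):

    # My first graph, as described above (self-loops allowed).
    graph1 = {
        1: {"R": 1, "G": 2, "B": 3},
        2: {"R": 1, "G": 2, "B": 3},
        3: {"R": 1, "G": 2, "B": 3},
    }

    def follow(graph, start, word):
        v = start
        for color in word:
            v = graph[v][color]
        return v

    # R brings everyone to vertex 1, G to vertex 2, B to vertex 3.
    for word, home in [("R", 1), ("G", 2), ("B", 3)]:
        assert all(follow(graph1, v, word) == home for v in graph1)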


In the second graph, there are still only three vertices, but the instructions involve two steps rather than one. Starting from vertex 1, 2, or 3, if you take two R steps, you end up at vertex 1. I denote that action as RR1, etc. But notice that I use only two colors of path instead of three (as in my first graph). There is a tradeoff between the number of colors and the length of the instruction string.
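Out of curiosity, one can also brute-force this tradeoff (again a toy sketch of my own, not taken from the references): enumerate every way of assigning one red and one green out-edge to each of three vertices, and count how many of those colorings admit some two-step instruction that works from everywhere.

    from itertools import product

    VERTICES = (1, 2, 3)
    COLORS = ("R", "G")

    def follow(graph, start, word):
        v = start
        for color in word:
            v = graph[v][color]
        return v

    count = 0
    # Each vertex independently gets one red target and one green target.
    for targets in product(product(VERTICES, repeat=2), repeat=3):
        graph = {v: dict(zip(COLORS, t)) for v, t in zip(VERTICES, targets)}
        if any(len({follow(graph, v, word) for v in VERTICES}) == 1
               for word in product(COLORS, repeat=2)):
            count += 1
    print(count, "of", 9 ** 3, "two-color assignments admit a two-step instruction")

(Assignments in which both colors act as permutations of the three vertices, for example, can never be synchronized, since every instruction word then merely permutes the vertices.)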


What use is the road-coloring problem (now a theorem)? It serves very well in the theory of automata. To quote Weifu Wang [7], “When the automaton is running and encounters an error, and if the road coloring conjecture is true, the automaton can always follow a certain sequence and go back to the previous correct state, regardless of what error it encountered.” I think the “correct state” is the address of the person giving the instructions, and the “error state” is wherever the presumed visitor happens to be when he gets the instructions. It’s a little confusing to call the direction “back” when you’re proceeding forward along the arrows to get there, but synchronizing a move to a previously known state seems to be the key to the application.


One place not to use the road-coloring theorem is in an Arthur Murray dance studio. Imagine giving a color-sequence instruction set to a bunch of dancers and having them all pile on top of each other when they (synchronously) reach the home vertex.


[1] J. Y. Lettvin, MIT RLE QPR 87, 1967, p. 193, dspace.mit.edu/bitstream/handle/1721.1/55670/RLE_QPR_087_XIV.pdf

[2] M. H. Brill, http://hueangles.blogspot.com/2013/03/

[3] A. Loeb, Color and Symmetry, Wiley, 1971.

[4] https://en.wikipedia.org/wiki/Road_coloring_theorem

[5] http://usatoday30.usatoday.com/tech/science/mathscience/2008-03-20-road-coloring-problem-solved_n.htm

[6] A. N. Trahtman, The Road Coloring Problem. Israel Journal of Mathematics, Vol. 172, pp. 51–60, 2009.

[7] W. Wang, The Road Coloring Problem. (2011). https://math.dartmouth.edu/~pw/M100W11/weifu.pdf


Michael H. Brill

Datacolor

Thursday, June 4, 2020

Color/BW Tropes in Cinema

Once again Carl Jennings has inspired a Hue Angles article from me. This time, Carl’s description of Olafur Eliasson’s black-and-white effect with narrowband light (ISCC News, Issue 489) reminded me of various color/black-and-white (BW) tropes in cinema. Whereas Tony Stanton’s Munsell 2018 presentation (http://www.iscc-archive.org/Munsell2018_Presentations/Stanton-Breakout-HistoryOfColorCinema.pdf) is a more serious history that highlights the use of color/BW as part of the technological evolution, my essay here focuses on some artistic uses of color/BW.

I’ll begin with The Wizard of Oz (1939), wherein the black-and-white (actually sepia-tone-dyed black-and-white) Kansas shots give way to the dazzling color of Oz. The transition wasn’t trivial: “A set was painted sepia tone and Bobie Koshay, Judy Garland's double was outfitted in a sepia dress and given a sepia make-up job. Koshay walks to the door and opens it, revealing the bursting color of Munchkinland beyond the doorframe. She steps out of the way of the shot and the camera glides through the door, followed by Judy Garland, revealed in her bright blue dress.” [1]

A similar trope occurs in Pleasantville (1998), in which real-life characters are injected into a black-and-white 1950s sitcom. Within the sitcom, the characters (and objects) appear in black and white until they transcend the repression implied by the sitcom and find emotional spontaneity and “modernity” of viewpoint. I find the message too preachy, but if for nothing else, the film is noteworthy in being claimed as the first new feature film created by scanning and digitizing recorded film footage to remove or manipulate colors (https://en.wikipedia.org/wiki/Pleasantville_(film)).

Certain resonances of Oz can be seen in Antonioni’s Red Desert (1964), wherein the entire movie has a ghastly blue-green cast (the color, not the actors) until a fantasy scene at the end opens out to abundant color and lets the audience sigh in relief. That one is not a black-and-white trope but a reduced-color trope, recalling the Oz transition on a subtler level.

A more powerful recent trope appears in Schindler’s List (1993), which is filmed in black and white except for the Sabbath candles and a red coat worn by a young Jewish girl, who is thereby individuated as a casualty of the Holocaust. The emotional effect was a coup by Spielberg. And it used digital techniques for color replacement five years before the vaunted “first” of Pleasantville.

Finally, I must mention the comedic send-up of the shower scene from Psycho (1960) in Mel Brooks’s High Anxiety (1977), which is entirely in color. A bellboy has not-so-pleasant words with a patron in a hotel. The patron (Brooks) wants his newspaper brought to him, and the bellboy waits until the patron is in the shower, then rips the curtain aside and hysterically stabs at him with the rolled-up newspaper. (“There’s your paper!”) The paper falls under the water, and the black ink dissolves and swirls down the drain in a vortex exactly like the blood, rendered in black and white, that flows down the drain in Psycho. Pan to the patron’s apparently dead face: “That kid gets no tip!” [see https://www.youtube.com/watch?v=__2HBkrrlp4]

There’s no limit to what can be done with the tension between color and black and white. If you pay attention, you can see BW/color tropes in many other places. My most recent encounter was with the BW world of Saul Goodman’s drab alternate identity in the TV series Better Call Saul. In fact, our editors have experimented with BW/color tension in recent issues of ISCC News.

Michael H. Brill
Datacolor

[1] D. Faraci, True movie magic: how the Wizard of Oz went from black & white to color, written 16 Sep 2013, https://birthmoviesdeath.com/2013/09/16/true-movie-magic-how-the-wizard-of-oz-went-from-black-white-to-color, website accessed 27 Feb 2020.

Monday, February 24, 2020

The Virtual Image: Keeping It Real

When recently asked to explain the operation of a magnifying glass, one of us (MHB) revisited an old conundrum from secondary school: the virtual image. When a viewed object is within the focal length of the lens, an image is formed that is not inverted relative to the viewed object. Far more mystifying, it is on the same side of the lens as the light source, not on the side to which actual light is conveyed. Yet we see virtual images all the time.

You can see the traditional picture of a virtual-image situation in the part of Fig. 1 that includes the “eyepiece” lens and everything to the left of it. The “object” is a short, downward-pointing black arrow. The “virtual image” is the longer, dashed black arrow at the far left. The bases of the object and of the image lie on the axis of the lens. The arrow tips of the object and of the image are connected by two rays (dashed black lines delimiting a pink area). One of these rays passes through the center of the lens, and in the ideal (thin-lens) case this ray is undeflected by the lens. The other ray starts out parallel to the lens axis, and is bent by the lens so it passes through the focal point (not shown) that is to the right of the lens. (A collection of such rays from the Sun could concentrate enough to cause a fire.) If the object had been to the left of the left-hand focal point of the lens, this bent axial ray would have intersected the undeflected ray so as to form a real image: inverted relative to the object, on the side of the lens toward which the light propagates, and generally understandable. But when, as in Fig. 1, the object lies to the right of the left-hand focal point, the two rays diverge to the right of the lens; instead, we are told, we must backtrack both rays until they intersect on the left-hand side, and this intersection is part of the virtual image.
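Some numbers may help fix the geometry. With the usual sign convention, the thin-lens equation 1/d_o + 1/d_i = 1/f relates the object distance d_o, the image distance d_i, and the focal length f. Take a magnifying glass with f = 10 cm and put the object d_o = 5 cm from the lens, inside the focal length as in Fig. 1. Then 1/d_i = 1/10 - 1/5 = -1/10, so d_i = -10 cm. The negative sign is the virtual image: it lies 10 cm from the lens on the same side as the object, and the magnification -d_i/d_o = +2 says it is upright and twice the size of the object.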

How can an image form that is coincident neither with the eye nor with the light that should be generating that image? Virtual or not, this deserves an explanation.

As optical engineers will tell you (if pressed), the virtual image is a way of expressing a geometric relationship without involving the actual eye. As soon as you include the eye in the explanation (see Fig. 1), the paradox is resolved. The rays that diverge as they pass to the right of the eyepiece are brought together in focus by the lens of the eye. Where these rays meet, one finds a real image on the retina. This image is shown in Fig. 1 as a white arrow. It is a real image: it is where the light ends up, it is where vision takes over, and it is in an inverted configuration that the visual system interprets correctly. (That is a subject for another essay.)
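Carrying the numbers one step further (with the crude approximation of the eye as a single thin lens about 1.7 cm in front of the retina), the virtual image simply plays the role of the eye’s object. If the virtual image sits 25 cm in front of the eye, a comfortable reading distance, the eye accommodates to a focal length f satisfying 1/25 + 1/1.7 = 1/f, or f of roughly 1.6 cm, and the rays that were diverging from the eyepiece are gathered into a real, inverted image on the retina.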

The important thing to remember about the virtual image is that it is a shorthand for a fairly intricate relationship among the rays that form the real image once the explanation includes the eye. That relationship is only partially apparent in Fig. 1. One gets the false impression that the bent axial ray from the eyepiece lens becomes the undeflected ray of the eye’s lens, and vice versa. Actually, the bent axial ray from the eyepiece lens passes through the right-hand focal point of the eyepiece, which only coincidentally happens to be near the center of the eye’s lens.

A clearer picture of the eye’s focusing of the diverging rays is shown in Fig. 2, a ray-tracing simulation.  Here the optical elements are more stylized than in Fig. 1: the two lenses are red-tipped line segments, the object consists of a pair of radiating green dots, and rays from the dots proceed through both lenses and meet at the right-hand convergence points that depict the real image on the retina.  The virtual image is too far to the left to be captured by the figure.  We think that both Figs. 1 and 2 are needed to show how the eye acts in a virtual-image situation.

In short, experts know the virtual image and how to “keep it real.” Now you know too.
 
Michael H. Brill and Nilesh Dhote
Datacolor


Fig. 1. Drastically abbreviated depiction of a virtual image and the real counterpart that emerges when the eye is included. Please ignore the yellow region. The figure in context can be found at https://micro.magnet.fsu.edu/primer/anatomy/components.html.


Fig. 2.  Simulation of formation of a real image using two lenses. The divergence of the rays through the left-hand (eyepiece) lens is evident from the rays that proceed outside the lens aperture of the eye.  (See https://ricktu288.github.io/ray-optics/simulator/.)