Color Constancy:
Estimating object and surface color from the data.
In our last post, we introduced a new kind of computation intended to help us achieve something called perceptual constancy: the ability to maintain a constant response despite a changing pattern of light. For example, complex cells in V1 might continue responding the same way to a line or edge presented at different positions in the visual field. Even when an object changes position over time, because you or the object are moving, your complex cells can keep responding consistently throughout that movement. This is a useful ability because the raw input to your visual system changes a lot as time passes, while the real objects and surfaces you're looking at stay pretty stable. Think about it: if you just move your eyes around the room you're sitting in, your eyes will receive a very different pattern of input with each eye movement. At the same time, the room isn't actually changing all that much. It has the same stuff in it, probably in mostly the same 3D positions. Perceptual constancy, if we can pull it off, gives us a way to maintain a signal that tells us about these stable properties of the world even though the raw information varies substantially.
Figure 1 - As I move my eyes around this visual scene (avoiding my horrible cat, Leila), the images I see of my feet change a lot. If the yellow circles indicate what would be visible to a particular cell somewhere in my visual system, those feet look quite different as my gaze moves. Nonetheless, the feet stay the same.
This little example serves as an important introduction to
what is really a fundamentally different part of our discussion. So far, we’ve
been talking almost exclusively about sensation,
or the way you measure information about the world. In our case, we’ve been
describing how parts of your eye and your brain record patterns of light,
turning different wavelengths of light, at varying intensities and locations, into signals that your nervous system can process. Up to now, we've been describing how successive stages of visual processing involve more and more sophisticated measurements that both limit what you can measure about your visual world and provide multiple channels of information describing properties like wavelength, local contrast, orientation, size, and position.
What we’re doing next (and for the rest of our discussion, really) is trying to
figure out how measurements like those can be used to recover properties of the
3D world that produced the patterns of light we get to measure with these early
stages of vision. That is, we don’t want to have an experience of patterns of
light as we look around the world – we want to have an experience of objects
positioned in visual scenes with surface properties, colors, names, and all the
other meaningful properties we can come up with
to describe the things we see. This new set of challenges falls more
clearly under the heading of perception.
What we’re doing now is using, or interpreting, the raw data we get from our
sensory mechanisms to try and make estimates of what’s really out there in the
world. We’ll see that this turns out to be very hard. Most of the time, we
don’t have enough data to come up with a perfect guess about the real world, so
we have to use some kludges to help us get to a good-but-not-perfect answer.
Still, most of the time these kludgy procedures do pretty well, allowing us to
have consistent and meaningful perceptions of our visual world.
Figure 2 - This figure uses some of the language of computer graphics, but the larger point is this: The real world has actual objects and surfaces in it (the spheres in the back of the image), but we only get to measure the light at the retina (the screen in the front). Perception is all about making good guesses about the real state of the world from the sensory data.
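To make this "good guess from limited data" idea a little more concrete, here's a toy sketch. This framing is mine, not anything from the biology, and all the numbers are made up for illustration: several different states of the world can produce exactly the same sensory measurement, so one kludge is to go with whichever candidate is most plausible to begin with.

```python
# Toy illustration: many world states are consistent with one measurement,
# so pick the one with the highest prior plausibility. Numbers are made up.
candidate_worlds = {
    # world state: (measurement it would produce, prior plausibility)
    "yellow surface under white light":        ("yellowish light at the eye", 0.70),
    "white surface under yellow light":        ("yellowish light at the eye", 0.25),
    "gray surface under strong yellow light":  ("yellowish light at the eye", 0.05),
}

measurement = "yellowish light at the eye"

# Keep only the world states consistent with the measurement...
consistent = {world: prior for world, (produces, prior) in candidate_worlds.items()
              if produces == measurement}

# ...then guess the one that was most plausible a priori.
best_guess = max(consistent, key=consistent.get)
print(best_guess)  # -> "yellow surface under white light"
```

The point isn't the specific numbers; it's that the measurement alone can't distinguish the candidates, so something beyond the raw data has to break the tie.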
The first step toward understanding these perceptual mechanisms, the ones that go "beyond the data" to tell us about the world outside our visual system, is to revisit a simple property of light that we've already talked about a good bit: color. We know by now that color has something to do with the wavelengths present in a mixture of lights, and we've seen how both your cones and cells in the parvocellular layers of the LGN contribute to measuring wavelength-dependent information about light. However, we're not just interested in the light anymore! Instead, we're interested in the objects that produced the light. With that in mind, consider the images below:
Figure 3 - The wavelengths of light you see over the course of the day may change, but the house remains the same.
These are two pictures of the same house at different times of day. There are two things here I'd like you to notice. First, if I asked you what color the house was in each picture, my guess is that you'd say about the same thing. That is, you're probably experiencing some degree of color constancy as you look at the two images of the house: parts that look yellowish in one image probably look yellowish in the other, for example. Second, if you take a close look at two patches that come from the same place in both images, you can see that the raw color is actually very different! I'd call one of those patches "Yellow," but the other one I'd probably call something like "Gray-Green."
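If you'd like to check this kind of claim for yourself, it only takes a few lines of code. Here's a minimal sketch in Python, assuming you have the two photos saved locally; the filenames and patch coordinates are hypothetical placeholders, so substitute your own.

```python
# Compare the raw pixel color of the "same" patch in two photos.
# Filenames and coordinates below are placeholders -- use your own.
from PIL import Image
import numpy as np

def mean_patch_color(path, x, y, size=20):
    """Average [R, G, B] of a size-by-size patch whose top-left corner is (x, y)."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    patch = img[y:y + size, x:x + size]
    return patch.mean(axis=(0, 1))  # one mean value per color channel

# Identical image coordinates in both photos -- only the illumination differs.
print(mean_patch_color("house_morning.jpg", x=300, y=150))
print(mean_patch_color("house_evening.jpg", x=300, y=150))
```

The same function works within a single image, too: sample the two marked squares in the cube figure further down (or in Adelson's checkerboard) and you'll find the numbers match even though the percepts don't.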
What's going on here? Your cones and your LGN cells are sending very different signals to the rest of your visual system from these two patches, but you would probably look at both and say that they look yellow. Another way to think about this phenomenon is to consider what happens (or really, what doesn't happen!) when you park your car somewhere in the morning and return to it in the early evening. Despite how different the light is at those two times of day, and how much that changes the raw wavelengths traveling from your car to your eye, no one walks up to their car in the evening and wonders why it's a different color than it was earlier in the day.
There's a flip side to this phenomenon, too. As we've just seen, very different wavelengths of light can end up looking like the same color, but the same wavelengths of light can also end up looking like very different colors. Consider the two images below: the squares we're pointing to in each image are physically identical. My guess, though, is that they look like very different colors to you (reddish and bluish is my guess).
Figure 4 - The wavelength information coming from the two squares marked with arrows is identical, yet one looks blue while the other looks red.
These funny effects aren't just limited to color, either: you can see similar outcomes even if we're just talking about how light or dark different parts of an image look. Consider the checker-shadow illusion below, developed by MIT's Ted Adelson:
Figure 5 - The squares at A and B are the same intensity, but you probably see them as having very different reflectance properties.
Just like those cubes in the previous picture, the squares marked "A" and "B" in this picture are physically identical. My guess is that one of them looks a good bit lighter than the other, though. If you don't believe me in either case, you can punch some holes in a piece of paper and cover up everything but the two parts of the image you want to compare. If you do that, you should be able to see that those parts of the image that looked so different are actually reflecting the same raw light to your eye. The same trick (looking at a color through a small hole) will also help you see how different the same color can look under different circumstances.

The fact that this trick works gives us a big clue about what your visual system might be doing and how it might be doing it: whatever you're doing to come up with these different estimates of color, it seems to involve using a larger region of the image to decide what a particular part actually looks like. But what are you doing? How do you adjust the raw wavelength information to come up with these different experiences of color? One classic answer from engineering is sketched just below.
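Here's a minimal sketch of the "gray-world" correction used in automatic white balance. To be clear, I'm not claiming this is what your visual system literally computes; it's just a simple demonstration that pooling color statistics over a large region lets you estimate, and then divide out, the color of the illumination. The filename is a placeholder.

```python
# Gray-world white balance: assume the average surface in the scene is
# roughly gray, so any overall color cast in the image average gets
# blamed on the illuminant and divided out of every pixel.
from PIL import Image
import numpy as np

def gray_world_correct(path):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    illuminant = img.mean(axis=(0, 1))           # per-channel mean over the whole image
    gray_level = illuminant.mean()               # target: equal energy in R, G, and B
    corrected = img * (gray_level / illuminant)  # rescale each channel
    return Image.fromarray(np.clip(corrected, 0, 255).astype(np.uint8))

# Placeholder filename -- try it on a photo with a strong color cast.
gray_world_correct("house_evening.jpg").save("house_evening_balanced.jpg")
```

Notice the structural parallel to the paper-with-holes trick: the correction applied to any single pixel depends on statistics gathered from a much larger region, which is exactly the context the hole takes away.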
To give you a little more of a handle on the phenomenology of color constancy, I strongly recommend that you try some of the exercises in the Color Constancy Lab, in which you'll encounter a truly perplexing image of some strawberries (see below) and examine how your color perception changes with the wavelengths reaching your eye and with the size of the image regions you use to make color judgments.
Figure 6 - Kitaoka's "No Red Pixels" strawberries - that red you see? It's not really there.