Observing the retina (and what it can do)
Now that we’ve seen how images are formed inside a pinhole camera, we have a sense of how patterns of light from the environment become patterns of light inside the eye. The next question is how those patterns become signals that can be sent from the eye to the brain. This process is called transduction, and within the eye, the structure that actually transduces light is called the retina. How does this bit of tissue sense light? Something must be happening that turns light into an electrical signal, but what? We’ll develop a quantitative model of how this works, but first we’ll try to develop a basic understanding of the retina based on some simple observations. Compared to some of our previous discussions, this is going to be a little trickier – the retina is inside our eye, for example, so we can’t just look at its parts the way you were able to look at your own pupil. Instead, we’re going to adopt a dual strategy of (1) making some observations about our own vision and how it varies depending on what part of the retina we’re trying to use, and (2) talking about some historical observations that helped vision scientists develop ideas about how the retina might work. I’m going to be honest here and tell you that I’m not presenting these in any meaningful historical order, but rather introducing different ideas as they help us elaborate our understanding of the retina. I’m also going to be pretty light on names and dates. You can find that kind of material in lots of other places, so if you’re interested, you can go look for it on your own.

The retina is heterogeneous
The simplest set of observations we can make about the retina is based on how our visual capabilities tend to vary as we try to see things in different parts of our visual field. Remember, images from the outside world are basically projected onto the retinal surface, so if something is in a different part of your visual field, that means a different part of your retina is receiving that light. A simple guess about how the retina works would be to imagine that it’s kind of like actual film in a camera: Every bit of it does essentially the same thing. It’s easy to see that this is NOT the case for our retinae, though.
            First of all, let’s try something very simple – can you sense light with all parts of your retina? I’m only bringing this up because, as you should have seen in Lab #4, the answer is no! Close your left eye and stare right at the plus sign below with your right eye. Now move your head back and forth until the dot disappears – you should be able to find a spot about 10 degrees or so to the right of the plus sign where the dot vanishes. If you feel ambitious, you can try to move your head around a bit to see how big this “phantom zone” is, but at the very least, it’s kind of interesting that it’s there. This is your blind spot, and it is our first demonstration that your retina is not homogeneous.


Figure 1. By looking at the plus sign with your right eye, you should be able to find a distance at which the dot completely vanishes. 


We can’t easily test this out ourselves, but I want to tell you about another blind spot that it turns out you have. If we were able to show you very small dots of blue light, you would find that these are difficult to see in the center of your visual field. That is, there is a sort of “blue blind spot” in central vision. Again, we’re going to leave this fact here as an example of retinal heterogeneity for now, but later we’ll have more to say about what this varying sensitivity to specific colors across the retina means. At the very least, this is some evidence that both your ability to see anything and your ability to see specific color information vary across the retina.
            Speaking of color, here’s another set of phenomena that hints at some varying sensitivity to color across the visual field: How good are you at recognizing colors in central vision relative to peripheral vision? In Lab #4, we asked you to test yourself at this task using some basic color stimuli, and you probably found that naming colors becomes VERY hard as objects appear farther from your center of gaze. You’ll also find that seeing fine details becomes much harder away from your central vision. What’s this about? Again, it seems like there’s visual information that’s easier to measure with some parts of your retina than others. But why? What does this tell us about how the retina works to sense light? To say a bit more, we’re going to do something that might seem counter-intuitive for studying how light is encoded.

Figuring out how you see light by sitting in the dark
I want you to imagine something. Imagine that you’ve just walked from a brightly lit movie theater lobby into the theater itself. The lights are off and there’s nothing on the screen just yet, so everything’s dark. What can you see?


Figure 2 - A schematic view of that time all the lights went out all of a sudden.


“Nothing,” you say, “This is a stupid question. I can’t see a thing.” You’re right – at least for a bit. I say this because if you continue to sit in that dark theater, I’m sure you’ll agree that things start to improve. Over time, your eyes adjust to the darkness, making it easier to see a little more at first and ultimately perhaps quite a lot as you continue to sit there. This phenomenon (called “dark adaptation”) is interesting in its own right and raises some fun questions about how the retina works. For now, though, I want to use it to give us some hints about different mechanisms that contribute to your ability to see things with your retina.
            To do this, we need to stop just imagining things and start talking about some experimental work we could do to examine how dark adaptation unfolds in detail. In particular, here’s something we might want to know more about: How does your ability to see change over time during dark adaptation? Obviously it gets better, but how much better and how quickly? To answer this question, we’d have to develop some kind of experiment to carefully measure how well someone can detect light after they’ve been sitting in a dark room for a specific amount of time. I won’t say much now about exactly how this would work (which would lead to a discussion of psychophysical testing techniques), but here’s the gist of it: If we knew someone had been sitting in the dark for some amount of time (say, 10 minutes), we’d like to be able to measure the faintest light that they could reliably detect. If a small amount of light was too dim, maybe they wouldn’t see it. If it was just a little brighter, maybe they would. What we’d like to know is the smallest amount of light that they could see as a function of time. That is, for each possible amount of time, what’s the faintest light you can detect? If we had all those measurements, we could make a graph of it to try and understand the dark adaptation process a little better. I’m going to go ahead and make some guesses about the shape of that graph based on my own experience of sitting in dark rooms (Figure 3). All of these graphs reflect different ways that you might get better at detecting faint lights over time.


 Figure 3 - Different ways your vision might improve over time as you sit in a dark room and adapt to low light levels.
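
If you want a concrete feel for what “measuring the faintest detectable light” might involve, here is one way to sketch it in code. This is just a toy simulation, not the procedure from any particular study: the simulated observer, the logistic detection rule, and all of the numbers are invented for illustration. The up-down “staircase” idea it implements, though, is a standard psychophysical trick for homing in on a detection threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def sees_flash(intensity, true_threshold, slope=8.0):
    """Toy observer: the chance of saying 'yes, I saw it' rises smoothly
    with flash intensity around some hidden true threshold."""
    p_detect = 1.0 / (1.0 + np.exp(-slope * (np.log10(intensity) - np.log10(true_threshold))))
    return rng.random() < p_detect

def estimate_threshold(true_threshold, start=100.0, step=0.7, reversals_needed=8):
    """Simple up-down staircase: dim the flash after every 'yes', brighten it
    after every 'no', and average the intensities where the answers flip."""
    intensity, last_seen, reversal_points = start, None, []
    while len(reversal_points) < reversals_needed:
        seen = sees_flash(intensity, true_threshold)
        if last_seen is not None and seen != last_seen:
            reversal_points.append(intensity)
        last_seen = seen
        intensity *= step if seen else 1 / step
    return np.exp(np.mean(np.log(reversal_points)))   # geometric mean of the reversals

# Hypothetical observer whose true threshold is 2.0 (arbitrary intensity units):
print(estimate_threshold(true_threshold=2.0))
```

Run a procedure like this after 5, 10, 15, ... minutes of dark adaptation and you have the (time, threshold) pairs you would need to draw the graph we’re after.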

Now for the fun part: The actual graph you get if you do this experiment with real people who have been sitting in dark rooms looks more like this:

Figure 4 - The dark adaptation data you actually measure: detection thresholds as a function of time spent in the dark.

A bit unexpected, eh? Sensitivity to light is obviously improving as time goes on, but the actual shape of the graph has a surprise in store: It has two parts! The fancy way to say this is that it’s biphasic, but the important thing about the graph is that it’s telling us that describing how the retina responds to darkness may have something to do with two mechanisms rather than just one. It sure looks like there’s an early phase of dark adaptation during which you get better pretty quickly but then start to level off in terms of how well you can see faint lights. Shortly after that plateau, however, it looks like you continue to improve more slowly until you reach a final plateau after something like 30-40 minutes. I’m not saying this is an inevitable conclusion, but that two-step graph seems like a good reason to start thinking about a retina that has two different kinds of stuff for sensing light.
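
In case it helps to see how a two-mechanism story could produce a curve like that, here is a small sketch. It is not a fit to real data – the time constants and threshold floors below are invented – but it shows the standard idea: if one mechanism adapts quickly but can never reach a very low threshold, and a second adapts slowly but eventually becomes far more sensitive, then the threshold you actually measure (the lower of the two at each moment) has exactly this kind of two-branch shape.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented parameters for two hypothetical light-sensing mechanisms.
# Each one's detection threshold decays exponentially toward its own floor.
t = np.linspace(0, 40, 400)                          # minutes spent in the dark

cone_floor, cone_range, cone_tau = 3.0, 2.5, 1.5     # adapts fast, but bottoms out high
rod_floor, rod_range, rod_tau = 0.0, 5.5, 10.0       # adapts slowly, but gets much lower

cone_threshold = cone_floor + cone_range * np.exp(-t / cone_tau)
rod_threshold = rod_floor + rod_range * np.exp(-t / rod_tau)

# What you actually measure is whichever mechanism is more sensitive at each
# moment -- the lower of the two thresholds -- which produces the biphasic
# shape with a "break" where the slow mechanism takes over.
measured = np.minimum(cone_threshold, rod_threshold)

plt.plot(t, cone_threshold, "--", label="fast mechanism (cone-like)")
plt.plot(t, rod_threshold, "--", label="slow mechanism (rod-like)")
plt.plot(t, measured, "k", linewidth=2, label="measured threshold")
plt.xlabel("time in the dark (minutes)")
plt.ylabel("log detection threshold (arbitrary units)")
plt.legend()
plt.show()
```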

Photopic and scotopic vision
If we’re right about there being two kinds of stuff for sensing light in the retina, this experiment also hints at a neat way to examine how those two kinds of stuff work. If only one kind of light-sensing stuff helps you sense light under very dim conditions (what we’ll call scotopic viewing conditions), then dark adapting people for a long time and then testing their vision would allow us to measure what that mechanism can do compared to the mechanism that’s also useful under brighter (or photopic) viewing conditions. So let’s do it! Or rather, let’s hear about what other people found out when they did experiments like this.
            One big difference between photopic and scotopic vision is your ability to see color. Under scotopic viewing conditions, observers are largely unable to see color at all. Color information appears to be available primarily under photopic viewing conditions. We’ve already seen that color is related to the wavelength of light, so this might make us want to examine sensitivity to light under photopic and scotopic conditions more carefully. We’ve already seen that scotopic vision is more sensitive to light overall, but does it matter what wavelengths of light we’re talking about? More precisely, how does your sensitivity to wavelength change for photopic vision relative to scotopic vision? What we find is that the wavelengths you are most sensitive to change a little bit as a function of photopic vs. scotopic viewing (Figure 5): Your peak sensitivity as a function of wavelength shifts just a bit, a phenomenon called the Purkinje shift.

Figure 5 - At night (or under scotopic viewing conditions), the wavelengths you are most sensitive to are a little shorter than during the day (or under photopic viewing conditions). This is another hint that there are two different kinds of photosensitive material in the retina.
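
To make the Purkinje shift concrete, here is a rough sketch of the comparison Figure 5 depicts. The curves below are just Gaussian stand-ins, not the real scotopic and photopic luminosity functions; the only “real” ingredients are the commonly cited peak wavelengths of roughly 507 nm for scotopic vision and 555 nm for photopic vision.

```python
import numpy as np
import matplotlib.pyplot as plt

wavelength = np.linspace(400, 700, 301)     # nanometers

def sensitivity(peak_nm, width_nm=80.0):
    """Gaussian stand-in for a relative spectral sensitivity curve."""
    return np.exp(-0.5 * ((wavelength - peak_nm) / width_nm) ** 2)

scotopic = sensitivity(507)    # rod-driven curve, peak near 507 nm
photopic = sensitivity(555)    # cone-driven curve, peak near 555 nm

plt.plot(wavelength, scotopic, label="scotopic (peak ~507 nm)")
plt.plot(wavelength, photopic, label="photopic (peak ~555 nm)")
plt.xlabel("wavelength (nm)")
plt.ylabel("relative sensitivity")
plt.title("Purkinje shift: peak sensitivity moves toward shorter wavelengths")
plt.legend()
plt.show()
```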

The bottom line is therefore that our hunch from looking at dark adaptation graphs seems more and more reasonable: There may really be two kinds of light-sensitive stuff in the retina, one that governs what you can do under scotopic conditions and another that contributes to what you can do under photopic conditions. That would explain a lot of this phenomenology, at least, so now let’s get serious: Are there in fact two mechanisms for sensing light back there?

Observation and anatomy
Alright, let’s get serious – we’ve been avoiding making detailed observations of the retina itself because we said it was hard. It is trickier, but it’s not impossible. If we had a bit of retina that we could look at under a microscope, what would we see? Would it help us understand the observations that we’ve been making?
            What we’ll see back there (if we get past the vasculature and other stuff that sits between the retina and the pupil) are a bunch of cells called photoreceptors that are capable of sensing light. We can tell that they’re capable of sensing light because they have pigmented stuff on them that bleaches when exposed to light (see our discussion of Lab #3 for more about this!). Perhaps more importantly for our present discussion, these cells have different shapes if we look at a portion called their outer segment. One kind of cell has an outer segment that’s cylindrical (or rod-shaped), while the other kind has an outer segment that tapers (or is more cone-shaped). For lack of better words, let’s call these cells rods and cones (Figure 6a). By itself, this is pretty neat – we thought there could be two kinds of light-sensing cells in the retina based on our observations, and here they are!
            Now that we’ve found them, we might decide to try and figure out where those cells are in the retina by looking for rods and cones across the retinal surface. If we do that, we’ll find out something else that’s kind of neat: Rods and cones are distributed very differently across the retina. Cones are very dense in central vision (and drop off quickly as we move towards the periphery), while rods are absent from central vision and have a sort of rise-and-fall distribution as we move to the periphery (Figure 6b).

Figure 6 - At left, schematic views of what rod and cone cells look like. Note the differently shaped outer segments that give them their names. At right, a graph of how rods and cones are distributed across the retina. Central and peripheral vision differ functionally and also clearly differ in terms of which cells are in each part of the retina. (Rod/Cone diagram: Piotr Sliwa.Skela at en.wikibooks [Public domain], from Wikimedia Commons; Distribution diagram:Cmglee [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], from Wikimedia Commons).
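
If you’d like to play with the sort of distribution shown in Figure 6b, here is a cartoon version. The functional forms and numbers below are invented for illustration (and the real curves also have a gap at the blind spot, which I’ve left out); the qualitative features are the point: cones packed into the central few degrees, rods absent at the very center and peaking somewhere around 15-20 degrees of eccentricity.

```python
import numpy as np
import matplotlib.pyplot as plt

eccentricity = np.linspace(-60, 60, 601)     # degrees of visual angle from the fovea
abs_ecc = np.abs(eccentricity)

# Cartoon cone density: a sharp peak at the fovea plus a low pedestal in the periphery.
cones = 155_000 * np.exp(-0.5 * (eccentricity / 1.5) ** 2) + 5_000

# Cartoon rod density: zero at the fovea, rising to a peak around 18 degrees,
# then declining gradually toward the far periphery.
rods = 150_000 * (abs_ecc / 18) ** 2 * np.exp(2 * (1 - abs_ecc / 18))

plt.plot(eccentricity, cones, label="cones")
plt.plot(eccentricity, rods, label="rods")
plt.xlabel("eccentricity (degrees from the fovea)")
plt.ylabel("approx. receptor density (per mm$^2$)")
plt.legend()
plt.show()
```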

This is neat because it helps us link up a number of different observations with the anatomy of the retina. We knew that there was probably one kind of mechanism that governed scotopic vision and another that contributed to photopic vision. Now that we’ve seen rods and cones, it’s not a bad guess that the scotopic/photopic divide probably has to do with rod vs. cone vision. But which is which? We also knew that scotopic vision didn’t allow you to see color, and that color was also hard to see in your peripheral vision. And what cells are in your peripheral vision? Mostly rods, with only a sparse scattering of cones, so scotopic vision (and peripheral vision) almost certainly has to do with the light-sensing properties of the rods. Photopic vision must differ from scotopic vision largely because of the contribution of the cones, which are concentrated in central vision.


Oof – this is a lot to think about, but we’ve figured out some important things about the retina by combining observations of visual function with observations of anatomy. Our next step is to push this a bit further by trying to come up with a good computational description of what rods and cones are doing in the retina when they transduce light. This will mean talking about the phenomena we saw in Lab #3 more concretely, with an eye towards building a quantitative model of how photopigments respond to light.
