Visual processing in the retinal ganglion cells and the LGN
To continue discussing how your vision works, we’re going to have to abandon structures that are relatively easy to see (like your retina – which is tricky, but not impossible, to see directly) and start talking about parts of your visual system that aren’t so accessible. Our next stop will be cells in two different locations: retinal ganglion cells (or RGCs) and cells within a structure called the lateral geniculate nucleus (or LGN). We’ll be talking about these cells together because it turns out that they have very similar computational properties even though they’re located in different parts of your visual system. The retinal ganglion cells are located in a layer just “above” your photoreceptors if you’re looking at a cross-section of your eye, and receive input that ultimately comes from the rods and cones. The lateral geniculate nucleus is a good bit further along – the retinal ganglion cells send projections out of the eye towards the LGN along a path that we’ll talk about in a moment. First, however, I want to say a little bit about what we can learn about the RGCs just by considering their anatomy, including observations we can make about the cells themselves and observations about the manner in which the photoreceptors connect to them.


Figure 1 - An anatomical look at the RGCs in relationship to the rods and cones. By Henry Vandyke Carter - Henry Gray (1918), Anatomy of the Human Body, Bartleby.com: Gray's Anatomy, Plate 882. Public Domain, https://commons.wikimedia.org/w/index.php?curid=566821

The first thing that’s worth noting about the RGCs is that there turn out to be far fewer of them than there are photoreceptors. You have something like 130 million photoreceptors in each eye, but you only have something like 1 million RGCs. That’s sort of remarkable. It means that if we’re thinking about the way photoreceptors connect to RGCs to send information about the visual world along, it must be the case that multiple photoreceptors send information to the same RGC a good bit of the time. That is indeed the case, and we’ll call that many-to-one connection convergence to reflect the fact that multiple photoreceptors’ inputs converge, or meet, at the same RGC. If we take a look at how much convergence there is from photoreceptors to RGCs, we’ll see that it varies a good bit depending on where we’re looking in the retina. If we look at the connections made by photoreceptors right at the center of your retina (the fovea), we’ll find that there are some instances where a very small number of photoreceptors, maybe 5 or fewer, connect to a single RGC. If we look at connections further out in the periphery of your retina, we’ll find that there’s a much higher degree of convergence – perhaps 100 or more photoreceptors all sending their inputs to the same RGC. You may remember that your cones and rods are distributed unevenly across your retina (cones in the center, rods in the periphery), so what I’m really saying is that cones and rods seem to have different degrees of convergence as they send information on to the retinal ganglion layer. For now, I just want you to remember this, but we’ll return to this fact later.
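If it helps to see convergence written out, here is a minimal sketch in Python. The pool sizes (about 5 photoreceptors per foveal RGC, about 100 per peripheral RGC) come from the rough figures above; everything else, including the choice to summarize a pool with a simple average and the name rgc_response, is an illustrative assumption rather than a claim about what real RGCs compute.

# Toy illustration of convergence: many photoreceptor signals pooled into one RGC signal.
# The pool sizes follow the rough numbers in the text; the signals are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def rgc_response(photoreceptor_signals):
    # One RGC summarizes every photoreceptor that converges onto it;
    # a plain average is used here purely for illustration.
    return photoreceptor_signals.mean()

foveal_inputs = rng.random(5)        # a handful of cones feeding one foveal RGC
peripheral_inputs = rng.random(100)  # ~100 rods feeding one peripheral RGC

print("foveal RGC response:    ", rgc_response(foveal_inputs))
print("peripheral RGC response:", rgc_response(peripheral_inputs))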
Something else we’ll notice about the retinal ganglion cells if we’re taking a close anatomical look at them is that they come in different sizes. In particular, across the surface of the retina, you’ll find that RGCs get bigger as we move from the fovea to the periphery. Besides that general trend, you’ll also find that there are small RGCs and markedly larger RGCs at each eccentricity. We’ll call the small RGCs midget cells to reflect their size, and we’ll call the larger ones parasol cells (a name that refers to the broad, parasol-like spread of their dendrites). Both of these anatomical facts are sort of interesting, because they give us some anatomical hints at how visual function may be divided up: Just like looking closely at photoreceptors suggested that there might be rod vision and cone vision, here it seems like perhaps there’s something like midget vision and parasol vision based on the different sizes of cells in these locations, and possibly the type of connections they make with photoreceptors.
Figure 2 - Midget (left) and parasol (right) RGCs. By Stromdabomb - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=58319012

From here, I want to leave the retinal ganglion layer in favor of talking more about the anatomy of how information continues to move through your visual system. We’ve sort of exhausted what we can say about the RGCs just by looking at the cells there and the connections they make with the photoreceptors, so I’d like to move on by considering what the RGCs send information to and how they do it. This means moving onward to the lateral geniculate nucleus, or LGN.
First, how do we get to the LGN? That is, how do the cells we’ve just talked about in the retinal ganglion layer connect to the cells in this next part of the visual system? There are a lot of interesting things to be said about this, which tell us a good bit about how information is being re-packaged for this next stage in visual processing. The most conspicuous feature of the connectivity between the RGCs and the LGN is how information arrives at the left and right LGN from the left and right retinal ganglion layers. Remember, the left and right retinal ganglion layers are in the left and right eyes, so you might think that the left and right LGN simply receive inputs from the corresponding RGC layers. It’s more complicated than that, however, and represents a key re-mapping of the visual world into the brain. The left and right LGN receive input from the right and left half of visual space, respectively, through the projections drawn out in the figure below.

Figure 3 - The left and right LGN receive input from the right and left sides of visual space.
This set of projections means that each LGN receives input from each eye: Both the left and right eye have a view of the right half of visual space, so to get information from that right half of the environment to the left LGN, we have to package up information from both eyes and send that information to the same place. This means that one set of connections crosses over, or decussates, at a spot called the optic chiasm so that it arrives at the right place. The other set of connections does not do this, however, making for a more interesting set of ipsilateral and contralateral projections between the eyes and the LGN. Now that we’ve arrived at the LGN from the eye, there’s also a good bit more to see about how these connections work. To say more about this, we have to take note of the layers we can see in the LGN if we look at it in cross-section. There are six distinct layers in the LGN that we can see, and these layers also turn out to be an important part of the re-packaging of visual information from the retinal ganglion layer to the rest of the visual system. Specifically, the midget and parasol cells we described in the RGCs turn out to send their projections to specific parts of the LGN: Midget cells connect to the four layers at the top of the LGN, where relatively small cells live; parasol cells connect to the remaining two layers at the bottom of the LGN, where larger cells live. Because the cells in those bottom two layers are large, we’ll call them the magnocellular layers of the LGN (“magno” for large), and the superficial layers, with their smaller cells, will be called the parvocellular layers (“parvo” for small).
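Since this routing is really just bookkeeping, it can be written out as a few lines of Python. This is only a sketch of the wiring described above (the function names and string labels are mine): each hemifield goes to the opposite LGN, a projection crosses at the chiasm exactly when its eye sits on the same side as the hemifield it reports on, and midget and parasol cells target the small-celled and large-celled layers, respectively.

# A bookkeeping sketch of the eye-to-LGN wiring described in the text.

def target_lgn(hemifield):
    # Each half of visual space is routed to the LGN on the opposite side.
    return "right LGN" if hemifield == "left" else "left LGN"

def crosses_at_chiasm(eye, hemifield):
    # A projection decussates when its eye sits on the same side as the
    # hemifield it reports on (so it must cross to reach the opposite LGN).
    return eye == hemifield

def lgn_target_layers(rgc_type):
    # Midget RGCs feed the four small-celled (parvocellular) layers;
    # parasol RGCs feed the two large-celled (magnocellular) layers.
    return "parvocellular (4 layers)" if rgc_type == "midget" else "magnocellular (2 layers)"

# Example: a parasol cell in the left eye reporting on the right half of space.
print(target_lgn("right"))                 # left LGN
print(crosses_at_chiasm("left", "right"))  # False: this projection stays ipsilateral
print(lgn_target_layers("parasol"))        # magnocellular (2 layers)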

Figure 4 - The LGN has a layered structure such that midget RGCs project to the parvocellular layers, and the parasol RGCs project to the magnocellular layers. By Jimhutchins - The original image was uploaded on en.wikipedia as en:Image:Lateral_geniculate_nucleus.png, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=4642465

Finally, there’s something else we can say about the layers of the LGN and where they get their information from. Remember how I said that each LGN received input from both eyes? It turns out that different layers of the LGN receive input from different eyes, meaning that depending on where you are in the LGN, cells receive inputs from either midget or parasol RGCs, and from either the left or the right eye. This is a rather nice arrangement in which a lot of different kinds of visual information are being sorted and bundled up in an orderly way. Again, we continue to see a distinction between two kinds of vision: one that seems to depend heavily on foveal vision, cones, midget RGCs, and now parvocellular LGN cells, and another that seems to depend more on peripheral vision, rods, parasol RGCs, and magnocellular LGN cells. Still, we haven’t answered an important question about the RGCs and the LGN cells that we went to some trouble to address regarding the photoreceptors: What kinds of visual inputs do these cells respond to? In the case of the rods and cones, it turned out that different photoreceptors responded to different kinds of light. Specifically, different wavelengths of light were more or less able to make a particular photoreceptor respond. What’s the story with these new cells that we’ve encountered?
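Before we chase that question down, it may help to collect the two streams we keep bumping into in one place. The little Python summary below contains nothing beyond what has already been said; the "stream" labels and dictionary keys are just my shorthand.

# Bookkeeping summary of the two streams described so far.
pathways = {
    "parvocellular stream": {
        "retinal emphasis": "fovea / center",
        "photoreceptors": "cones",
        "RGC type": "midget",
        "LGN layers": "parvocellular (four superficial layers, small cells)",
    },
    "magnocellular stream": {
        "retinal emphasis": "periphery",
        "photoreceptors": "rods",
        "RGC type": "parasol",
        "LGN layers": "magnocellular (two deep layers, large cells)",
    },
}

for name, properties in pathways.items():
    print(name)
    for feature, value in properties.items():
        print(f"  {feature}: {value}")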
To start answering this question, we’re going to introduce a technique that will help us understand what’s happening in many subsequent parts of the visual system. We’ll see what happens to your ability to respond to different aspects of visual appearance if we lesion or damage some specific part of your visual system. The idea is that if we remove some part of your visual system and this leaves you unable to see some specific thing, that part of your visual system must have been helping you to see that information. So here’s the plan: To find out what the parvocellular and magnocellular layers of the LGN are doing for you, we’re going to get you to perform a simple visual task that relies either on color, on motion, or on depth, and see how well you do that task after we’ve damaged your magnocellular or parvocellular layers. Actually, I don’t mean you – I mean a non-human primate, like a macaque. Also, I don’t really mean that we’re going to damage the macaque’s LGN. Instead I mean that I’m going to tell you what happened when Dr. Peter Schiller carried out a version of this experiment.
Dr. Schiller trained monkeys to look at a small cross positioned in the center of a ring of discs, and keep looking at that cross until something in the ring of discs changed. Specifically, at some point one of those discs was going to look different from the others, and the monkey’s job was to go look at the one that was different. If the monkey did this correctly, some kind of reward (probably juice or a raisin) would follow. The key manipulations in this experiment were two-fold: 1) The disc that changed could look different because it was a different color, because it was moving in a different direction than the other discs, or because it was at a different depth than the others (floating closer to the monkey, for example). 2) The monkey could be trying to do this task without magnocellular layers or without parvocellular layers. The question was whether missing one set of layers or the other tended to make the task harder to do in some circumstances. The nutshell version of the results is this: Missing your parvocellular layers makes you bad at noticing a disc of a different color, while missing your magnocellular layers makes you bad at noticing a disc with different motion. This study thus demonstrates that there seems to be different visual information bundled up in these sets of LGN layers, which starts to give us some hints about what kinds of visual inputs might make these cells respond. The parvocellular layers seem to provide us with the kind of information that a nice postcard provides: A colorful, detailed image that doesn’t change over time. By comparison, the magnocellular layers provide us with something different: A black-and-white image that may not be as detailed, but that can change and move over time. If the parvocellular layers are kind of like a postcard, the magnocellular layers are a bit like a very old television.
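The nutshell result can be written down as a tiny lookup. This is just a cartoon of the pattern reported above, not a model of the data, and since the text only spells out the color and motion outcomes, the depth task is left out of it.

# Which lesion impairs which task, per the nutshell result described above.
impaired_by = {
    "parvocellular lesion": {"color"},
    "magnocellular lesion": {"motion"},
}

def can_still_do(lesion, task):
    # A task survives a lesion unless it depends on the damaged layers.
    return task not in impaired_by[lesion]

print(can_still_do("parvocellular lesion", "motion"))  # True: motion detection survives
print(can_still_do("magnocellular lesion", "motion"))  # False: motion detection is impaired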

Figure 5 - The parvocellular layers provide colorful, detailed, static representations of appearance. The magnocellular layers provide grayscale, blurrier, but dynamic representations of appearance.
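If you like seeing the analogy written out, here is a toy Python sketch of the two descriptions applied to a made-up little "movie" of random pixels. It is not a model of LGN cells, just the postcard-versus-old-television contrast made concrete: the parvocellular-like picture keeps color and detail but ignores change over time, while the magnocellular-like one keeps change over time but gives up color and fine detail. The frame sizes, the 2x2 pooling, and the function names are all arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(1)
frames = rng.random((10, 32, 32, 3))  # 10 frames of a 32x32-pixel RGB "movie"

def parvo_like(frames):
    # Postcard-like: full color and full resolution, but a single static snapshot.
    return frames[0]

def magno_like(frames):
    # Old-TV-like: drop color, pool 2x2 blocks of pixels (losing fine detail),
    # and keep only the frame-to-frame changes.
    gray = frames.mean(axis=-1)
    coarse = gray.reshape(len(frames), 16, 2, 16, 2).mean(axis=(2, 4))
    return np.diff(coarse, axis=0)

print(parvo_like(frames).shape)  # (32, 32, 3): one detailed color image
print(magno_like(frames).shape)  # (9, 16, 16): a coarse grayscale record of change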
To be more precise, however, we’re going to have to do something a little more fine-grained than damaging parvocellular or magnocellular layers of the LGN to see what happens. Instead, we’re going to have to look at individual cells in the LGN (or the RGCs) and try to figure out what you have to be looking at to make those cells produce some kind of a response. We’ll talk about that process in the next post and develop a model of that relationship between the visual world and the response of LGN cells at the same time.

