
Modeling RGCs and the LGN

To describe what cells in the retinal ganglion layer or the LGN respond to in more detail, we’re going to use results from single-unit recordings of those cells to guide us. This refers to a technique in which we place a small electrode into a cell that we’re interested in so that we can measure the changes in electric potential that correspond to the action potentials that most neurons produce when they send signals to other cells in the nervous system. If you don’t know what an action potential is, don’t panic: All I really need you to know about them is the following:

1)   When a cell is just kind of hanging out and doing nothing in particular, it produces action potentials (or “spikes”) every so often at a rate that we’ll call the base rate.
2)   Sometimes, a change in stimulation can make a cell fire more than the base rate.
3)   Sometimes, a change in stimulation can make a cell fire less than the base rate.

We’d really like to understand what circumstances make (2) and (3) happen so that we can work out what a cell is doing. In the case of the LGN, that means our plan will be to put an electrode in an LGN cell, display some images, and try to work out when we see either an increase or a decrease in the rate at which action potentials are being produced by that cell. Remember, if we were talking about photoreceptors, we’d expect that changing the wavelength of light would change how strongly the cell responded (though photoreceptors produce a different kind of signal than LGN cells – don’t worry about this for now, but it’s worth saying, I suppose). What kinds of inputs will change what an LGN cell or an RGC is doing? It’s hard to guess what will happen on your own, I think, so go check out the video at the link below to see for yourself:


What you should see in the video is a few important things. First, something that’s not terribly evident in the footage from this cell is a feature of LGN cells that we really have to point out: Each LGN cell (and each photoreceptor, and each RGC!) has some portion of visual space that it receives input from. That is, there are parts of the visual environment where changes in light can influence the cell’s response, and changes in light that happen anywhere else won’t do anything at all. We call this portion of visual space that a cell is “looking at” a receptive field, and each cell (more or less) has a different one. Some LGN cells only “look” at a tiny part of visual space right at the center of your vision, for example, while others might look at a larger chunk of the world somewhere in the periphery. This video starts with a receptive field that’s already been identified, so the experimenter knows where to put light to make something happen with this cell – the question is, what exactly happens next?


Figure 1 - Any one cell in the RGC or the LGN will only change its behavior based on the pattern of light inside a small portion of the visual field. This part of the visual field is called the receptive field of the cell.

What I hope is evident here is that there are some places where you can put light within this receptive field that make the cell produce more action potentials (or “fire” more). For example, light that appears right in the center of the display seems to lead to more firing. We’ll call these parts of the receptive field excitatory regions to reflect the fact that the cell is responding more when light lands here. You should have also noticed that there were places where you could put light that seemed to make the cell fire less. We’ll call these inhibitory regions to reflect the fact that light placed in these parts of the receptive field quiets down, or inhibits, the responses of the cell. By using this kind of single-unit recording, we can make a sort of map of the receptive field that tells us where the excitatory and the inhibitory regions are (Figure 2).


Figure 2 - Some regions of a receptive field are excitatory, which means that light in these regions will increase the cell's firing rate. Other regions are inhibitory, which means that light in these regions will decrease the cell's firing rate. (image from http://miladh.github.io/lgn-simulator/doc/recepfield.html).

What we’ll see if we look in the magnocellular layers of the LGN is that cells have one of two arrangements of these excitatory and inhibitory regions. (1) Cells may have an excitatory region in the center, with an inhibitory region surrounding it, or (2) Cells may have an inhibitory region in the center, with an excitatory region surrounding it. In both cases, we’ll refer to this layout as a center-surround structure to reflect the fact that the central part of the receptive field does something different from the surrounding portion. As for the two different kinds of cells, we’ll differentiate them by calling the first kind an on-center cell and the second kind an off-center cell.

These maps are sort of neat for a couple of reasons. First, compared to the photoreceptors, it turns out that so far it doesn’t matter what wavelengths of light we show to these cells, but it matters a lot what the spatial layout of the light is. That’s quite different from the story with the rods and cones and probably reflects something important about what these cells are doing that contributes something new to our vision. They’re also neat because they help us make some predictions about what will happen if we show this cell different patterns of light. A small, bright dot? That’ll make our on-center cell fire like mad, but will quiet down the off-center cell. A bright donut of light? The exact opposite. A big disc of light that fills up the whole receptive field? That might make both cells keep doing whatever they were doing because the excitatory and inhibitory responses may just cancel out.

But what about other kinds of input? Later in that video, we see a small bar being moved across the receptive field of the LGN cell, accompanied by some firing. Was that better or worse than the simple patterns we described above? What if we did something even more complicated? What will the cell do? What we’d like to come up with is a model that helps us understand how to translate between a pattern of light and the response that an LGN cell makes, much like we did for photoreceptors. To do this, we need to follow the same recipe that we did a few lessons ago: (1) Describe the input precisely, (2) Describe what a cell does precisely, (3) Find a rule for combining those two descriptions to yield a response.

To address the first step in that recipe, we need to develop a language for talking about the spatial layout of light in images. Thankfully, we’re going to use a language that you’re probably familiar with: We’re going to talk about patterns of light as arrays of pixels. The word “pixel” stands for “picture element” and refers to a small region of an image – usually a small square or rectangle – that has some color inside it. If you’ve ever played with Perler beads or completed a paint-by-numbers pattern, you should have a good sense for what this looks like – an image is divided up into a grid of squares, and each square is set to some value or color so that the grid of squares makes a decent approximation of the entire image. Depending on how many colors you have and how big your grid squares are, you can make some pretty cool patterns (Figure 3).


Figure 3 - A portrait of Abe Lincoln made with grayscale Perler beads.

We can’t just stop with colors, though – we need some numbers to make this work. Like the list of numbers we created to describe lights at different wavelengths, we’ll make a list of numbers arranged in this same kind of grid to describe these images made up of pixels. For now, let’s agree that we’re leaving color out of the picture and only working with grayscale images (and thus, magnocellular LGN cells!). The numbers we’ll put in each square will correspond to how bright or dark that square should be: A large number (say, 200 units) will mean that the square is bright white. A small number (say, 0 units) will mean that the square is black. Any picture we care to describe can thus be translated into an array of numbers, meaning that we have one of the main things we need to calculate with.

Figure 4 - We can assign a value to each pixel based on the intensity of light in that region. Large values are closer to white, smaller values are closer to black.
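If you’d like to see this in code, here’s a minimal sketch of what “an image is an array of numbers” means. The 5×5 image and its values are made up just for illustration; the 0-to-200 brightness scale follows the convention in the text rather than any standard image format:

```python
# A tiny 5x5 grayscale "image": 0 = black, 200 = bright white.
# (Real image formats typically use 0-255; the 0-200 scale here
# just matches the numbers used in the text.)
image = [
    [0,   0,   0,   0,   0],
    [0,   0, 200,   0,   0],
    [0, 200, 200, 200,   0],
    [0,   0, 200,   0,   0],
    [0,   0,   0,   0,   0],
]

# Each entry tells us how bright that little square should be.
print(image[2][2])  # the bright pixel in the middle -> 200
print(image[0][0])  # a dark corner pixel -> 0
```

That’s the whole trick: once a picture is a grid of numbers, we can do arithmetic with it.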

Now we have to play the same kind of trick for our LGN cells. When we did this for the photoreceptors, it was important that we made a similar kind of list to describe what a rod or cone did as we used to describe what was in the light that it might encounter. That’s going to be true here as well – we’re going to imagine that the LGN cell’s receptive field is also divided up into pixel-like regions, and the number of pixels we use will be determined by the size of the cell’s receptive field. But what goes in those squares? We know that some of these pixels are in excitatory regions and others are in inhibitory regions, so let’s put +’s in the excitatory parts and –’s in the inhibitory parts. We want these to be numbers, though, so let’s make them +1’s and -1’s (Figure 5).


Figure 5 - To describe what an LGN cell does, we assign positive numbers to excitatory regions and negative numbers to inhibitory regions.

Not bad. We have a quantitative language for images and a quantitative language for the LGN cell’s receptive field. Now, how do we put the two together? I’m going to argue that we’re in a very similar situation in the LGN as we were in the retina: We’re interested in something like the total amount of stimulation that the cell is receiving, which is going to mean combining all the different types of inputs that it can get. For rods and cones, this meant considering all the different wavelengths of light that could be shining on them. In the LGN, it means considering all the different places (or pixels) where light could be arriving. To get started, think just about the very center of an on-center LGN cell: If light lands there, it will make the cell fire more. If the light is brighter, it will make the cell fire even more still. That is, we’ll be adding to the overall response if there’s more light here. What about those inhibitory regions? Quite the opposite: Light there will reduce the response, and more light will mean more reduction. We’ll have to subtract from the total depending on what happens here. All this suggests a simple rule that we’ve seen before – compute a dot-product between the image and the LGN receptive field values. (Figure 6)


Figure 6 - To describe the response of a cell to a part of an image, we take a dot product between the array of numbers in the image and the array of numbers in the cell's receptive field.

Just like we did with light spectra and absorption spectra in the retina, we pair up corresponding items, multiply the members of each pair together, then sum up all the products. The result is a single number that reflects how much that LGN cell will respond to that image. For now, we’re not going to put units on this – instead, we’ll be interested in thinking about relative differences in these values as a way of ranking stimuli according to the responses they produce. Still, this is a quantitative way to describe what these cells are doing that allows us to make predictions based on patterns of light and the structure of receptive fields.
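Here’s that pair-multiply-sum recipe as a short Python sketch. The 3×3 receptive field, the 0-to-200 brightness values, and the “bright dot” image are all just the illustrative toy values we’ve been using:

```python
def cell_response(image, rf):
    """Dot product between an image patch and a receptive field:
    pair up corresponding pixels, multiply each pair, sum the products."""
    return sum(
        pixel * weight
        for img_row, rf_row in zip(image, rf)
        for pixel, weight in zip(img_row, rf_row)
    )

# A first-pass on-center receptive field: +1 center, -1 surround.
rf = [[-1, -1, -1],
      [-1, +1, -1],
      [-1, -1, -1]]

# A small bright dot right in the middle of the receptive field.
dot = [[0,   0, 0],
       [0, 200, 0],
       [0,   0, 0]]

print(cell_response(dot, rf))  # 200: a strong response to a centered dot
```

A bright dot lights up only the excitatory center, so every product is zero except the 200 × (+1) in the middle.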

There is one very boring little change we have to make, however, and it is both boring and technical and kind of important. Remember our reasoning about simple patterns of light and their impact on the LGN cell’s response? We mentioned that light that filled the whole field might make the cell do very little because the inhibitory and excitatory regions might cancel out. As we’ve written down the numbers for this cell’s receptive field, that won’t happen – we have a LOT of -1’s and just one +1, so the nays will have it. To fix this, here is a rule I want you to remember: The positive and negative values in a receptive field should add up to zero. This means changing our first pass at an LGN cell as follows:


Figure 7 - We change the numbers we assign to excitatory regions of a cell's receptive field so that positive values cancel out negative values when we add up all the numbers. This ensures that the cell's net response to a uniform light across its entire receptive field is zero.
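If you want to check that the book-keeping actually works, here’s a quick sketch with our toy 3×3 receptive field. The +8 in the center is not a measured value; it’s just whatever number makes the weights sum to zero when there is one excitatory pixel and eight inhibitory ones:

```python
# Enforce the rule: positive and negative weights must sum to zero.
# One excitatory pixel against eight inhibitory ones means the
# center gets +8, so that (+8) + 8 * (-1) = 0.
rf = [[-1, -1, -1],
      [-1, +8, -1],
      [-1, -1, -1]]

total_weight = sum(v for row in rf for v in row)
print(total_weight)  # 0

# Now a uniform field of light produces no net response, because
# every pixel contributes brightness * weight and the weights cancel.
uniform = [[200, 200, 200], [200, 200, 200], [200, 200, 200]]
net = sum(p * w for img_row, rf_row in zip(uniform, rf)
          for p, w in zip(img_row, rf_row))
print(net)  # 0
```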

I don’t want you to worry too much about exactly why we’re doing this, but it will make some book-keeping easier for now. The big deal is that we can start calculating stuff and using that to explain or predict what we see. In particular, this model suggests something really simple: On-center cells signal luminance increments (places where the image gets brighter) and off-center cells signal luminance decrements (places where the image gets darker). LGN cells signal local contrast – what’s changing spatially in a picture in terms of brightness? A large on-center response means that there’s something a little brighter here, while a large off-center response means that there’s something darker.

I want to put these predictions (and that analogy about detecting brightness and darkness) to the test with a simple pattern. This is a pattern called the Hermann grid, and it’s just made up of a bunch of dark squares on a white background. So far, so boring. However, with our model of LGN cells in hand, there are some neat surprises for us here. Consider, first of all, what an on-center LGN cell will do if one of those black squares fills up the whole excitatory region: Probably not much, right? This is exactly what this cell doesn’t want to see. The gaps between the squares, though – here’s where there’s a little more for an on-center cell to get excited about. Imagine putting one down right at the intersection of vertical and horizontal white lines: Now there’s some bright light in the excitatory center, which is good, but also some light in the inhibitory surround, which is less good. Still, the cell might produce some response here to signal a luminance increment. Slide over a bit to a spot that’s not at the crossroads, though, and check out what happens: Now the center is equally happy with all that white light, but the surround is even happier – there’s less bright light in the inhibitory regions! This on-center cell should fire more than the one at the crossroads! And what does that mean? It should mean that this spot looks a little brighter than the crossroads – or to put it another way, the crossroads should look a little darker (or grayer) than the spaces in between the squares. Take a look at the pattern on the left below and see what you think.

Figure 8 - ON-center cells respond less at the crossroads of the white lines at the right than they do at points between black squares. This leads to grey dots at those intersections that reflect that lower response. OFF-center cells do the same thing in the pattern on the right, leading to fuzzy light-colored dots.
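You can even check this prediction numerically with our toy model. The sketch below builds a small Hermann grid (black squares on a white background, separated by one-pixel white lines; all the sizes are arbitrary illustrative choices) and compares an on-center cell’s response at a crossroads to its response between squares:

```python
def hermann_grid(size=13, square=3):
    """Black squares (0) on a white (200) background, separated by
    one-pixel white lines."""
    period = square + 1  # one black square plus one white line
    return [[0 if (r % period != 0 and c % period != 0) else 200
             for c in range(size)]
            for r in range(size)]

def response_at(grid, row, col, rf):
    """On-center response of a 3x3 receptive field centered at (row, col):
    the usual pair-multiply-sum dot product over a 3x3 neighborhood."""
    return sum(grid[row + dr][col + dc] * rf[dr + 1][dc + 1]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1))

# Zero-sum on-center receptive field: +8 center balances eight -1's.
rf = [[-1, -1, -1],
      [-1, +8, -1],
      [-1, -1, -1]]

grid = hermann_grid()
crossroads = response_at(grid, 4, 4, rf)  # intersection of two white lines
between    = response_at(grid, 4, 2, rf)  # on a white line between squares
print(crossroads, between)  # the crossroads response is smaller
```

At the crossroads, four of the eight surround pixels are white, so there’s more inhibition than at a spot between squares, where only two surround pixels are white. The smaller crossroads response is exactly the “gray dot” prediction.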


Do you see little gray dots at the crossroads? They’re not really there, but your LGN is sure signaling that they are! It’s also doing so in a way that depends on the size of the receptive fields of those different cells. Move closer to the image and the gray dot you’re looking right at will disappear, but gray dots at crossroads in the periphery will persist or emerge. These calculations really do mean something: By computing what cells at these stages are doing because of their receptive field structure, we can explain how what you see is a function of what your visual system calculates, NOT what is physically there.
