
Population coding, adaptation, and aftereffects
In the last post, we arrived in primary visual cortex (or V1) and discovered that the cells there tended to respond to edges and lines tilted at different angles. Specifically, one cell might respond a lot to vertical lines, and respond less and less to lines that were tilted away from vertical. We called that pattern of responses as a function of edge orientation a tuning curve, and we said that different V1 cells would have different preferences for orientation: Some cells would like a vertical line best, others a horizontal line, and still others might like something in the middle. (Figure 1)


Figure 1 - You have many different cells in V1 with different orientation preferences.

We said that these cells were useful because they could start to tell us more about the actual shape of a boundary around some object or surface in an image. Here’s a question, though: If you have a bunch of these cells hanging out in V1, how do you work out the exact orientation of a line or edge that you’re looking at? That is, if there’s a line tilted at 10 degrees away from vertical, how do you measure that number ’10 degrees’ with the cells in your primary visual cortex?

You might hope to solve this by having a cell that’s tuned to exactly 10 degrees. This might not seem like a bad idea at first. After all, didn’t I say that we have a bunch of cells with different orientation preferences? Maybe you’ve got a 10-degree cell lying around and you can wait for it to respond a bunch – once it does, you can assume that you’re looking at a 10-degree line, right? It turns out this doesn’t work very well. To see why, I want you to consider two problems, one that’s easy to understand and one that’s a bit trickier.

The first (and simpler) problem is this: Maybe it doesn’t seem too bad to hope that you have a 10-degree cell in V1, but what if that line was at 5 degrees? 3 degrees? 72.25 degrees? If detecting a line with a particular orientation requires a cell with that specific orientation preference, you’re going to need a lot of cells! At the very least, it will put a serious limit on the granularity of measurements you can make. Measuring lines in 1-degree steps would mean keeping ~180 different groups of cells around, for example, and things get worse as you try to get more and more precise.

The second (and more complex) problem is this: Even if you had all of those cells, we’re going to run into a problem related to the principle of univariance that limited what the rods could tell us about color vision. Remember that in the retina, rods couldn’t tell us about color because they mixed up the intensity of a light and the wavelength of that light into one number. We couldn’t go backwards from that single number to get the two different things that went into it. We have the same kind of problem here because edges aren’t just at different orientations; they’re also at different contrasts. Consider a cell that likes vertical lines. If we show it a faint vertical line (one with slightly dark gray in the middle and slightly light gray on either side), it will respond more weakly than if the middle were black and the sides were white. It will ALSO respond more weakly to a line that’s in sharp black-and-white but tilted a bit away from vertical! Just like rods can’t tell us about wavelength, a single V1 cell can’t tell us much about orientation: Any level of response could be the result of seeing its favorite thing at low contrast or one of its less-favorite things at higher contrast. This means that we can’t hope to get by with just one cell. The question is, how can we do better with more than one?
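To see how ambiguous a single number is, here’s a tiny sketch in Python. The tuning curve is a made-up Gaussian – the function name, the 30-degree tuning width, and the contrast values are all hypothetical rather than measurements from a real neuron – but it shows two very different stimuli producing essentially the same response from one cell.

```python
import numpy as np

def v1_response(orientation_deg, contrast, preferred_deg=0.0, width_deg=30.0):
    """Toy tuning curve (hypothetical): the response grows with contrast and
    falls off as the line tilts away from the cell's preferred orientation.
    It returns a single number, just like a real firing rate would."""
    tuning = np.exp(-(orientation_deg - preferred_deg) ** 2 / (2 * width_deg ** 2))
    return contrast * tuning

# Two very different stimuli...
faint_vertical = v1_response(orientation_deg=0.0, contrast=0.5)   # low contrast, preferred tilt
strong_tilted = v1_response(orientation_deg=35.3, contrast=1.0)   # high contrast, non-preferred tilt

# ...produce nearly the same single response, so this one cell cannot tell
# us which of the two it just saw.
print(f"{faint_vertical:.2f} vs. {strong_tilted:.2f}")  # ~0.50 vs. ~0.50
```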

The idea we’re going to introduce here is called population coding because it involves using the responses of many different cells (a “population” of V1 neurons) to make a guess about what orientation we’re looking at right now. Specifically, we’re going to assume that we have a group of V1 cells with different orientation preferences that all respond to the same input image. Instead of just listening to one of these cells, we’re going to combine the responses of all of these cells to come up with an estimate of the orientation. This will allow us to make precise measurements of orientation even with cells that have very different preferences for different angles of line or edge. First, let’s think about the group of cells depicted below:

Figure 2 - A population of cells with different preferred orientations. A vertical bar will make the cell tuned to vertical orientations fire a lot, while others will fire less. (figure from Basic Vision, by Snowden, Thompson and Troscianko).

Each of these has a favorite orientation (spaced about 22.5 degrees apart) and a tuning curve of responses centered at that favorite angle. This means that if a line is tilted right at one of the cell’s favorite orientations, it will respond a lot while the others will respond less:

Figure 3 - The same population will fire differently if a 22.5-degree line is the stimulus. Now the cell that fires most is the cell with a 22.5-degree preference. Others fire less, shifting the "mass" of the response to the right of the spectrum. (figure from Basic Vision, by Snowden, Thompson and Troscianko).

Remember that it’s not enough to listen to the cell that likes this one best! However, I hope you can see that it helps a little to have the other cells responding, too. Think of it this way: Whatever the cells are seeing right now, it makes a 22.5-degree cell respond a lot, but makes a 0-degree and 45-degree cell respond less, and cells further away on the spectrum of orientation respond even less. All signs here point to this orientation being something at least close to the preferred orientation of the 22.5-degree cell, or else its buddies would be responding a lot more, right?

We need to actually formalize this reasoning a bit, though, so we can really calculate something. To do that, we’re going to need a different way to describe the response of each cell that allows us to combine them in a straightforward way. Specifically, we’re going to turn the response that a particular cell makes into a vector. If you haven’t worked with a vector before, you can picture it as an arrow that’s pointing in a specific direction with a specific magnitude or strength (Figure 4). I sort of like using wind as an analogy for vectors: The angle tells you which direction the wind is blowing, and the magnitude tells you how strong it is. The key feature of a vector that we need is that we have two numbers to work with instead of just one – that can mean either an angle and a magnitude OR we can use our trigonometric functions to turn that into a “rise” (y-value) and a “run” (x-value).


Figure 4 - We can describe the response of a V1 cell using a vector that has an orientation and a magnitude. These values can be turned into x- and y-values, too.

How do we turn the response of a cell into a vector? Each cell has a preferred orientation and will produce a response of some magnitude after being exposed to a particular line, so let’s use those two numbers. The cell’s preferred orientation will be the orientation of our vector, while the response it makes will be the magnitude. This means that our population of cells provides us with a population of vectors – one for each cell that makes a response to the input image. Our job is to find a way to combine them with one another to get one answer to our question, and I have a simple proposal: Let’s average together the x-values of each vector, and then average together the y-values of each vector. The result (or vector average) of doing this will be a new vector with a specific magnitude and orientation. That orientation will be our guess about the orientation of the line that we were looking at.
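To make the bookkeeping concrete, here’s a minimal sketch in Python of turning one cell’s response into a vector. The function name and the example numbers are hypothetical; the only convention assumed is that 0 degrees means vertical and positive tilts lean to the right, so the x-value (“run”) comes from the sine of the angle and the y-value (“rise”) from the cosine.

```python
import math

def response_vector(preferred_deg, response):
    """Turn one cell's response into a vector: the cell's preferred orientation
    sets the direction of the arrow, and the size of its response sets the
    arrow's length. Convention: 0 degrees = vertical, positive = leaning right."""
    angle = math.radians(preferred_deg)
    return response * math.sin(angle), response * math.cos(angle)

# A cell that prefers 22.5-degree tilts, responding with a magnitude of 0.8:
x, y = response_vector(preferred_deg=22.5, response=0.8)
print(f"run = {x:.2f}, rise = {y:.2f}")  # run ≈ 0.31 (rightward), rise ≈ 0.74 (upward)
```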


Figure 5 - By averaging the x-values and y-values of a group of vectors, we can create an average vector that has a new orientation and magnitude. 

Let’s see how this would work with a line that isn’t at the preferred orientation of any of our cells – let’s take one tilted 10 degrees or so from vertical, which none of these cells prefer. If we look at the distribution of responses, we can turn those into vectors, average the numbers and voila! We end up with a 10-degree vector that points in the direction of our real line. What’s even better about this trick is that it doesn’t matter if a line is low contrast or high contrast: The responses of all the cells will be scaled up or down by roughly the same factor, so the relative contributions of each vector stay about the same. Now we can listen to a group of cells to measure orientations that aren’t any cell’s favorite.

Figure 6 - A 10-degree line is no cell's preferred stimulus, but we can combine the vectors that describe each cell's response to obtain a vector pointing in the direction of 10 degrees of tilt. (figure from Basic Vision, by Snowden, Thompson and Troscianko).
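Here’s a rough sketch of the whole procedure with a toy population of eight cells. Everything in it is made up for illustration (Gaussian tuning curves, a 25-degree tuning width, preferred orientations spaced 22.5 degrees apart), and this simple version ignores the fact that orientation really wraps around every 180 degrees, but it shows the vector average landing near 10 degrees whether the line’s contrast is high or low.

```python
import numpy as np

# A toy population: hypothetical cells with preferred orientations spaced
# 22.5 degrees apart (0 = vertical, positive = leaning right).
preferred = np.arange(-90.0, 90.0, 22.5)

def tuning(stimulus_deg, contrast=1.0, width=25.0):
    """Each cell's response to a line at stimulus_deg: a made-up Gaussian
    tuning curve whose overall height scales with contrast."""
    return contrast * np.exp(-(stimulus_deg - preferred) ** 2 / (2 * width ** 2))

def decode_orientation(responses):
    """Population vector average: point each cell's vector along its preferred
    orientation, scale it by the cell's response, average the x- and y-values,
    and read off the tilt of the resulting average vector."""
    angles = np.radians(preferred)
    x = np.mean(responses * np.sin(angles))
    y = np.mean(responses * np.cos(angles))
    return np.degrees(np.arctan2(x, y))

print(decode_orientation(tuning(10.0)))                 # ~10 degrees
print(decode_orientation(tuning(10.0, contrast=0.25)))  # still ~10 degrees
```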

Now that we have a mathematical tool for calculating what V1 cells can tell us about orientation, let’s put it to a bit of a test. We’re going to do that by using a tool that’s a very important means of examining what cells in the visual system are doing without sticking an electrode into them – a tool called adaptation. In an adaptation experiment, we try to “tire out” a cell or group of cells by asking it to keep responding for an extended period of time. After a while, cells that were responding a lot start to respond less and less because they more or less get worn out. The more work a cell is doing (the larger its response), the more tired it gets from being asked to keep going. We can tire out cells in your visual system by having you look at something for a long time, which will make the cells that prefer that pattern fire a lot at first, but less and less as they keep being shown the same thing. This will make the responses of cells with different orientation preferences start to look less like the picture at left, and more like the picture at right:

Figure 7 - Staring at a 22.5-degree line for a long time will make the cell tuned to that orientation tired. Other cells that respond to that line will also get tired, but not quite as worn out. (figure from Basic Vision, by Snowden, Thompson and Troscianko).

But wait! If we’ve changed those responses by tiring out some cells, what happens to the estimate that the population makes of a new line or edge? Our vectors will look different because some cells can’t respond as much as they used to. For example, if you stare at a 22.5-degree right-leaning line for a long time, look at how that population responds to something vertical: What used to be a bunch of vectors that averaged out to a vertical orientation now looks like a bunch of vectors that will average out to something a little left of vertical. Wait – what does this mean? It means that we predict that looking at a right-leaning line for a long time will start to make a vertical line look like it’s leaning to the left. Another way to put this is to say that adapting to a rightward tilt will lead to an aftereffect of vertical lines having a leftward tilt.

Figure 8 - Because some cells are tired after looking at a rightward-leaning line, the response to a vertical line will indicate that the observer is looking at a line leaning slightly to the left. (figure from Basic Vision, by Snowden, Thompson and Troscianko).
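We can sketch that prediction with the same kind of toy population (again, all of the numbers are hypothetical, including the assumption that adapted cells lose up to half of their responsiveness): after “adapting” the cells near 22.5 degrees, the vector average for a truly vertical line comes out a few degrees to the left of vertical.

```python
import numpy as np

# Same toy population as in the sketch above: hypothetical Gaussian-tuned
# cells with preferred orientations spaced 22.5 degrees apart (0 = vertical).
preferred = np.arange(-90.0, 90.0, 22.5)

def tuning(stimulus_deg, contrast=1.0, width=25.0):
    return contrast * np.exp(-(stimulus_deg - preferred) ** 2 / (2 * width ** 2))

def decode_orientation(responses):
    angles = np.radians(preferred)
    x = np.mean(responses * np.sin(angles))
    y = np.mean(responses * np.cos(angles))
    return np.degrees(np.arctan2(x, y))

# Hypothetical fatigue after adapting to a 22.5-degree line: cells near the
# adapted orientation lose up to half of their responsiveness, and cells
# further away are affected less and less.
adapted_deg = 22.5
fatigue = 1.0 - 0.5 * np.exp(-(preferred - adapted_deg) ** 2 / (2 * 25.0 ** 2))

print(decode_orientation(tuning(0.0)))            # fresh population: ~0 degrees (vertical)
print(decode_orientation(fatigue * tuning(0.0)))  # tired population: ~ -5 degrees (leaning left)
```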

Are we right about this? Try it for yourself: Look at the purple bar for about 20-30 seconds, and then move your eyes to the purple dot and see what the stripes on that side look like. If you managed to tire your V1 cells out enough, they should look tilted in the other direction.

Figure 9 - You can see the predicted effects of adapting a population code by staring at the purple bar at the right for a long time (30 seconds or so) and then looking at the purple dot at the left. The vertical lines on the left should look like they're leaning in the opposite direction as their counterparts to the right. (figure from Basic Vision, by Snowden, Thompson and Troscianko).

Population codes are a real tool that your visual system uses to make reasonably precise measurements with a small number of unique sensors. Not only can we calculate how information from those sensors is combined to make good estimates, we can also use the same techniques to work out how changes in the population’s properties affect what you see in real images. Adaptation effects aren’t just limited to tilted lines, either – lots of other cells in your visual system adapt as you keep looking at something for a long time, and in each case we can use aftereffects to understand a lot about how populations encode different aspects of visual appearance.

Next, we’re going to consider another way to combine information from V1 cells to achieve a very different goal. Rather than trying to get good estimates of orientation from our V1 cells, we’re going to try to find ways to cope with some of the complexities of measuring something meaningful about objects that can change their appearance over time.


