Showing posts from February 17, 2019

Logic gates, complex cells and perceptual constancy

In the last post, we demonstrated how we could use the information from a group of V1 cells (a population code) to make a good guess about the orientation of a line or edge that those cells were responding to. This allowed us to solve some problems related to measuring values of an image feature (in this case, orientation) using a relatively small number of sensors. The use of a population code allowed us to measure orientation fairly precisely even if we only had a few cells with a few different preferred orientations, and it also allowed us to make predictions about what would happen if some of those cells changed the way they responded to a pattern due to adaptation. At this point, I’d say we have a decent grasp of how to measure some simple spatial features in images: We know how to encode wavelength information with photoreceptors, we know how to measure local increments and decrements of light with cells ...
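The post describes this read-out only in words, but the idea is easy to sketch in code. Here is a minimal example (mine, not the post's) with six model V1 cells whose Gaussian tuning curves and 30-degree bandwidth are made-up values: a population-vector average of the cells' preferred orientations, weighted by their responses, recovers the stimulus orientation reasonably well even with so few cells.

```python
import numpy as np

# Hypothetical model cells: preferred orientations and tuning width are
# illustrative assumptions, not values from the post.
preferred = np.array([0., 30., 60., 90., 120., 150.])  # preferred orientations (deg)
bandwidth = 30.0                                        # tuning width (deg)

def circ_diff(a, b):
    """Smallest difference between two orientations (period 180 deg)."""
    return (a - b + 90.0) % 180.0 - 90.0

def responses(stim_ori):
    """Gaussian tuning: each cell fires most for its preferred orientation."""
    return np.exp(-0.5 * (circ_diff(stim_ori, preferred) / bandwidth) ** 2)

def decode(r):
    """Population-vector estimate: response-weighted average of preferred
    orientations, computed in doubled-angle space because orientation
    wraps around every 180 degrees."""
    angles = np.deg2rad(2 * preferred)
    vec = np.sum(r * np.exp(1j * angles))
    return (np.rad2deg(np.angle(vec)) / 2) % 180

r = responses(72.0)
print(decode(r))  # close to 72, even though no cell prefers 72 exactly
```

Doubling the angles before averaging is just a way of handling the wrap-around: 1° and 179° are nearly the same orientation, and the doubled-angle trick keeps them close together in the average.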

Population coding, adaptation, and aftereffects

In the last post, we arrived in primary visual cortex (or V1) and discovered that the cells there tended to respond to edges and lines tilted at different angles. Specifically, one cell might respond a lot to vertical lines, and respond less and less to lines that were tilted away from vertical. We called that pattern of responses as a function of edge orientation a tuning curve, and we said that different V1 cells would have different preferences for orientation: Some cells would like a vertical line best, others a horizontal line, and still others might like something in the middle (Figure 1).

Figure 1 - You have many different cells in V1 with different orientation preferences.

We said that these cells were useful because they could start to tell us more about the actual shape of a boundary around some object or surface in an image. Here’s a question, though: If you have a bunch of these cells hanging out in V1, h...
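If you want to see what such a tuning curve looks like concretely, here is a small sketch (again mine, not the post's) of one model cell that "prefers" vertical lines. The Gaussian shape and 25-degree width are assumptions chosen purely for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# One model V1 cell that prefers vertical (90 deg); its response falls off
# as the line tilts away from vertical. Shape and width are illustrative.
orientations = np.linspace(0, 180, 181)
preferred, width = 90.0, 25.0
d = (orientations - preferred + 90) % 180 - 90   # orientation difference, 180-deg period
response = np.exp(-0.5 * (d / width) ** 2)

plt.plot(orientations, response)
plt.xlabel("line orientation (deg)")
plt.ylabel("relative response")
plt.title("Orientation tuning curve of one model V1 cell")
plt.show()
```

A population like the one in Figure 1 is just many copies of this curve, each shifted to a different preferred orientation.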

Spatial vision in V1

In the lateral geniculate nucleus (or LGN), we saw the beginnings of what we’ll call spatial vision, or the measurement of patterns of light by the visual system. Compared to what the photoreceptors do, LGN cells (and their predecessors in the visual system, the retinal ganglion cells) respond best when light is in specific arrangements. Within the receptive field of an LGN cell, there are both excitatory and inhibitory regions arranged into a center-surround structure. This ends up meaning that LGN cells respond most vigorously when there is either a spot of light surrounded by a darker region (an on-center cell) or a darker spot surrounded by light (an off-center cell). In the magnocellular layer, these cells only measure light/dark contrast. In the parvocellular layer, the excitatory and inhibitory regions are also wavelength-specific: An LGN cell may respond best when there is red light in the surround and green light in the center, for example. These ...
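One common way to model this kind of center-surround receptive field, not spelled out in the post but consistent with its description, is a difference of Gaussians: a narrow excitatory center minus a broad inhibitory surround. The sketch below uses made-up sizes (a 2-pixel center, a 6-pixel surround, a 21x21 window) and shows that such an on-center cell responds to a small bright spot but not to uniform light.

```python
import numpy as np

# Minimal difference-of-Gaussians sketch of an on-center, off-surround cell.
# All sizes are illustrative assumptions, not values from the post.
size = 21
y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
r2 = x**2 + y**2

center = np.exp(-r2 / (2 * 2.0**2))      # narrow excitatory center
surround = np.exp(-r2 / (2 * 6.0**2))    # broad inhibitory surround
center /= center.sum()                   # normalize so excitation and
surround /= surround.sum()               # inhibition balance exactly
rf = center - surround                   # on-center cell; negate for off-center

def respond(patch):
    """Response of the model cell: receptive field weights times the image patch."""
    return float(np.sum(rf * patch))

uniform = np.ones((size, size))          # evenly lit field
spot = np.where(r2 <= 4, 1.0, 0.0)       # small bright spot on darkness

print(respond(uniform))  # ~0: excitation and inhibition cancel under uniform light
print(respond(spot))     # > 0: light falls mostly on the excitatory center
```

The same structure carries over to the wavelength-specific parvocellular case: instead of light-versus-dark, the center and surround weights would apply to different cone signals (for example, an excitatory green center and an inhibitory red surround).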