Logic gates, complex cells and perceptual constancy

In the last post, we demonstrated how we could use the information from a group of V1 cells (a population code) to make a good guess about the orientation of a line or edge that those cells were responding to. This allowed us to solve the problem of measuring the value of an image feature (in this case, orientation) using a relatively small number of sensors. The use of a population code allowed us to measure orientation fairly precisely even when we had only a few cells with a few different preferred orientations, and it also allowed us to predict what would happen if some of those cells changed how they responded to a pattern due to adaptation. At this point, I’d say we have a decent grasp of how to measure some simple spatial features in images: We know how to encode wavelength information with photoreceptors, we know how to measure local increments and decrements of light with cells ...
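To make that decoding step concrete, here is a minimal sketch of one common population-code readout, the population vector: each cell "votes" for its preferred orientation, weighted by its firing rate. This is an illustrative assumption on my part, not necessarily the exact scheme from the earlier post; the tuning-curve shape, the number of cells, and the parameter values are all placeholders. One wrinkle the code handles is that orientation repeats every 180 degrees, so angles are doubled before averaging and halved afterward.

```python
import numpy as np

# Hypothetical population: 6 cells with evenly spaced preferred orientations.
preferred = np.deg2rad(np.arange(0, 180, 30))

def responses(theta, kappa=2.0):
    """Firing rates of the population to an edge at orientation theta (radians).
    Assumes von Mises-like tuning; kappa (tuning width) is illustrative."""
    return np.exp(kappa * (np.cos(2 * (theta - preferred)) - 1))

def decode(rates):
    """Population-vector estimate: sum each cell's preferred orientation
    (as a unit vector on the doubled-angle circle), weighted by its rate,
    then take the angle of the resulting vector and halve it."""
    vector = np.sum(rates * np.exp(2j * preferred))
    return np.rad2deg(np.angle(vector) / 2) % 180

true_theta = 40.0
rates = responses(np.deg2rad(true_theta))
print(f"true: {true_theta:.1f} deg, decoded: {decode(rates):.1f} deg")
```

With symmetric tuning curves and evenly spaced preferences, the decoded value lands very close to the true orientation even with only six cells, which is the point made above: a handful of broadly tuned sensors can jointly measure a feature far more precisely than any one of them alone.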