Lab #3 - Photopigments

Our next task is to work out how the image formed at the back of a pinhole camera gets translated into a signal your nervous system can work with. We'll start addressing this question by examining photopigments in Lab #3. To complete this lab, you'll need access to some sunprint paper, which is available from a variety of sources. Here's where I bought mine: http://www.sunprints.org.

You can find the lab documents at the link below:

https://drive.google.com/file/d/17MVZqvyiCRdT_Qu5n_CtK3rVcUP0zoOG/view

When you're done, move on to the Lab #4 post to make a few more observations that will tell us a little more about the retina. Afterwards, we'll try to put all of this together into a more comprehensive description of what's happening at the back of the eye.
