
Lab #2 - Image formation in a pinhole camera

To help you understand how your eye helps you organize the patterns of light available in the environment, the exercises in Lab #2 involve building a simple model of the eye called a pinhole camera. By making this model, you'll be able to observe how image formation happens in an eye and how you can manipulate different qualities of the image by changing features of the model.
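Before you build the model, it may help to see the geometry you'll be exploring. An ideal pinhole projects light by similar triangles: an object of height h at distance d_o in front of the pinhole forms an inverted image of height h × (d_i / d_o) on a screen d_i behind the pinhole. A minimal sketch of that relationship (the function name and the example numbers are illustrative, not taken from the lab document):

```python
def pinhole_image_height(object_height, object_distance, screen_distance):
    """Image height for an ideal pinhole, via similar triangles.

    The image is inverted, so the returned height is negative.
    All arguments are in the same unit of length (e.g., meters).
    """
    return -object_height * (screen_distance / object_distance)

# A 1.8 m tall person standing 5 m from the pinhole,
# with the screen 0.1 m behind the aperture:
print(pinhole_image_height(1.8, 5.0, 0.1))  # -0.036 -> a 3.6 cm, upside-down image
```

Notice what this predicts for the lab: moving the screen farther from the pinhole (increasing `screen_distance`) magnifies the image, which is one of the image qualities you can manipulate with your model.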

You can find the document describing the lab exercises at the following link:

https://drive.google.com/file/d/1hlqnJT5pdG6A6n2dc6fYI4eJ5gLBiPbU/view


When you're done, move on to the next post so we can discuss what you see with a pinhole camera and what it tells us about how real eyes work.
