CMPSCI 670: Computer Vision, Fall 2014

Homework 2: Photometric Stereo

Due date: October 6 (before the class starts)

Downloads: code.zip (updated 9/29), data.zip

There was a typo in the evalCode.m file in code.zip. Download the latest one from the above link.


Instructions

The goal of this assignment is to implement shape from shading as described in Lecture 5. This is also described in the shape-from-shading section (Sec. 2.2) of the Forsyth and Ponce book (pdf link for this section).
  1. Download the code.zip and data.zip from the links at the top of the page. The data directory contains 64 images of each of four subjects from the Yale Face database. The light source directions are encoded in the file names. The code consists of several .m functions. Your task will be to add some code to the top-level script, evalCode.m, and to fill in the code in prepareImage.m, photometricStereo.m, and getSurface.m, as explained below. The remaining files are utilities to load the input data and display the output.

    Hint: You can start by setting subjectName='debug', which creates images from a toy scene. Debug your code against this toy scene before you try the faces.

  2. For each subject (subdirectory in data), read in the images and light source directions. This is accomplished by the function loadFaceImages.m provided in the zip file (which, in turn, calls getpgmraw.m to read the PGM files). loadFaceImages returns the images for the 64 light source directions and an ambient image (i.e., an image taken with all the light sources turned off).

  3. Preprocess the data: subtract the ambient image from each image in the light source stack, set any negative values to zero, and rescale the resulting intensities to the range 0 to 1 (they are originally between 0 and 255).

    Hint: These operations can be done without using any loops whatsoever. You may want to look into MATLAB's bsxfun function (see the sketch below).
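
    For example, a minimal vectorized sketch (the variable names are illustrative: imArray is assumed to be the h x w x 64 image stack and ambientImage the h x w ambient image returned by loadFaceImages):

        imArray      = double(imArray);       % ensure floating point (in case images are stored as uint8)
        ambientImage = double(ambientImage);
        imArray = bsxfun(@minus, imArray, ambientImage);   % subtract the ambient image from every slice
        imArray = max(imArray, 0);                         % clip negative values to zero
        imArray = imArray / 255;                           % rescale from [0, 255] to [0, 1]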

  4. Estimate the albedo and surface normals. For this, you need to fill in code in photometricStereo.m, which is a function taking as input the image stack corresponding to the different light source directions and the matrix of light source directions, and returning an albedo image and surface normal estimates. The latter should be stored in a three-dimensional matrix. That is, if your original image dimensions are h x w, the surface normal matrix should be h x w x 3, where the third dimension corresponds to the x-, y-, and z-components of the normals. To solve for the albedo and the normals, you will need to set up a linear system as shown in slide 31 of Lecture 5 (a sketch follows the hints below).

    Hints:
    • To get the least-squares solution of a linear system, use MATLAB's backslash operator. That is, the solution to Ax = b is given by x = A\b.
    • If you directly implement the formulation of slide 31 of the lecture, you will have to loop over every image pixel and separately solve a linear system in each iteration. There is a way to get all the solutions at once by stacking the unknown g vectors for every pixel into a 3 x npixels matrix and getting all the solutions with a single application of the backslash operator.
    • You will most likely need to reshape your data in various ways before and after solving the linear system. Useful MATLAB functions for this include reshape and cat.
    • You may also need to use element-wise operations. For example, for two equal-size matrices X and Y, X .* Y multiplies corresponding elements, and X.^2 squares every element. As before, bsxfun can also be a very useful function here.
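
    For example, a rough vectorized sketch of this step (variable names are illustrative, and the argument names/order in the provided photometricStereo.m stub may differ; imArray is assumed to be h x w x n and lightDirs n x 3):

        [h, w, n] = size(imArray);
        I = reshape(imArray, h*w, n)';            % n x npixels; each column holds one pixel's n intensities
        G = lightDirs \ I;                        % 3 x npixels least-squares solution; each column is g = albedo * normal
        albedo = sqrt(sum(G.^2, 1));              % the albedo is the length of each g vector
        N = bsxfun(@rdivide, G, albedo + eps);    % unit normals (eps guards against division by zero)
        albedoImage    = reshape(albedo, h, w);
        surfaceNormals = reshape(N', h, w, 3);    % x-, y-, z-components along the third dimension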

  5. Compute the surface height map by integration. The method is shown in slide 34 of Lecture 5, except that instead of continuous integration of the partial derivatives over a path, you will simply be summing their discrete values. Your code implementing the integration should go in the getSurface.m file. As stated in the slide, to get the best results, you should compute integrals over multiple paths and average the results. You should implement the following variants of integration (a sketch of variant a appears after the list):
    a. Integrating first the rows, then the columns. That is, the path first goes along the top row to the pixel's column, and then vertically down that column to the pixel. It is possible to implement this without nested loops using the cumsum function.
    b. Integrating first along the columns, then the rows.
    c. Average of the first two options.
    d. Average of multiple random paths. For this, it is fine to use nested loops. You should determine the number of paths experimentally.
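
    As an illustration, here is a sketch of variant a only (variable names are illustrative, surfaceNormals is assumed to be the h x w x 3 array from the previous step, and the sign/axis conventions may need adjusting to match the slides):

        fx = surfaceNormals(:,:,1) ./ surfaceNormals(:,:,3);   % partial derivative of the height along x (up to sign)
        fy = surfaceNormals(:,:,2) ./ surfaceNormals(:,:,3);   % partial derivative of the height along y (up to sign)
        % Sum fx along the top row, then sum fy down each column.
        heightMap = bsxfun(@plus, cumsum(fx(1,:), 2), cumsum(fy, 1));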

  6. Display the results using the functions displayOutput and plotSurfaceNormals included in the zip archive.

Extra Credit

On this assignment, there are not too many opportunities for "easy" extra credit. That said, feel free to explore extensions of your own. If you complete any work for extra credit, be sure to clearly mark that work in your report.


Grading checklist

You should turn in both your code and a report discussing your solution and results. For full credit, your report should include a section for each of the following questions:
  1. Briefly describe your implemented solution, focusing especially on the more "non-trivial" or interesting parts of the solution. What implementation choices did you make, and how did they affect the quality of the result and the speed of computation? What are some artifacts and/or limitations of your implementation, and what are possible reasons for them?

  2. Discuss the differences between the integration methods you have implemented for #5 above. Specifically, you should choose one subject, display the outputs for all of a-d (be sure to choose viewpoints that make the differences especially visible), and discuss which method produces the best results and why. You should also compare the running times of the different approaches. For timing, you can use MATLAB's tic and toc functions (a minimal example follows). For the remaining subjects (see below), it is sufficient to simply show the output of your best method, and it is not necessary to give running times.
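
    For example, a minimal timing pattern (the getSurface call and its arguments are placeholders for whatever interface you implement):

        tic;
        heightMap = getSurface(surfaceNormals, 'random');   % hypothetical arguments
        elapsedSeconds = toc;
        fprintf('random-path integration took %.2f seconds\n', elapsedSeconds);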

  3. For every subject, display your estimated albedo maps and screenshots of height maps (use displayOutput and plotSurfaceNormals). When inserting results images into your report, you should resize/compress them appropriately to keep the file size manageable -- but make sure that the correctness and quality of your output can be clearly and easily judged. For the 3D screenshots, be sure to choose a viewpoint that makes the structure as clear as possible (and/or feel free to include screenshots from multiple viewpoints). You will not receive credit for any results you have obtained, but failed to include directly in the report PDF file.

  4. Discuss how the Yale Face data violate the assumptions of the shape-from-shading method covered in the slides. What features of the data can contribute to errors in the results? Feel free to include specific input images to illustrate your points. Choose one subject and attempt to select a subset of the input images (i.e., lighting directions) that better matches the assumptions of the method. Show your results for that subset and discuss whether you were able to get any improvement over a reconstruction computed from all the images.

Instructions for submitting the assignment

Create a hw2.zip file in your edlab account at

/courses/cs600/cs670/username/hw2.zip

containing your report (report.pdf) and your code. Note that unzipping /courses/cs600/cs670/username/hw2.zip should produce files such as /courses/cs600/cs670/username/hw2/report.pdf, etc.

Also include any additional code (e.g., for extra credit) and explain in the report what each file does.


Acknowledgements

This homework is based on a similar one made by Lana Lazebnik at UIUC.