Instructions for the Project
- Projects should be done groupwise - in the same groups as for your homeworks.
- All group members should have individual, unique contributions to the project (which must be stated clearly in your final report), and yet each should be aware of what the other group members did.
Note that this will be tested during the viva.
- Your project work will be either the implementation of a research paper or your own idea. In either case, if you would like feedback on whether your project is too trivial or too difficult, please speak to me.
- Your project work may contribute to your thesis, but the work you submit for this course must be done in this semester. It should be a separate deliverable.
- Project due date: the week before finals (possibly extensible to the week after finals). You will be required to submit a report and appear for a viva, during which you will demo your project and answer questions about it.
The final report should clearly but briefly describe the problem statement, the main algorithm(s) you implemented, and the datasets on which they were tested, followed by a detailed description of the results and a conclusion that analyzes the good and bad aspects of your implementation or of the algorithm itself.
- You may use MATLAB/C/C++/Java/Python + any packages (OpenCV, ITK, etc.) for your project. But merely invoking calls to someone else's software is not substance enough: you should have your own non-trivial coding component. If software for the research paper you implement is already available, you should use it only for comparison's sake - you will be expected to implement the paper on your own. Please discuss with me if you need any clarification for your specific case.
- Due date for deciding the topic: 20th February. This submission will be in the form of a 0-mark assignment. You will be required to upload a brief document that lists the following details: the names and ID numbers of all group members, the specific paper(s) you will implement, the datasets on which you will try the algorithms from the paper(s), and the evaluation/validation strategy that you will adopt to test whether your implementations give correct/sensible results.
Please make sure you think of a good plan for the algorithms you will implement.
List of Project Topics
- Philip H. S. Torr, Andrew Zisserman: MLESAC: A New Robust Estimator with Application to Estimating Image Geometry. Computer Vision and Image Understanding 78(1): 138-156 (2000)
- Gloria Haro, Antoni Buades, Jean-Michel Morel: Photographing Paintings by Image Fusion. SIAM J. Imaging Sciences 5(3): 1055-1087 (2012)
- Buades et al., "A note on multi-image denoising", ftp://ftp.math.ucla.edu/pub/camreport/cam09-62.pdf
- Image mosaicing: we have done parts of this in our assignments, but you can try building further by referring to the paper below:
Matthew Brown, David G. Lowe: Automatic Panoramic Image Stitching using Invariant Features. International Journal of Computer Vision 74(1): 59-73 (2007)
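To get a feel for the first stage, here is a minimal sketch of pairwise SIFT matching and RANSAC homography estimation with OpenCV; it is not Brown and Lowe's full method (which adds probabilistic match verification, bundle adjustment and multi-band blending), and the 0.7 ratio threshold is just a common default:
import cv2
import numpy as np

def pairwise_homography(img1, img2):
    # Estimate the homography mapping img2 onto img1 from SIFT matches.
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)
    # Lowe's ratio test to keep only distinctive matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = [m for m, n in matcher.knnMatch(d2, d1, k=2) if m.distance < 0.7 * n.distance]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Robust homography fit; mask flags the RANSAC inliers.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, int(mask.sum())
Warping img2 by H and compositing it over img1 gives a two-image panorama; chaining such homographies (and then refining them jointly) extends this to more images.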
- In class, we have seen some ideas on single view metrology. Implement those ideas on actual camera images, assuming some objects such as buildings, poles or doors with known height. You can also implement other ideas, such as those documented in the following papers:
Antonio Criminisi, Ian D. Reid, Andrew Zisserman: Single View Metrology. International Journal of Computer Vision 40(2): 123-148 (2000)
OR
Antonio Criminisi: Single-View Metrology: Algorithms and Applications. DAGM-Symposium 2002: 224-239
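As a rough illustration of the computation involved, the sketch below assumes you have already estimated the vertical vanishing point v, the vanishing line l of the ground plane, and the base/top image points of one reference object of known height; it evaluates the height relation alpha*Z = -||b x t|| / ((l.b) ||v x t||) from the Single View Metrology paper (point names are illustrative, and sign/normalisation conventions may need care):
import numpy as np

def height_scale(b, t, v, l):
    # b, t: homogeneous image points of an object's base and top.
    # v: vertical vanishing point, l: vanishing line of the reference plane.
    # Returns alpha*Z, where alpha is a global scale fixed by a reference height.
    return -np.linalg.norm(np.cross(b, t)) / (np.dot(l, b) * np.linalg.norm(np.cross(v, t)))

def measure_height(b, t, b_ref, t_ref, Z_ref, v, l):
    # Fix alpha from the reference object of known height, then measure the new object.
    alpha = height_scale(b_ref, t_ref, v, l) / Z_ref
    return height_scale(b, t, v, l) / alpha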
- Implement either or both of the camera calibration algorithms discussed in class, using images taken from a simple camera under fixed settings (including focal length) and a checkerboard pattern printed and pasted on a box.
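If you want a reference result to validate your own implementation against (recall the policy above: existing software may be used only for comparison), a minimal OpenCV baseline might look like the sketch below; the 9x6 inner-corner pattern and the calib_images/ folder are assumptions:
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners of the printed checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)
obj_pts, img_pts = [], []
for fname in glob.glob('calib_images/*.jpg'):      # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                                   (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)
# Returns the reprojection RMS, intrinsic matrix K and distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
print('reprojection RMS:', rms)
print('K =', K)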
- Vanishing points can be used in camera calibration. Implement the algorithms proposed here and/or here (the latter specializes to architectural scenes).
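Whichever method you follow, a useful building block is the constraint that vanishing points of orthogonal scene directions place on the focal length; a minimal sketch, assuming zero skew, square pixels and a known principal point (the numbers in the example are made up):
import numpy as np

def focal_from_orthogonal_vps(v1, v2, principal_point):
    # For vanishing points v1 = K*d1, v2 = K*d2 of orthogonal directions d1, d2,
    # and K with zero skew, square pixels and principal point p, orthogonality
    # gives (v1 - p).(v2 - p) + f^2 = 0, so f = sqrt(-(v1 - p).(v2 - p)).
    p = np.asarray(principal_point, float)
    d = np.dot(np.asarray(v1, float) - p, np.asarray(v2, float) - p)
    if d >= 0:
        raise ValueError('vanishing points inconsistent with this principal point')
    return np.sqrt(-d)

# Example with made-up coordinates: two VPs of orthogonal building edges,
# principal point taken as the centre of a 1920x1080 image.
# f = focal_from_orthogonal_vps((1500.0, 520.0), (-300.0, 560.0), (960.0, 540.0))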
- The Poisson equation has nice applications in computer vision, such as deriving depth given surface normals (in fact, it literally integrates a set of gradients to obtain an image). Several applications of this have been explored in image editing. See here for a nice paper on this.
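As a starting point for experiments, here is a minimal sketch of integrating a gradient field (gx, gy) by solving the Poisson equation lap(u) = div(g) in the Fourier domain; periodic boundary conditions are assumed for simplicity (a DCT-based solver with Neumann boundaries is usually the better choice in practice):
import numpy as np

def integrate_gradients(gx, gy):
    # gx, gy: forward-difference gradients of the unknown image u.
    H, W = gx.shape
    # Divergence of the gradient field via backward differences.
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    # Eigenvalues of the periodic discrete Laplacian, per axis.
    wx = 2.0 * np.cos(2.0 * np.pi * np.arange(W) / W) - 2.0
    wy = 2.0 * np.cos(2.0 * np.pi * np.arange(H) / H) - 2.0
    denom = wy[:, None] + wx[None, :]
    denom[0, 0] = 1.0                  # avoid dividing by zero at the DC term
    u_hat = np.fft.fft2(div) / denom
    u_hat[0, 0] = 0.0                  # the additive constant is a free choice
    return np.real(np.fft.ifft2(u_hat))
A quick sanity check is to take the forward differences of any image, feed them back in, and verify that the result matches the original up to an additive constant.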
- The iterated closest point (ICP) algorithm matches point-sets in 2D or 3D that differ by a small motion, when the correspondence between the points is unknown. Implement the algorithm and work on some interesting applications of it. Start from the Wikipedia article for an overview.
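For orientation, here is a minimal rigid ICP sketch (nearest-neighbour correspondences followed by a closed-form SVD/Kabsch update); real implementations add outlier rejection, correspondence weighting, point-to-plane error terms and so on:
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=50):
    # src, dst: (N, d) and (M, d) point arrays, d = 2 or 3.
    d = src.shape[1]
    R, t = np.eye(d), np.zeros(d)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                     # nearest-neighbour correspondences
        matched = dst[idx]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        # Best rotation aligning the centred point sets (Kabsch / Procrustes).
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (matched - mu_d))
        D = np.eye(d)
        D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step       # accumulate the transform
    return R, t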
- A method for alignment of point-sets when correspondence is not known
- When you apply for a visa to a foreign country, you need to submit a photo. The photo has several specifications: it should contain a frontal view of your face, the entire face should be visible and no large head rotations are allowed, the expression should be neutral, the background should be of a particular color (no cluttered backgrounds allowed), the resolution should be acceptable (not too low), there should be no scarves or other accessories occluding parts of the face, the spectacles should not have a large glow on them, and so on. Try to implement a system that will check for as many of these specifications as you can (you need not do all). Let your imagination run wild! You can add other specs that you deem fit. You can take a look at the photo requirements for a US visa.
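As one illustrative check out of many (the thresholds below are made-up placeholders, not official specifications), you could verify that exactly one roughly centred frontal face is detected:
import cv2

def check_single_frontal_face(image_path):
    # Detect frontal faces with OpenCV's bundled Haar cascade.
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) != 1:
        return False, 'expected exactly one face, found %d' % len(faces)
    x, y, w, h = faces[0]
    H, W = gray.shape
    centred = abs((x + w / 2) - W / 2) < 0.15 * W    # face roughly centred horizontally
    big_enough = 0.2 < h / H < 0.8                   # face a reasonable fraction of the frame
    return centred and big_enough, 'face position/size check'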
- Object tracking using mean-shift; see here.
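For reference, the tracking loop with OpenCV's built-in meanShift looks roughly like the sketch below; per the software policy above, your project should implement the mean-shift iterations yourself and use such a baseline only for comparison (the video path and histogram parameters are placeholders):
import cv2

cap = cv2.VideoCapture('video.mp4')                  # hypothetical input video
ok, frame = cap.read()
x, y, w, h = cv2.selectROI('init', frame)            # pick the object to track
roi_hsv = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi_hsv], [0], None, [180], [0, 180])   # hue histogram of the target
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
window = (x, y, w, h)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, window = cv2.meanShift(backproj, window, criteria)    # shift the box to the density mode
    x, y, w, h = window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('tracking', frame)
    if cv2.waitKey(30) == 27:                        # Esc to quit
        break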
- A. Levin and Y. Weiss: User Assisted Separation of Reflections from a Single Image Using a Sparsity Prior. IEEE Trans. Pattern Analysis and Machine Intelligence, Sep 2007
- For those interested in image/signal processing: Training Methods for Image Noise Level Estimation on Wavelet Components
- Image alignment using mutual information - this paper is for medical images, but the technique is applicable to other types of images as well. You may also take a look at this paper from MIT, which applies the mutual information technique to 3D-2D registration. The equations in that paper may appear intimidating because they use Parzen windows instead of histograms to estimate the probabilities, and hence the entropy, but in principle the technique is not very different. Note that mutual information is a quantity closely related to the joint entropy.
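As a concrete starting point, here is a histogram-based sketch of the mutual information computation itself (the papers above estimate the densities differently, e.g. with Parzen windows, but the quantity being maximised is the same); registration then becomes a search over transformations for the one that maximises MI between the warped moving image and the fixed image:
import numpy as np

def mutual_information(img1, img2, bins=32):
    # MI(img1; img2) = H(img1) + H(img2) - H(img1, img2), from a joint histogram.
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
    nz = p_xy > 0
    h_xy = -np.sum(p_xy[nz] * np.log(p_xy[nz]))
    h_x = -np.sum(p_x[p_x > 0] * np.log(p_x[p_x > 0]))
    h_y = -np.sum(p_y[p_y > 0] * np.log(p_y[p_y > 0]))
    return h_x + h_y - h_xy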
- Separation of transparent layers using focus. (When you take a picture of a scene through a glass window, the window acts as a semi-reflecting surface, so you get an image that is the sum of the image of the scene outside and the image of the scene inside the room from where the picture was taken. This paper takes two pictures from the same camera viewpoint but with different focal settings and separates the two layers using a simple method. It also uses the concept of mutual information to estimate the focal settings.)