Instructions for the Project
- Students who come up with original ideas and implement them successfully will automatically get at least an AA in the course. This may be in the form of a correct and non-trivial extension of a paper. Such work is potentially publishable. Both theoretical ideas (with perhaps more mathematics but less implementation) and applications will be rewarded. The instructor will decide what constitutes a "novel, non-trivial extension", and this decision is final.
- Projects should be done in groups; the groups may be different from those for your homework assignments.
- All group members should have individual unique contributions to the project (which must be stated clearly in your final report), and yet, be fully and accurately aware of what the other group members did.
- Your project work will be a research paper implementation, or your own idea. In either case, if you wish to have some feedback about the triviality or difficulty of your project, please speak to me.
- Your project work may contribute to your thesis, but the work you submit for this course must be done in this semester. It should be a separate deliverable.
- Project due date: the week after finals. You will be required to submit a report and appear for a viva, during which you will demo your project and answer questions about it.
The final report should clearly but briefly describe the problem statement, the main algorithm(s) you implemented, the datasets on which they were tested, and the results in detail, followed by a conclusion analyzing the good and bad aspects of your implementation or the algorithm.
- You may use MATLAB/C/C++/Java/Python + any packages (OpenCV, ITK, etc.) for your project. But merely invoking calls to someone else's software is not substance enough. You should have your own non-trivial coding component. If software for the research paper you implement is already available, you should use it only for comparison's sake - you will be expected to implement the paper on your own. Please discuss with me if you need any clarifications for your specific case.
- Due date for deciding the topic: 6th March before 11:55 pm. This submission will be in the form of a 0-mark assignment, but failure to submit on time may result in a 5 percent penalty. You will be required to upload a brief document that lists the following details: the names and ID numbers of all group members, the specific paper(s) you will implement, the datasets on which you will try the algorithms from the paper(s), and the evaluation/validation strategy that you will adopt to test whether your implementations give correct/sensible results.
Please make sure you think of a good plan for the algorithms you will implement.
List of Project Topics
- Image Processing
- Video denoising from Gaussian and impulse noise using low-rank matrix completion. See paper here. Low-rank matrix completion is a very exciting discovery in the field of sparse representations. It turns out that given a low-rank matrix with missing entries, one can reconstruct the original matrix with high accuracy! It has applications in image processing and machine learning (recommender systems). You can try implementing one more application of the low-rank matrix completion technique as well.
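For intuition (an illustrative sketch of my own, not from the paper): low-rank matrix completion can be approximated by iterative singular value thresholding, alternating between shrinking the singular values and re-imposing the observed entries. The matrix size, threshold `tau`, and iteration count below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic rank-2 matrix, with only ~60% of its entries observed.
n, r = 50, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.6

# Iterative singular value thresholding ("soft-impute" style):
# shrink the singular values, then re-impose the observed entries.
X = np.where(mask, M, 0.0)
tau = 5.0
for _ in range(300):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U * np.maximum(s - tau, 0.0)) @ Vt  # soft-threshold singular values
    X[mask] = M[mask]                        # keep the known entries fixed

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

Despite 40% of the entries being missing, the recovered matrix is close to the original, because the rank-2 structure heavily constrains the unknowns.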
- Edge detection and background subtraction in low-light images and video: using the Skellam distribution
- Sparse orthonormal transforms for image compression - this builds upon the PCA technique we studied in CS 663 and which we will revise this semester. This project is for the mathematically inclined!
- Denoising and deblurring of images under Poisson-Gaussian noise: "A Convex Approach for Image Restoration with Exact Poisson-Gaussian Likelihood", by Chouzenoux et al., SIAM Journal on Imaging Sciences, 2015.
- Poisson plug and play: Poisson Inverse Problems by the Plug-and-Play scheme
- Using machine learning for distinguishing between photorealistic and actual photographic images (or live and re-broadcast images)
- Some forensics: duplicated regions in natural images
- Tomography
- sLLE: Spherical locally linear embedding with applications to tomography
- Denoising tomographic projections prior to reconstruction: Sparsity based denoising for tomography.
Follow-up journal article on the same topic.
- Prior image constrained compressed sensing (PICCS): A method to accurately reconstruct dynamic CT images from highly undersampled projection data sets
- Compressed Sensing
- We have seen the relative merits and demerits of mutual coherence and the RIC in class. The following papers present two measures that are intermediate: they are efficiently computable, and yet better than coherence (though not as tight as the RIC):
- Tang and Nehorai, Computable Performance Bounds on Sparse Recovery, IEEE Transactions on Signal Processing,
- Gongguo Tang, Arye Nehorai, Performance Analysis of Sparse Recovery Based on Constrained Minimal Singular Values. IEEE Trans. Signal Processing
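To see why coherence is the "cheap" end of this spectrum (a small illustrative sketch of my own, with arbitrary dimensions): mutual coherence is read directly off the Gram matrix of a column-normalized sensing matrix, and the Welch bound gives a floor on how small it can be.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 60
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)            # normalize columns to unit norm

G = np.abs(A.T @ A)                       # |inner products| between columns
np.fill_diagonal(G, 0.0)                  # ignore self-correlations
mu = G.max()                              # mutual coherence: one matrix product

# Welch bound: no m x n matrix can have coherence below this.
welch = np.sqrt((n - m) / (m * (n - 1)))
```

Computing `mu` costs one matrix product, whereas certifying the RIC requires checking all column subsets, which is intractable; this gap is exactly what the intermediate measures above try to bridge.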
- Snapshot Compressed Sensing: Performance Bounds and Algorithms
- Compressed sensing matrix design by optimizing the mutual coherence: "On Optimization of the Measurement Matrix for Compressive Sensing", and possibly also this paper. You may try to tweak the technique to design CS matrices of, for example, the Hitomi architecture.
- Inferring mismatch in image representations:
- here. (In other words,
you have seen sparse signal representations in the DCT basis. The frequencies of the cosine bases were aligned with a Cartesian grid. What if the signal was a sparse linear combination
of cosine bases with frequencies that are slightly different from grid frequencies? Can you still model the signal well with a sparsity constraint? The paper proposes an alternating minimization algorithm for this.)
- A. Fannjiang and H. Tseng, "Compressive Radar with Off-Grid Targets", Inverse Problems, 29 (2013)
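To see the mismatch problem concretely (an illustrative sketch of my own; the signal length and frequencies are arbitrary): a cosine whose frequency sits exactly on the DCT grid is 1-sparse in the DCT basis, while a slightly off-grid cosine leaks energy across many coefficients.

```python
import numpy as np

n = 256
t = np.arange(n)

# Orthonormal DCT-II matrix; row k is the cosine basis vector of frequency k.
k = np.arange(n)[:, None]
C = np.sqrt(2.0 / n) * np.cos(np.pi * (t + 0.5) * k / n)
C[0] /= np.sqrt(2.0)

def n_coeffs_for_energy(x, frac=0.99):
    """Number of DCT coefficients needed to capture `frac` of the energy."""
    c = np.sort(np.abs(C @ x))[::-1]
    energy = np.cumsum(c ** 2) / np.sum(c ** 2)
    return int(np.searchsorted(energy, frac)) + 1

x_on = np.cos(np.pi * (t + 0.5) * 10 / n)     # frequency on the DCT grid
x_off = np.cos(np.pi * (t + 0.5) * 10.4 / n)  # slightly off the grid

s_on = n_coeffs_for_energy(x_on)    # exactly 1 coefficient suffices
s_off = n_coeffs_for_energy(x_off)  # many coefficients needed
```

A plain sparsity prior in the DCT basis therefore models the off-grid signal poorly, which is the gap the alternating minimization in the paper addresses.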
- Inferring representation basis directly from compressive measurements:
- A variant of the famous KSVD algorithm: Compressive KSVD. KSVD is a method which takes a bag of image patches as input,
and returns a dictionary matrix, such that sparse linear combinations of the columns of this dictionary matrix are able to reconstruct the original patches with high accuracy. It also
turns out that the columns of this matrix when reshaped to form a 2D array, resemble edge filters, if the learning is done properly. We will learn this method in class. This paper is
about inferring such a dictionary directly when the original patches are not available, but instead only their compressive measurements are available.
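A hedged, minimal sketch of the dictionary-learning loop described above (this is vanilla K-SVD on synthetic patches, not the compressive variant in the paper; sizes, sparsity level, and iteration counts are arbitrary): alternate between sparse-coding the signals with OMP and updating each atom via a rank-1 SVD of its residual.

```python
import numpy as np

def sparse_code(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms, refit each step."""
    resid, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        resid = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def ksvd(Y, n_atoms, k, n_iters=10, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iters):
        X = np.column_stack([sparse_code(D, y, k) for y in Y.T])
        for j in range(n_atoms):
            used = np.abs(X[j]) > 1e-10
            if not used.any():
                continue
            # Residual with atom j's contribution removed, restricted
            # to the signals that actually use atom j.
            E = Y[:, used] - D @ X[:, used] + np.outer(D[:, j], X[j, used])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j], X[j, used] = U[:, 0], s[0] * Vt[0]  # rank-1 update
    return D, X

# Synthetic "patches": 3-sparse combinations of a hidden dictionary.
rng = np.random.default_rng(1)
D_true = rng.standard_normal((20, 30))
D_true /= np.linalg.norm(D_true, axis=0)
X_true = np.zeros((30, 200))
for i in range(200):
    X_true[rng.choice(30, 3, replace=False), i] = rng.standard_normal(3)
Y = D_true @ X_true

D, X = ksvd(Y, n_atoms=30, k=3)
rel_err = np.linalg.norm(Y - D @ X) / np.linalg.norm(Y)
```

The compressive-KSVD paper replaces the direct access to `Y` with access only to compressive measurements of each patch, which changes both the coding and the atom-update steps.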
- Compressive sensing and PCA - a fascinating paper
- Faster implementation of orthogonal matching pursuit: here and here.
- How do you pick the appropriate number of measurements in compressed sensing, since the exact signal sparsity is typically unknown? How do you choose the regularization parameter in ISTA? The answer to both questions lies in the concept of cross-validation.
Cross-validation in compressed sensing is explored in the following papers:
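Independently of those papers, here is a toy sketch of both ideas (my own illustrative example; the sizes, candidate λ grid, and train/test split are arbitrary): ISTA for the ℓ1-regularized least-squares problem, with a held-out set of measurements used to score each candidate λ.

```python
import numpy as np

def ista(A, y, lam, n_iters=1000):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the quadratic
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
m, n = 80, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5) + 2.0
y = A @ x_true

# Toy cross-validation: fit on 70 measurements, score each candidate
# lambda on the 10 held-out measurements, keep the best.
lams = [0.005, 0.05, 0.5]
cv_err = [np.linalg.norm(A[70:] @ ista(A[:70], y[:70], lam) - y[70:])
          for lam in lams]
best_lam = lams[int(np.argmin(cv_err))]

x_hat = ista(A, y, best_lam)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The held-out measurements act as a proxy for reconstruction error without requiring the true signal, which is exactly what makes cross-validation usable in compressed sensing.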
- Classification, detection, source separation directly from compressive measurements (without reconstruction):
- Davenport et al, "Signal Processing With Compressive Measurements", IEEE Journal of Selected Topics in Signal Processing
- Davenport et al, "The smashed filter for compressive classification and target recognition" (see interesting experiments in "Compressive image acquisition and
classification via secant projections")
- See the issue of affine invariance in https://arxiv.org/pdf/1501.04367.pdf (Reconstruction-free action inference from
compressive imagers)
- Gaussian mixture models for CS:
- Bahmani et al, "Greedy sparsity constrained optimization", Journal of Machine Learning Research 2013 (this is an extension of a greedy algorithm called CoSaMP, similar to OMP, to non-linear regression problems).
- Blind compressed sensing: Aghagolzadeh et al, "Joint estimation of dictionary and image from compressive samples", IEEE Transactions on Computational Imaging, 2017
- Compressed sensing when the sensing matrix is not accurately known: Yang, Zhang and Xie, "Robustly Stable Signal Recovery in Compressed
Sensing With Structured Matrix Perturbation", IEEE TSP 2012
- A different compressed sensing technique: Enhancing Sparsity by Reweighted ℓ1 Minimization
- Finding needles in compressed haystacks. This paper is about classification using a support vector machine in the compressed domain directly.
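The key geometric fact behind compressed-domain classification (an illustrative check of my own, with arbitrary dimensions): random projections approximately preserve norms and inner products, so a linear classifier such as an SVM can operate on the compressive measurements directly.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 2000, 300
Phi = rng.standard_normal((m, d)) / np.sqrt(m)   # random sensing matrix

u = rng.standard_normal(d)
v = rng.standard_normal(d)

# Norms (hence margins and distances) are roughly preserved.
norm_ratio = np.linalg.norm(Phi @ u) / np.linalg.norm(u)

# So are inner products, up to O(1/sqrt(m)) additive error.
ip_exact = u @ v
ip_compressed = (Phi @ u) @ (Phi @ v)
ip_gap = abs(ip_compressed - ip_exact) / (np.linalg.norm(u) * np.linalg.norm(v))
```

Since a linear SVM depends on the data only through inner products, training it on `Phi @ u` instead of `u` changes the margin only slightly, which is the observation the paper builds on.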