Instructions for the Project
- Students who come up with original ideas and implement them successfully will get bonus points for the project. This may take the form of a correct and non-trivial extension of a paper; such work is potentially publishable. Both theoretical ideas (with perhaps more mathematics but less implementation) and applications will be rewarded. The instructor will decide what constitutes a "novel, non-trivial extension", and this decision is final.
- Projects should be done in groups - the same group as for your homework.
- All group members should have individual unique contributions to the project (which must be stated clearly in your final report), and yet, be fully and accurately aware of what the other group members did.
- Your project work will be a research paper implementation or your own idea. In either case, if you wish to have feedback about the triviality or difficulty of your project, please speak to me.
- Your project work may contribute to your thesis, but the work you submit for this course must be done in *this* semester. It should be a separate deliverable.
- Project due date: the week after finals. You will be required to submit a report and appear for a viva, during which you will demo your project and answer questions about it.
The final report should clearly but briefly describe the problem statement, the main algorithm(s) you implemented, and the datasets on which they were tested, followed by a detailed description of the results and a conclusion that analyzes the good and bad aspects of your implementation or of the algorithm itself.
- You may use MATLAB/C/C++/Java/Python + any packages (OpenCV, ITK, etc.) for your project. But merely invoking calls to someone else's software is not substance enough: you should have your own non-trivial coding component. If software for the research paper you implement is already available, you should use it only for comparison's sake - you will be expected to implement the paper on your own. Please discuss with me if you need any clarification for your specific case.
- Due date for deciding the topic: 17th March, before 11:55 pm. This will take the form of a 0-mark assignment requiring submission of a proposal document; failure to submit on time may result in a 5 percent penalty. You will be required to upload a brief (1-page) proposal document that lists the following details: the names and ID numbers of all group members, the specific paper(s) you will implement, the datasets on which you will try the algorithms from the paper(s), and the evaluation/validation strategy you will adopt to test whether your implementations give correct/sensible results.
Please make sure you think of a good plan for the algorithms you will implement.
List of Project Topics
- Image Processing
- Sparse orthonormal transforms for image compression - this builds upon the PCA technique we studied in CS 663 and which we will revise this semester. This project is for the mathematically inclined!
- Denoising and deblurring of images under Poisson-Gaussian noise: "A Convex Approach for Image Restoration with Exact Poisson-Gaussian Likelihood", by Chouzenoux et al., SIAM Journal on Imaging Sciences, 2015.
- Poisson plug and play: Poisson Inverse Problems by the Plug-and-Play scheme
- Tomography
- sLLE: Spherical locally linear embedding with applications to tomography
- Denoising tomographic projections prior to reconstruction: Sparsity based denoising for tomography. Follow-up journal article on the same topic.
- Prior image constrained compressed sensing (PICCS): A method to accurately reconstruct dynamic CT images from highly undersampled projection data sets
- Basu and Bresler, "Uniqueness of Tomography with Unknown View Angles", IEEE TIP 2000, and/or Basu and Bresler, "Feasibility of Tomography under Unknown View Angles", IEEE TIP 2000
Compressed Sensing
- Compressed sensing using binary matrices of nearly optimal dimensions. This topic explores nice properties of binary sensing matrices.
- We have seen the relative merits and demerits of mutual coherence and the RIC in class. The following papers present two measures that are intermediate: they are efficiently computable and yet tighter than coherence (though not as tight as the RIC):
- Tang and Nehorai, "Computable Performance Bounds on Sparse Recovery", IEEE Transactions on Signal Processing
- Gongguo Tang, Arye Nehorai, "Performance Analysis of Sparse Recovery Based on Constrained Minimal Singular Values", IEEE Transactions on Signal Processing
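As a small illustration of the quantity being compared here (my own sketch, not taken from any of the listed papers), the mutual coherence of a sensing matrix is just the largest absolute inner product between distinct normalized columns, which is cheap to compute, unlike the RIC:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns of A."""
    A = A / np.linalg.norm(A, axis=0)   # normalize each column to unit norm
    G = np.abs(A.T @ A)                 # Gram matrix of column correlations
    np.fill_diagonal(G, 0.0)            # ignore self-correlations
    return float(G.max())

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))       # a 20x50 Gaussian sensing matrix
mu = mutual_coherence(A)
print(f"mutual coherence: {mu:.3f}")
```

Computing the RIC, by contrast, requires examining extremal singular values over all column subsets of a given size, which is combinatorial; that gap is exactly what motivates the intermediate measures above.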
- Snapshot Compressed Sensing: Performance Bounds and Algorithms
- Compressed sensing matrix design by optimizing the mutual coherence: "On Optimization of the Measurement Matrix for Compressive Sensing", and possibly also this paper. You may try to tweak the technique to design CS matrices for (for example) the Hitomi architecture.
- Inferring mismatch in image representations:
- here. (In other words, you have seen sparse signal representations in the DCT basis. The frequencies of the cosine bases were aligned with a Cartesian grid. What if the signal were a sparse linear combination of cosine bases with frequencies slightly different from the grid frequencies? Can you still model the signal well with a sparsity constraint? The paper proposes an alternating minimization algorithm for this.)
- A. Fannjiang and H. Tseng, "Compressive Radar with Off-Grid Targets", Inverse Problems, 29, 2013
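The basis-mismatch phenomenon behind this topic is easy to demonstrate (a toy sketch of my own, not code from the papers): a cosine whose frequency lies exactly on the DCT grid has a single nonzero coefficient, while a slightly off-grid frequency leaks energy across many coefficients and is no longer sparse in that basis.

```python
import numpy as np

N = 64
n = np.arange(N)
# Orthonormal DCT-II basis matrix; row k is the k-th cosine basis vector
D = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
D[0] *= 1 / np.sqrt(N)
D[1:] *= np.sqrt(2 / N)

def num_large_coeffs(x, tol=1e-3):
    """Count DCT coefficients above a small fraction of the largest one."""
    c = D @ x
    return int(np.sum(np.abs(c) > tol * np.abs(c).max()))

on_grid  = np.cos(np.pi * (n + 0.5) * 7.0 / N)   # frequency on the DCT grid
off_grid = np.cos(np.pi * (n + 0.5) * 7.3 / N)   # frequency slightly off-grid

print(num_large_coeffs(on_grid))    # 1: exactly one significant coefficient
print(num_large_coeffs(off_grid))   # many: energy leaks across coefficients
```

Modeling the off-grid signal well therefore requires treating the frequencies themselves as unknowns, which is what the alternating minimization in the paper above addresses.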
- Inferring representation basis directly from compressive measurements:
- A variant of the famous KSVD algorithm: Compressive KSVD. KSVD is a method that takes a bag of image patches as input and returns a dictionary matrix such that sparse linear combinations of its columns can reconstruct the original patches with high accuracy. It also turns out that the columns of this matrix, when reshaped into 2D arrays, resemble edge filters if the learning is done properly. We will learn this method in class. This paper is about inferring such a dictionary directly when the original patches are not available and only their compressive measurements are given.
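To make the "sparse linear combinations of dictionary columns" idea concrete, here is a minimal orthogonal matching pursuit (OMP) sketch of my own, illustrating the sparse-coding step that KSVD alternates with its dictionary update (the toy sizes and the random dictionary are assumptions for illustration only):

```python
import numpy as np

def omp(D, y, k):
    """Greedily select k atoms (columns) of D to approximate y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with residual
        support.append(j)
        # Re-fit coefficients on the current support by least squares
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(1)
# Toy setup: 64-dimensional "patches", 128-atom dictionary with unit-norm columns
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(128)
x_true[rng.choice(128, size=5, replace=False)] = rng.standard_normal(5)
y = D @ x_true                        # a patch that is exactly 5-sparse in D

x_hat = omp(D, y, k=5)
print(np.linalg.norm(D @ x_hat - y))  # small residual when recovery succeeds
```

KSVD would then update the dictionary atoms given these sparse codes and iterate; the compressive variant in the paper must do both steps from measurements of the patches rather than the patches themselves.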
- Compressive sensing and PCA - a fascinating paper
- Faster implementation of orthogonal matching pursuit: here and here.
- Classification, detection, source separation directly from compressive measurements (without reconstruction):
- Davenport et al, "Signal Processing With Compressive Measurements", IEEE Journal of Selected Topics in Signal Processing
- Davenport et al, "The smashed filter for compressive classification and target recognition" (see interesting experiments in "Compressive image acquisition and classification via secant projections")
- See the issue of affine invariance in https://arxiv.org/pdf/1501.04367.pdf ("Reconstruction-free action inference from compressive imagers")
- Gaussian mixture models for CS:
- Bahmani et al, "Greedy sparsity constrained optimization", Journal of Machine Learning Research, 2013 (this is an extension of a greedy algorithm called CoSaMP, similar to OMP, to non-linear regression problems)
- Blind compressed sensing: Aghagolzadeh et al, "Joint estimation of dictionary and image from compressive samples", IEEE Transactions on Computational Imaging, 2017
- Compressed sensing when the sensing matrix is not accurately known: Yang, Zhang and Xie, "Robustly Stable Signal Recovery in Compressed Sensing With Structured Matrix Perturbation", IEEE TSP 2012
- A different compressed sensing technique: Enhancing Sparsity by Reweighted L1 Minimization
- LDPC codes and compressed sensing. This is for people interested in information or coding theory.
- Compressed sensing accounting for prior support information: "Compressed Sensing With Prior Information: Information-Theoretic Limits and Practical Decoders", IEEE Trans. Signal processing, 2013
- Weighted LASSO for Sparse Recovery With Statistical Prior Support Information, IEEE Trans. Signal Processing, 2016
- Estimation of level sets (isocontours, i.e. regions of an image with intensity greater than some gamma) directly from compressive measurements. Paper
- Sparsity estimation from compressive measurements using Gaussian and Cauchy random matrices. Paper
- Sparsity estimation from compressive measurements with sparse binary sensing matrices. Paper
- Convolutional Sparse Support Estimator Network (CSEN): From Energy-Efficient Support Estimation to Learning-Aided Compressive Sensing, IEEE TNNLS 2020
Group Testing
- Nearest neighbor search in high dimensions using group testing and bloom filters (Write your own code!)
- Group testing for multi-label classification: Paper1, Paper2
- Recovery Algorithms for Pooled RT-qPCR Based Covid-19 Screening, IEEE TSP 2022
- Heterogeneity Aware Two-Stage Group Testing, M. Attia, W. Chang and R. Tandon, IEEE Transactions on Signal Processing, Vol. 69: 3977-3990, July 2021.
Neural Networks
- Solving Inverse Problems with Deep Linear Neural Networks: Global Convergence Guarantees for Gradient Descent with Weight Decay