Date |
Content of the Lecture |
Assignments/Readings/Notes |
Lecture 1: 10th Jan (Fri) |
- Course overview: intro to compressed sensing, tomography, dictionary and transform learning, low rank matrix recovery
|
|
Lecture 2: 11th Jan (Mon) |
- Compressed sensing (CS): introduction and motivation
- Review of DFT and DCT, Review of JPEG, representation of a signal/image as a linear combination of basis vectors
- Sparsity of natural images in transform bases
|
|
Lecture 3: 11th Jan (Mon) |
- Candes, Romberg, Tao: the puzzling experiment; basic optimization problem for CS involving the total variation (i.e., the sum of gradient magnitudes over the image)
- Concept of incoherence between sensing matrix and representation basis
|
|
Lecture 4: 14th Jan (Thurs) |
- Theorem 1 for reconstruction guarantees
- Whittaker-Shannon sampling theorem
- Comparison between Theorem 1 and Shannon's sampling theorem.
|
|
Lecture 5: 18th Jan (Mon) |
- Further comments on Theorem 1
- Intuition behind incoherence
- Concept of restricted isometry property (RIP)
|
|
Lecture 6: 18th Jan (Mon) |
- Theorems 2 and 3 and comments on them
- Motivation for use of L1 norm instead of L2 norm for compressed sensing
- Toy sample experiments in compressed sensing
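A toy experiment of this flavour can be reproduced in a few lines. The sketch below (my own toy setup, not the exact in-class experiment; assumes scipy is available) casts basis pursuit, min ||x||_1 subject to Ax = y, as a linear program via the split x = u - v with u, v >= 0, and contrasts it with the minimum-L2-norm solution, which spreads energy over all coordinates and fails to recover the sparse signal.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5

A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
support = rng.choice(n, k, replace=False)
x0[support] = rng.standard_normal(k)
y = A @ x0

# Minimum-L2-norm solution: dense, does not recover x0
x_l2 = A.T @ np.linalg.solve(A @ A.T, y)

# Basis pursuit (min ||x||_1 s.t. Ax = y) as an LP with x = u - v, u, v >= 0
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_l1 = res.x[:n] - res.x[n:]

print(np.max(np.abs(x_l1 - x0)))   # near zero: L1 recovers the sparse signal
print(np.max(np.abs(x_l2 - x0)))   # large: L2 smears energy everywhere
```

With 40 Gaussian measurements of a 5-sparse signal in dimension 100, L1 minimization typically recovers x0 exactly, while the L2 solution is fully dense.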
|
|
Lecture 7: 21st Jan (Thurs) |
- CS for piecewise constant signals: Theorem 4
- Rice single pixel camera: architecture, and difference from conventional camera
- Results on a patch-based Rice single pixel camera
|
|
Lecture 8: 21st Jan (Thurs) |
- Rice single pixel camera in video mode
- Concept of tradeoff between spatial and temporal resolution in image acquisition
- Video snapshot compressive sensing: concept, hardware description, results
|
|
Lecture 9: 25th Jan (Mon) |
- Video snapshot compressive sensing: concept, hardware description, results
- Hyperspectral images: introduction
- CASSI: Hyperspectral compressive sensing
|
|
Lecture 10: 25th Jan (Mon) |
- CASSI: Hyperspectral compressive sensing
- Color filter arrays (CFAs) and color image acquisition, image demosaicing
- Importance of coded aperture
- Compressed sensing for MRI
|
|
Lecture 11: 28th Jan (Thurs) |
- Matching pursuit (MP) and orthogonal matching pursuit (OMP)
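OMP is short enough to sketch in full. A minimal numpy version (illustrative toy data, not the lecture's exact example): greedily pick the column most correlated with the residual, then re-fit all selected columns by least squares.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: pick the column of A most correlated
    with the residual, then re-fit the selected columns by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns
x0 = np.zeros(80)
x0[[3, 17, 42]] = [2.0, -1.5, 1.0]        # 3-sparse ground truth
y = A @ x0
x_hat = omp(A, y, 3)
```

The least-squares re-fit is what distinguishes OMP from plain MP: the residual stays orthogonal to every column already chosen, so no column is picked twice.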
|
|
Lecture 12: 28th Jan (Thurs) |
- Iterative Shrinkage and Thresholding Algorithm (ISTA)
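ISTA alternates a gradient step on the quadratic data term with a soft-thresholding step from the L1 penalty. A minimal sketch (toy problem of my own, not the in-class demo):

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """ISTA for min_x 0.5 * ||Ax - y||^2 + lam * ||x||_1:
    gradient step on the quadratic term, then soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - (A.T @ (A @ x - y)) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x0 = np.zeros(100)
x0[[5, 30, 70]] = [1.0, -2.0, 1.5]
y = A @ x0
x_hat = ista(A, y, lam=0.01)
```

The soft-thresholding operator is exactly the proximal map of the L1 norm, which is why the iteration converges to a minimizer of the lasso objective.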
|
|
Lecture 13: 1st Feb (Mon) |
- Compressed sensing for pooled testing of COVID-19 samples
- RT-PCR noise model
- Compressed sensing algorithms for pooled testing
- Use of contact tracing and family information
|
|
Lecture 14: 4th Feb (Thurs) |
- Concept of mutual coherence and its relation to the restricted isometry constant (RIC)
- Gershgorin's disc theorem
- Sketch of proof of Theorem 3, review of various vector inequalities
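The coherence-to-RIC link can be checked numerically. For a matrix with unit-norm columns, the k x k Gram block of any k columns has ones on the diagonal and off-diagonal entries bounded by the mutual coherence mu, so Gershgorin places all its eigenvalues in [1 - (k-1)mu, 1 + (k-1)mu], giving delta_k <= (k-1)mu. A small sketch (random matrix of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 120))
A /= np.linalg.norm(A, axis=0)           # unit-norm columns

G = A.T @ A
mu = np.max(np.abs(G - np.eye(120)))     # mutual coherence: max off-diagonal |Gram|

# Gershgorin bound on the eigenvalues of any k-column Gram block
k = 4
cols = rng.choice(120, k, replace=False)
eigs = np.linalg.eigvalsh(G[np.ix_(cols, cols)])
print(mu, eigs.min(), eigs.max())        # eigs lie in [1-(k-1)mu, 1+(k-1)mu]
```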
|
|
Lecture 15: 8th Feb (Mon) |
- Compressed sensing matrix design
- Compressive classification
|
|
Lecture 16: 11th Feb (Thurs) |
Tomography
- Introduction to tomography
- Radon transforms, Fourier slice theorem
- Backprojection and filtered backprojection
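The Fourier slice theorem has a one-line numerical check for the axis-aligned case (a sketch on random data, not the lecture's derivation): the 1D FFT of the projection along the vertical axis equals the zero-frequency row of the image's 2D FFT.

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.random((64, 64))

# Radon projection at angle 0: sum along the vertical axis
proj = img.sum(axis=0)

# Fourier slice theorem: the 1D FFT of this projection equals the
# corresponding central slice (here, the zero-frequency row) of the 2D FFT.
slice_1d = np.fft.fft(proj)
slice_2d = np.fft.fft2(img)[0, :]
print(np.max(np.abs(slice_1d - slice_2d)))   # ~0 up to roundoff
```

For other angles the same identity holds after rotating the image, which is exactly what makes filtered backprojection work.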
|
|
Lecture 17: 15th Feb (Mon) |
- Tomography as a compressed sensing (CS) problem
- Concept of function handles
- Coupled CS reconstruction
|
|
Lecture 18: 15th Feb (Mon) |
- Cryo-electron microscopy: introduction, concept of micrograph/particle
- Tomography under unknown angles (2D images, 1D projections): Helgason-Ludwig consistency conditions (HLCC), inherent ambiguity in tomography under unknown angles
- Theoretical results in tomography under unknown angles (stated without proof)
|
|
Lecture 19: 18th Feb (Thurs) |
- Algorithms for tomography under unknown angles: moment-based method, algorithm by Basu and Bresler based on ordering, algorithm based on dimensionality reduction using Laplacian eigenmaps
- Correction for unknown shifts in tomography under unknown angles; usage of order statistics for ordering and dimensionality reduction methods
- Laplacian eigenmaps algorithm in detail
|
|
Lecture 20: 18th Feb (Thurs) |
- Algorithms for tomography under unknown angles: algorithm based on dimensionality reduction using Laplacian eigenmaps
- Laplacian eigenmaps algorithm in detail
- Denoising tomographic projections using spatially varying PCA and Wiener filter
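The core of the Laplacian eigenmaps idea can be illustrated on a toy version of the unknown-angles setting (synthetic data of my own, not the tomography pipeline itself): points on a closed curve, presented in random order, are re-ordered by embedding them with the low eigenvectors of the graph Laplacian.

```python
import numpy as np

# 100 points on a circle, presented in random order: the hidden 1-D
# parameter plays the role of the unknown acquisition angle.
rng = np.random.default_rng(11)
t = rng.permutation(np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False))
X = np.stack([np.cos(t), np.sin(t)], axis=1)

# Laplacian eigenmaps: Gaussian-kernel affinities, graph Laplacian L = D - W,
# embedding given by the eigenvectors of the smallest nonzero eigenvalues.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
W = np.exp(-d2 / d2.mean())
L = np.diag(W.sum(axis=1)) - W
eigvals, eigvecs = np.linalg.eigh(L)
embedding = eigvecs[:, 1:3]              # skip the constant null eigenvector

# For a closed curve the 2-D embedding is (approximately) a circle, so the
# hidden ordering can be read off from the embedding angles.
radii = np.linalg.norm(embedding, axis=1)
```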
|
|
Lecture 21: 22nd Feb (Mon) |
- Tomography under unknown angles in 3D: concept of common line, algorithms for determining common lines
- Algorithm for finding unknown angles and underlying 3D structure from 2D projections under unknown angles - case of 3 projections only
- Algorithm for finding unknown angles (based on semi-definite programming) and underlying 3D structure from 2D projections under unknown angles - case of N projections
- Cryo-EM complete pipeline, concept of cryo-electron tomography and its difference from single particle cryo-EM
- Handout: orthogonal Procrustes algorithm
|
|
Lectures 22 and 23: 4th March (Thurs) |
Dictionary Learning
- Principal components analysis (PCA): basic concept, derivation
- Eigenfaces algorithm and its flavours
- Concept of overcomplete dictionaries: merits/demerits compared to orthonormal bases
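The PCA derivation reduces to an eigendecomposition of the covariance of the centered data. A minimal sketch (synthetic data of my own, not the eigenfaces setting):

```python
import numpy as np

rng = np.random.default_rng(5)
# 200 points in 10-D that actually live near a 2-D subspace
basis = rng.standard_normal((10, 2))
X = rng.standard_normal((200, 2)) @ basis.T + 0.01 * rng.standard_normal((200, 10))

# PCA: eigendecomposition of the covariance of the centered data
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
components = eigvecs[:, ::-1]            # principal axes, descending variance

explained = eigvals[::-1] / eigvals.sum()
print(explained[:3])   # the first two components carry almost all the variance
```

Eigenfaces is this same computation with each face image flattened into one row of X.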
|
|
Lecture 24: 11th March (Mon) |
- Relation between PCA and DCT for image patches
- Relation between dictionary learning and K-means
- Dictionary learning: method of Olshausen and Field; method of optimal directions (MOD); union of orthonormal bases, with derivation
|
|
Lecture 25: 15th March (Thurs) |
- KSVD dictionary learning: basic algorithm, use of SVD within KSVD
- Applications of KSVD: image compression, denoising, inpainting, video compressive sensing (Hitomi architecture)
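The SVD step inside KSVD updates one atom at a time. A minimal sketch of a single atom update on random data (no sparse-coding stage, so an illustration rather than the full algorithm):

```python
import numpy as np

def ksvd_atom_update(Y, D, X, j):
    """KSVD update of atom j and its coefficient row: restrict to the signals
    that use atom j, form the residual without atom j's contribution, and
    replace (d_j, x_j) by the best rank-1 (SVD) approximation of it."""
    users = np.flatnonzero(X[j, :])
    if users.size == 0:
        return D, X
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, j] = U[:, 0]                      # new unit-norm atom
    X[j, users] = s[0] * Vt[0, :]          # matching coefficients
    return D, X

rng = np.random.default_rng(6)
Y = rng.standard_normal((20, 50))
D = rng.standard_normal((20, 8))
D /= np.linalg.norm(D, axis=0)
X = rng.standard_normal((8, 50)) * (rng.random((8, 50)) < 0.3)  # sparse codes
err_before = np.linalg.norm(Y - D @ X)
D, X = ksvd_atom_update(Y, D, X, 0)
err_after = np.linalg.norm(Y - D @ X)
print(err_before, err_after)   # the rank-1 SVD step never increases the error
```

Restricting to the columns that use the atom is the key trick: it preserves the sparsity pattern of X while still decreasing the fit error.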
|
|
Lecture 26: 18th March (Thurs) |
- Blind compressive sensing: concept, Compressive KSVD algorithm
- Requirement for blind compressed sensing: diversity of measurement matrices
- Non-negative matrix factorization, non-negative sparse coding
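NMF is often introduced via the Lee-Seung multiplicative updates for the Frobenius objective; a minimal sketch on a synthetic non-negative low rank matrix (my own toy setup):

```python
import numpy as np

def nmf(V, r, n_iter=500, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0
    (Frobenius-norm objective). Non-negativity is preserved because every
    update multiplies by a non-negative ratio."""
    rng = np.random.default_rng(7)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(8)
V = rng.random((30, 4)) @ rng.random((4, 40))   # exactly rank 4, non-negative
W, H = nmf(V, 4)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(rel_err)
```

Adding an L1 penalty on H (with a correspondingly modified update) turns this into non-negative sparse coding.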
|
|
Lectures 27 and 28: 22nd March (Mon) |
- Concept of transform learning
- Transform learning: Fisher's linear discriminant
- Transform learning: classification using mutual information maximization (between class labels and the projections of the original datapoints)
- Compressed sensing using overcomplete dictionaries
|
|
Lectures 29 and 30: 25th March (Thurs) |
- Ubiquity of low rank matrices in image processing and machine learning: recommender systems, image patches assembled to form a matrix, distance matrices, applications in structure from motion, applications in face recognition
- The problem of low rank matrix completion
- Informal statement of the basic theorem, concept of coherence of subspaces, sufficient conditions for successful recovery and pathological cases
- Formal statement of theorem of low rank matrix completion; extension to the case of noisy matrix completion
- Empirical results for low rank matrix completion
- Concept of low rank matrix recovery
- Basic theorems for low rank matrix recovery, concept of RIP of linear maps/operators, relation between low rank matrix recovery and low rank matrix completion
|
|
Lectures 31 and 32: 1st April (Thurs) |
- Introduction to Robust Principal Components Analysis (PCA)
- Motivating examples for RPCA: background subtraction in videos, specularity and shadow removal from face images
- Basic theorem for RPCA; extension to the case when there is noise in observed matrix
- Basic theorem for RPCA with incompletely observed entries
- Singular Value Thresholding Algorithm (SVT) for matrix completion
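SVT admits a short sketch in the spirit of Cai, Candes and Shen (toy problem and parameter choices of my own, not the in-class example): shrink the singular values of the current iterate, then take a gradient step on the observed entries.

```python
import numpy as np

def svt(M, mask, tau=200.0, step=1.9, n_iter=500):
    """Singular value thresholding for matrix completion: soft-threshold the
    singular values, then correct the dual variable on the observed entries."""
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # singular value shrinkage
        Y += step * mask * (M - X)                       # enforce observed entries
    return X

rng = np.random.default_rng(9)
M = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 40))  # rank 3
mask = rng.random((40, 40)) < 0.5                                # observe half
X_hat = svt(M, mask)
rel_err = np.linalg.norm(X_hat - M) / np.linalg.norm(M)
print(rel_err)
```

The threshold tau ~ 5 * sqrt(n1 * n2) follows the usual heuristic; with half the entries of a rank-3 matrix observed, the iterate approaches the true matrix closely.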
|
|
Lecture 33: 4th April (Mon) |
- Introduction to Robust Principal Components Analysis (PCA)
- Motivating examples for RPCA: background subtraction in videos, specularity and shadow removal from face images
- Basic theorem for RPCA; extension to the case when there is noise in observed matrix
- Basic theorem for RPCA with incompletely observed entries
- ALM algorithm for RPCA
- Compressive RPCA: greedy algorithm by Waters et al., applications in video and hyperspectral CS, and RPCA with missing entries
- Basic Theorem for Compressive RPCA
|
|
Lecture 34: 8th April (Thurs) |
Statistics of Natural Images
- Power law
- Correlation of pixel values in natural images
- Statistics of DCT coefficients of natural images: Laplacian models and justification for the same
- Brief introduction to the Haar wavelet transform
- Revision of basic Bayesian statistics, concept of MAP and MMSE, posterior probabilities with Gaussian likelihood and Laplacian and Gaussian priors
- Compressed sensing based on statistical priors
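The scalar MAP estimate under a Gaussian likelihood and a Laplacian prior has a closed form, the soft-thresholding rule, which can be verified against brute-force minimization of the negative log-posterior (a small check of my own, not the lecture's derivation):

```python
import numpy as np

def map_laplacian(y, sigma2, lam):
    """MAP estimate under Gaussian likelihood and Laplacian prior:
    argmin_x (x - y)^2 / (2 * sigma2) + lam * |x|  =  soft(y, lam * sigma2)."""
    return np.sign(y) * np.maximum(np.abs(y) - lam * sigma2, 0.0)

# Check against brute-force minimization of the negative log-posterior
y, sigma2, lam = 1.3, 0.5, 1.0
grid = np.linspace(-3.0, 3.0, 200001)
obj = (grid - y) ** 2 / (2 * sigma2) + lam * np.abs(grid)
x_grid = grid[np.argmin(obj)]
x_closed = map_laplacian(y, sigma2, lam)
print(x_closed, x_grid)   # both ~ 0.8 = sign(1.3) * max(1.3 - 0.5, 0)
```

This is why Laplacian priors on transform coefficients lead directly to thresholding-style reconstruction algorithms.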
|
|
Lectures 35, 36, 37 |
- Statistical properties of power spectra of natural image categories; application to scene categorization
- Statistics of wavelet coefficients of natural images, use of dependencies between wavelet coefficients as priors for image denoising and in compression
- A semi-automated method for reflection removal using statistics of image gradients: Iteratively reweighted least squares algorithm
- Results of the IRLS algorithm for reflection removal, mixture of Laplacian prior for image gradients
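The IRLS idea can be sketched in its generic form, L1 regression via reweighted least squares (a toy robust-fitting problem of my own, not the reflection-removal formulation): each iteration solves a weighted least-squares problem with weights 1 / |residual|.

```python
import numpy as np

def irls_l1(A, b, n_iter=50, eps=1e-6):
    """IRLS for min_x ||Ax - b||_1: each step solves a weighted least squares
    problem, downweighting equations with large residuals."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)   # IRLS weights
        Aw = A * w[:, None]                            # diag(w) @ A
        x = np.linalg.solve(A.T @ Aw, A.T @ (w * b))
    return x

rng = np.random.default_rng(10)
A = rng.standard_normal((100, 5))
x0 = rng.standard_normal(5)
b = A @ x0
b[:10] += 5.0                                  # gross outliers in 10 equations
x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]    # pulled away by the outliers
x_l1 = irls_l1(A, b)                           # robust: outliers get downweighted
print(np.linalg.norm(x_l2 - x0), np.linalg.norm(x_l1 - x0))
```

The mixture-of-Laplacians gradient prior changes the weight formula but not the structure of the iteration.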
|
|