I am a senior Computer Science undergraduate at the Indian Institute of Technology Bombay, with a strong interest in Computer Vision, Machine Learning, and Deep Learning.
Computer Science and Engineering
This project addresses the problem of finding dense correspondences between images, particularly wide-baseline pairs with considerable scale and viewpoint changes. A solution to this problem has direct applications in 3D scene reconstruction from RGB images. Extensive research exists on dense correspondence, but most of it is limited to narrow-baseline pairs and rectified images, where the disparity is either small or along a single direction. We have developed a two-step hierarchical approach to predict dense correspondences. Robust descriptors are obtained using a multi-resolution network trained in a Siamese fashion; these descriptors are then used to build a correlation volume containing pairwise similarities between dense grid points in the two images. Volumetric convolutions then smooth the matching and incorporate neighbourhood information. In the hierarchical approach, a coarse match is found first, and a fine match is then estimated around it. Our results beat the state-of-the-art DeepMatching approach by around 10%, and we plan to submit to the European Conference on Computer Vision (ECCV), 2018.
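The correlation-volume step above can be sketched as follows. This is a minimal illustration, not the actual pipeline: the function name is hypothetical, and it assumes the Siamese network has already produced L2-normalized descriptor grids for both images.

```python
import numpy as np

def correlation_volume(feat_a, feat_b):
    # feat_a, feat_b: (H, W, D) dense, L2-normalized descriptor grids
    # for the two images (assumed to come from the Siamese network).
    H, W, D = feat_a.shape
    a = feat_a.reshape(H * W, D)
    b = feat_b.reshape(H * W, D)
    # Entry [i, j, k, l] is the cosine similarity between grid point
    # (i, j) of image A and grid point (k, l) of image B.
    return (a @ b.T).reshape(H, W, H, W)
```

The resulting 4D volume is what volumetric convolutions would then operate on to smooth the matches using neighbourhood context.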
Cryo-EM, or cryo-electron microscopy, is a tomographic technique used to determine the structure of a molecule or cell from a large set of images of the molecule (called particle images) obtained with a transmission electron microscope. A large number of such images are acquired from a slide containing hundreds of virus samples frozen in ice. This provides many projections, but at unknown angles. Moreover, most biological specimens are extremely radiation-sensitive, so they must be imaged with low-dose techniques, which yield extremely noisy projections. Furthermore, the samples of the same virus on the thin film are not exactly identical and may even have ice particles stuck to them, adding complexity due to particle heterogeneity. We have formulated an optimization problem in a Compressed Sensing based framework that explicitly models particle heterogeneity. We have very promising results and plan to submit to the European Conference on Computer Vision (ECCV), 2018.
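To give a flavour of the Compressed Sensing machinery involved, here is a generic sparse-recovery solver (ISTA for the LASSO objective). This is a textbook illustration only, not our cryo-EM formulation, which additionally models unknown angles and particle heterogeneity.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of the L1 norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=200):
    # Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by iterating a
    # gradient step on the data term and a soft-thresholding step.
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x
```

In a CS-based reconstruction, the data term enforces consistency with the noisy projections while the sparsity (or other) prior regularizes the severely underdetermined inversion.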
Interactive learning environments facilitate learning by providing hints to fill gaps in the understanding of a concept. Studies suggest that learners do not use hints optimally. Knowledge tracing is the task of modeling a learner's state of knowledge over time, with the goal of predicting the learner's performance on future assessments. Existing knowledge tracing models treat taking a hint on a question as answering it incorrectly. We hypothesize that the learning that results from taking hints is different, and therefore propose a multi-task learning paradigm with a memory-augmented, attention-based deep learning model that jointly predicts the propensity to take a hint and performs knowledge tracing. The model incorporates the effect of both past responses and hints taken on the two tasks. It improves knowledge tracing performance by around 4% and outperforms past work on hint prediction by at least 10 percentage points. Submitted to the International Conference on the World Wide Web, 2018. Received a Pre-Placement Offer from Adobe Research and the Best Overall Research Project Award.
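The joint-training idea can be sketched with a weighted multi-task objective. This is an illustration of the paradigm only; the weighting `alpha` and the exact loss form are assumptions, not the paper's objective.

```python
import numpy as np

def bce(p, y, eps=1e-9):
    # binary cross-entropy between predicted probabilities and labels
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def multitask_loss(p_correct, y_correct, p_hint, y_hint, alpha=0.5):
    # Jointly train on the knowledge-tracing task (predicting whether the
    # learner answers correctly) and the hint-propensity task (predicting
    # whether the learner takes a hint); alpha trades off the two.
    return alpha * bce(p_correct, y_correct) + (1 - alpha) * bce(p_hint, y_hint)
```

Both heads would share the memory-augmented, attention-based encoder, so gradients from each task shape a common representation of the learner's history.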
Tomographic reconstruction is the task of recovering detailed anatomy from X-ray projections. Recent research in tomographic reconstruction is motivated by the need to efficiently recover detailed anatomy from limited measurements. One way to compensate for increasingly sparse sets of measurements is to exploit information from templates, i.e., prior data available in the form of already reconstructed, structurally similar images. Towards this, previous work has used sets of global and patch-based dictionary priors. We propose a global prior that improves both the speed and quality of tomographic reconstruction within a Compressive Sensing framework. We choose a set of representative 2D images, referred to as templates, to build an eigenspace; this eigenspace is then used to guide the iterative reconstruction of a similar slice from sparse acquisition data. Our experiments across a diverse range of datasets show that reconstruction using an appropriate global prior, apart from being faster, gives a much lower reconstruction error than the state of the art.
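The eigenspace construction above can be sketched with PCA. This is a minimal sketch with illustrative function names; how the projection is interleaved with the data-fidelity updates of the CS solver is the part the actual method specifies.

```python
import numpy as np

def build_eigenspace(templates, k):
    # templates: (N, P) array of N flattened, structurally similar
    # 2D template images; returns the mean image and the top-k
    # principal directions of the template set.
    mean = templates.mean(axis=0)
    _, _, Vt = np.linalg.svd(templates - mean, full_matrices=False)
    return mean, Vt[:k]

def project_to_prior(x, mean, basis):
    # Express an intermediate reconstruction in the template eigenspace;
    # this acts as the global prior guiding the iterative reconstruction.
    return mean + (x - mean) @ basis.T @ basis
```

Because the basis is low-dimensional, the projection is cheap, which is one reason a global eigenspace prior can be faster than dictionary-based alternatives.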
The use of machine learning algorithms frequently involves careful tuning of learning parameters and model hyperparameters. Unfortunately, this tuning is often a "black art" requiring expert experience, rules of thumb, or sometimes brute-force search. There is therefore great appeal in automatic approaches that can optimize the performance of any given learning algorithm on the problem at hand. I researched and explored ways to optimally set the hyperparameters of a learning model using Bayesian Optimization. The learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP), and acquisition functions are used to choose the next hyperparameter setting to evaluate. I also gave a talk on applying these methods to models with a large number of hyperparameters.
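A minimal 1D sketch of this loop, assuming an RBF kernel and Expected Improvement as the acquisition function (both common but illustrative choices, as are the length scale and noise level):

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(X1, X2, length_scale=1.0):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X, y, X_star, noise=1e-6):
    # GP posterior mean and standard deviation at candidate points X_star,
    # conditioned on observed (hyperparameter, performance) pairs (X, y).
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    K_s = rbf_kernel(X, X_star)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y
    var = np.diag(rbf_kernel(X_star, X_star)) - np.diag(K_s.T @ K_inv @ K_s)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best_so_far):
    # Acquisition function for minimization: scores how much a candidate
    # is expected to improve on the best observed value, balancing the
    # posterior mean (exploitation) against uncertainty (exploration).
    z = (best_so_far - mu) / sigma
    return (best_so_far - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```

Each round, one would evaluate the learning algorithm at the candidate maximizing EI, add the result to (X, y), and refit the posterior.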
Alternate Email: firstname.lastname@example.org