CS 726: Advanced Machine Learning (Spring 2021)
Lecture Schedule Slot 9, Mon-Thurs 3:30pm to 5:00pm.
Instructor: Sunita Sarawagi
TA: Soumyadeep Thakur, Deepti Mittal, Abhishek, Alex James, Parikshit Bansal.
Email to reach all TAs and the instructor: CS726@googlegroups.com.
For questions of general interest to the class, use Moodle.
Instructor's office hours check here
Syllabus and week-wise calendar: Click here
Homeworks Click here
This course will concentrate on modeling, generation, and prediction of multiple inter-dependent variables. The topics covered will span probabilistic graphical models (directed and undirected); inference methods such as junction trees, belief propagation, and other approximate methods; sampling methods such as MCMC; generative models such as variational auto-encoders, GANs, and deep Gaussian processes; neural architectures for structured prediction; calibration and out-of-distribution detection; and Bayesian neural networks. Where appropriate, the techniques will be linked to applications in NLP, speech recognition, graphics, and science.
The course is open to CS MTech, PhD, DD, and BTech students provided they
have taken an introductory machine learning course at IITB and
obtained at least a BC grade in it. Students of other departments
should approach the instructor for permission only if they meet the necessary
pre-requisites. Audit mode students are not allowed.
A formal introductory ML course such as CS 725, CS 337, or CS 419 is
required. Online ML courses do not qualify as pre-requisites.
The course assumes basic knowledge of probability, statistics, and
linear algebra. Chapters 2 and 3 of the Deep Learning book (Goodfellow,
Bengio, and Courville) are a good place to refresh the required
background. The course also assumes a basic background in machine
learning, for example as covered in Chapter 5 of the same book, and in
deep learning, for example as covered in Chapter 6. Further, we will
assume that students are familiar with CNNs, RNNs, and
sequence-to-sequence learning.
Approximate credit structure
- 25% Mid-semester exam
- 35% End semester exam
- 15% Graded homeworks
- 10% Quizzes
- 15% Project
Primary text books
Probabilistic Graphical Models: Principles and Techniques,
by Daphne Koller and Nir Friedman, MIT Press, 2009.
Deep Learning,
by Ian Goodfellow, Yoshua Bengio and Aaron Courville, MIT Press, 2016.
The course calendar will provide links
to other relevant papers and book chapters for specific topics.
Other relevant text books:
- Online courses on deep learning
- A Primer on Neural Network Models for Natural Language Processing, by Yoav Goldberg.
- R. G. Cowell, A. P. Dawid, S. L. Lauritzen and D. J. Spiegelhalter. "Probabilistic Networks and Expert Systems". Springer. 1999.
- M. I. Jordan (ed). "Learning in Graphical Models". MIT Press. 1998. Collection of papers; these appear collated here.
- J. Pearl. "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference". Morgan Kaufmann. 1988.
- S. L. Lauritzen. "Graphical Models". Oxford Science Publications. 1996.
- F. V. Jensen. "Bayesian Networks and Decision Graphs". Springer. 2001.
- Neural Networks and Deep Learning, by Michael Nielsen (online book).