CS 344 and CS 386: Artificial Intelligence
(Picture source: http://www.extremetech.com/wp-content/uploads/2012/08/Google500KmilesLexus.jpg.)
This page serves as the primary resource for CS 344 (Artificial Intelligence) and CS 386 (Artificial Intelligence Lab).
Office: Room 220, New CSE Building
Office: 402, New CSE Building, Desk 40
Office: SynerG Lab, KReSIT Building, Desk B-2
Krishna Murthy Bukkapatnam
Office: GRC Lab, Machine 76; Hostel 2, Room 227
Lectures will be held in 103, New CSE Building, Tuesdays 9.00
a.m. – 10.25 a.m. and Thursdays 10.00 a.m. – 11.25 a.m. CS
344 is officially assigned to Slot 4, and will be treated as such for
scheduling examinations and so on.
Lab sessions will be held in Software Lab 2, New CSE Building,
during Slot L2: 2.00 p.m. – 4.55 p.m. Tuesdays.
The instructor's office hours will immediately follow class
lectures and labs. Meetings can also be arranged by appointment.
Artificial Intelligence (AI) surrounds us today: in phones that
respond to voice commands, programs that beat humans at Chess and Go,
robots that assist surgeries, vehicles that drive in urban traffic,
and systems that recommend products to customers on e-commerce
platforms. This course aims to familiarise students with the breadth
of modern AI, to impart an understanding of the dramatic surge of AI
in the last decade, and to foster an appreciation for the distinctive
role that AI can play in shaping the future of our society.
The course will provide a historical perspective of the field of AI
and discuss its foundations in logic, knowledge representation and
reasoning, search, and learning. A small selection of specialised
topics will also be taken up; these could include, for example, speech
and natural language processing, robotics, crowdsourcing, computer
vision, and multiagent systems. The theory and lab components will
proceed in step to equip students with the knowledge and skills to
design and apply solutions based on AI.
Students interested in gaining more depth are encouraged to follow
this basic course with advanced ones on topics such as machine
learning, information retrieval and data mining, sequential decision
making, robotics, speech and natural language processing, computer
vision, and game theory.
CS 344 and CS 386 are core courses in the CSE undergraduate
programme. They can only be taken by CSE B.Tech. students in their
third (or higher) year.
Grades for CS 344 will be decided based on four class tests with a
combined worth of 45 marks; a mid-semester examination worth 20 marks;
and an end-semester examination worth 35 marks.
Grades for CS 386 will be decided based on 6–8 lab assignments,
each worth 10–20 marks.
Students are expected to adhere to the highest standards of
integrity and academic honesty. Acts such as copying in the
examinations and sharing code for the lab assignments will be dealt
with strictly, in accordance with the prescribed
actions for academic malpractice.
Texts and References
Artificial Intelligence: A Modern Approach
Stuart J. Russell and Peter Norvig, 3rd edition, 2010
Computing Machinery and Intelligence
A. M. Turing, 1950
Multilayer Feedforward Networks are Universal Approximators
Kurt Hornik, Maxwell Stinchcombe, and Halbert White, 1989
Reinforcement Learning: A Survey
Leslie Pack Kaelbling, Michael L. Littman, and Andrew W. Moore, 1996
Artificial Intelligence and Life in 2030
Peter Stone, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren
Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece
Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press,
AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller, 2016
The Moodle pages for CS 344 and CS 386 will be used for sharing
additional resources for the lectures and assignments, and for
recording marks.
E-mail is the best means of communicating with the instructor;
students must send e-mail with "[CS344]" in the subject line, with a
copy marked to the TAs.
January 3: Welcome, Introduction to the course.
January 5: AI: Past, present, and future.
Reading: Chapter 1, Russell and Norvig (2010); Slides.
Summary: Definitions of AI; AI effect; Historical underpinnings;
Projects from the 1950s–80s; Reasons for the recent surge.
References: Turing (1950), Stone et al. (2016).
January 10: Agents and environments.
Reading: Chapter 2,
Russell and Norvig (2010).
Summary: Agents; Rationality; Bounds
on resources (computation, memory); Chinese Room Experiment; Types of
agents; Properties of environments.
Reference: Chinese Room Experiment on Wikipedia.
January 12: Perceptron Learning Algorithm.
Reading: Class Note 1.
Summary: Linear separability; Perceptron, Perceptron Learning Algorithm; Proof of convergence.
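As an illustrative sketch (not from the course materials), the Perceptron Learning Algorithm covered in this lecture fits in a few lines of Python; the AND-style data set below is made up for the example:

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Perceptron Learning Algorithm for labels y in {-1, +1}.

    X is an (n, d) array of inputs; a bias feature is appended
    internally. Returns a weight vector w such that
    sign(w . [x, 1]) predicts the label on linearly separable data.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias feature
    w = np.zeros(Xb.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(Xb, y):
            if yi * np.dot(w, xi) <= 0:   # misclassified (or on boundary)
                w += yi * xi              # the Perceptron update rule
                mistakes += 1
        if mistakes == 0:                 # all points classified correctly
            break
    return w

# A linearly separable toy problem (AND-like data).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, -1, -1, 1])
w = perceptron(X, y)
preds = np.sign(np.hstack([X, np.ones((4, 1))]) @ w)
```

On linearly separable data such as this, the convergence proof from the lecture guarantees that the loop terminates with zero mistakes.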
January 17: Perceptron: variations and practical considerations.
Reading: Section 18.4, Russell and Norvig (2010).
Summary: Maximum margin formulation; Overfitting, training/test/validation data sets; feature design; multiclass classification.
January 19: Neural networks.
Reading: Section 18.7, Russell and Norvig (2010).
Summary: Sigmoid activation function and artificial neuron; Gradient descent; Neural networks as universal function approximators.
Reference: Hornik, Stinchcombe, and White (1989), A visual proof that neural networks can compute any function.
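A minimal sketch of gradient descent on a single sigmoid neuron, trained with squared error on one made-up example (the data, learning rate, and iteration count are arbitrary choices); the update follows the chain rule, using sigmoid'(z) = y(1 - y):

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation: squashes any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on E = 0.5 * (y - t)^2 for a single example (x, t):
# dE/dw = (y - t) * y * (1 - y) * x.
x = np.array([1.0, 2.0])
t = 1.0
w = np.zeros(2)
eta = 0.5                      # learning rate (an arbitrary choice)
for _ in range(200):
    y = sigmoid(np.dot(w, x))
    w -= eta * (y - t) * y * (1 - y) * x

final_error = abs(sigmoid(np.dot(w, x)) - t)
```

The error shrinks slowly near the end because the factor y(1 - y) vanishes as the output saturates, which is one motivation for alternative loss functions and activations.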
January 24: Class Test 1, Backpropagation.
Summary: Backpropagation algorithm: forward pass and backward pass.
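The forward and backward passes can be sketched for a tiny 2-2-1 network with sigmoid units and squared error; the weights below are arbitrary illustrative values, not from the lectures:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One step of backpropagation on a 2-2-1 network with
# E = 0.5 * (y - t)^2.
x = np.array([0.5, -0.2])
t = 1.0
W1 = np.array([[0.1, 0.4], [-0.3, 0.2]])   # hidden-layer weights
W2 = np.array([0.2, -0.1])                 # output-layer weights

# Forward pass: compute and cache each layer's activation.
h = sigmoid(W1 @ x)          # hidden activations
y = sigmoid(W2 @ h)          # network output

# Backward pass: propagate the error derivative layer by layer.
delta_out = (y - t) * y * (1 - y)            # dE/d(output pre-activation)
grad_W2 = delta_out * h                      # dE/dW2
delta_hid = delta_out * W2 * h * (1 - h)     # dE/d(hidden pre-activations)
grad_W1 = np.outer(delta_hid, x)             # dE/dW1

eta = 0.1                                    # learning rate
W2 -= eta * grad_W2
W1 -= eta * grad_W1
```

The key point is that each layer's delta is computed from the layer above it, so all gradients are obtained in one backward sweep.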
January 31: Overview of supervised learning methods.
Reading: Sections 18.1, 18.2, 18.3, 18.6, 18.8, 18.9, 18.10, Russell and Norvig (2010).
Summary: Derivation of backpropagation update rule; Decision trees; Cross-validation.
February 2: Nearest-neighbour methods; k-means clustering.
Reading: Class Note 2.
Summary: k-NN classification and design issues; k-means clustering problem; k-means clustering algorithm.
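Lloyd's k-means algorithm from this lecture alternates an assignment step and an update step until the centres stop moving; the two-blob data set below is made up for illustration:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """k-means clustering: alternate assignment and update steps."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre.
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each centre moves to the mean of its points.
        centres = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centres, labels

# Two well-separated blobs; k-means should place one centre per blob.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1],
              [10, 10], [10, 11], [11, 10], [11, 11]], dtype=float)
centres, labels = kmeans(X, k=2)
```

Each step can only decrease the sum of squared distances to the assigned centres, which is the basis of the convergence argument taken up in the next test's lecture.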
February 7: Speech recognition for under-resourced languages using
probabilistic transcriptions (invited talk).
February 9: Supervised learning for shape segmentation (invited talk).
February 14: Class Test 2, k-means clustering.
Summary: Convergence of k-means algorithm.
February 16: Reinforcement learning.
Reading: Sections 1, 3, 4.2, Kaelbling et al. (1996); Slides.
Summary: Sequential decision making; Achieving behaviour by specifying rewards; Markov Decision Problems; Bellman's (Optimality) Equations.
February 23: Mid-semester examination.
February 28: Reinforcement learning.
Summary: Value Iteration; Q-learning; Practical challenges and applications of RL.
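Value Iteration can be sketched on a made-up two-state MDP: repeatedly applying the Bellman optimality backup drives V to the optimal values, here V*(s1) = 2/(1 - 0.9) = 20 and V*(s0) = 1 + 0.9 × 20 = 19:

```python
import numpy as np

# A toy 2-state MDP (states 0, 1; actions 0, 1). T[s][a] is a list of
# (next_state, probability) pairs; R[s][a] is the expected reward.
# The numbers are made up for illustration.
T = {0: {0: [(0, 1.0)], 1: [(1, 1.0)]},
     1: {0: [(0, 1.0)], 1: [(1, 1.0)]}}
R = {0: {0: 0.0, 1: 1.0},
     1: {0: 0.0, 1: 2.0}}
gamma = 0.9                    # discount factor

V = np.zeros(2)
for _ in range(200):
    # Bellman optimality backup:
    # V(s) <- max_a [ R(s, a) + gamma * sum_s' T(s, a, s') V(s') ]
    V = np.array([max(R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a])
                      for a in (0, 1))
                  for s in (0, 1)])
```

Each backup is a contraction with factor gamma, so the error shrinks geometrically and 200 iterations are far more than enough here.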
March 2: Search.
Reading: Chapter 3, Russell and Norvig (2010).
Summary: Search problem instances; Uninformed search strategies
(Breadth-first Search, Uniform-cost Search, Depth-first Search).
March 7: Search.
Reading: Pieter Abbeel's illustration of A* graph search.
Summary: Informed search; A* algorithm; Heuristics, admissibility and
consistency.
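A compact sketch of A* graph search (the graph and heuristic values below are made up; the heuristic is admissible for this graph):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* graph search. graph[u] is a list of (v, cost) edges; h is an
    admissible heuristic (it never overestimates the cost to goal)."""
    frontier = [(h[start], 0, start, [start])]   # (f = g + h, g, node, path)
    closed = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)                          # expand node once
        for nbr, cost in graph.get(node, []):
            if nbr not in closed:
                heapq.heappush(frontier,
                               (g + cost + h[nbr], g + cost, nbr, path + [nbr]))
    return None, float('inf')

graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 2)]}
h = {'S': 4, 'A': 3, 'B': 2, 'G': 0}
path, cost = a_star(graph, h, 'S', 'G')
```

Because nodes are popped in order of f = g + h, the first time the goal is popped its path is optimal (given an admissible, consistent heuristic).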
March 9: Search.
Reading: Chapter 5, Russell and Norvig (2010); Pieter Abbeel's illustration of Alpha-Beta pruning.
Summary: Turn-taking games; Adversarial search; Minimax principle; Alpha-Beta pruning; Handling stochasticity; Popular game-playing programs.
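Minimax with Alpha-Beta pruning can be sketched on a game tree given as nested lists (the tree below is a textbook-style example, with leaves holding payoffs for the maximising player):

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with Alpha-Beta pruning; leaves are payoff values."""
    if not isinstance(node, list):   # leaf: return its payoff
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:        # beta cut-off: MIN avoids this branch
                break
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:        # alpha cut-off: MAX avoids this branch
                break
        return value

# Depth-2 tree: MAX chooses a branch, MIN then chooses a leaf.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
value = alphabeta(tree, float('-inf'), float('inf'), True)
```

Pruning never changes the minimax value; it only avoids exploring branches that provably cannot influence the decision at the root.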
March 14: Probabilistic reasoning.
Reading: Chapter 13, Russell and Norvig (2010).
Summary: Random variables, Joint distributions, Beliefs as probabilities, Application of de Finetti's theorem, Marginalisation, Conditional probabilities, Chain rule, Bayes' rule.
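A worked example of Bayes' rule together with marginalisation (the numbers are made up, in the style of a standard diagnostic-test exercise):

```python
# Prior and likelihoods: P(D) = 0.01, P(Pos | D) = 0.99,
# P(Pos | not D) = 0.05. All values are illustrative.
p_d = 0.01
p_pos_given_d = 0.99
p_pos_given_not_d = 0.05

# Marginalisation: P(Pos) = P(Pos|D) P(D) + P(Pos|~D) P(~D).
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

# Bayes' rule: P(D|Pos) = P(Pos|D) P(D) / P(Pos).
p_d_given_pos = p_pos_given_d * p_d / p_pos
```

Despite the accurate test, the posterior is only 1/6, because the prior P(D) is small; this is the standard base-rate effect.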
March 16: Probabilistic reasoning.
Reading: Sections 14.1, 14.2, 14.3, Russell and Norvig (2010).
Summary: Independence, Conditional independence, Introduction to Bayes Nets.
March 21: Class Test 3, Probabilistic reasoning.
Reading: Pieter Abbeel's introduction to Bayes Nets.
Summary: Conditional independence in Bayes nets, Necessary and sufficient conditions, illustrations.
March 23: Probabilistic reasoning.
Reading: Section 14.4, Russell and Norvig (2010), Pieter Abbeel's lecture on conditional independence and D-separation.
Summary: D-separation algorithm, Exact inference (by enumeration and by variable elimination).
Reference: Pieter Abbeel's D-separation examples.
March 28: Probabilistic reasoning.
Reading: Class Note 3.
Summary: Bayesian reasoning, Maintaining and updating a belief distribution, Thompson Sampling algorithm for multi-armed bandits.
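Thompson Sampling for Bernoulli bandits can be sketched as follows (the arm means, horizon, and Beta(1, 1) priors are illustrative choices): maintain a Beta posterior per arm, sample from every posterior, and pull the arm with the highest sample:

```python
import random

def thompson_bernoulli(true_means, horizon, seed=0):
    """Thompson Sampling for Bernoulli bandits with Beta(1, 1) priors."""
    random.seed(seed)
    n = len(true_means)
    successes = [0] * n
    failures = [0] * n
    pulls = [0] * n
    for _ in range(horizon):
        # Sample one draw from each arm's Beta posterior.
        samples = [random.betavariate(1 + successes[i], 1 + failures[i])
                   for i in range(n)]
        arm = samples.index(max(samples))        # pull the highest sample
        reward = 1 if random.random() < true_means[arm] else 0
        successes[arm] += reward                 # Bayesian posterior update
        failures[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_bernoulli([0.2, 0.8], horizon=500)
```

As the posteriors concentrate, samples from the worse arm rarely exceed those from the better arm, so exploration tapers off naturally.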
March 30: Probabilistic reasoning.
Reading: Section 14.5.1, Russell and Norvig (2010).
Summary: Approximate inference in Bayes Nets, Sampling, Rejection Sampling, Likelihood weighting.
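Rejection sampling on a made-up two-node Bayes Net A -> B illustrates the idea: sample from the prior, discard samples inconsistent with the evidence, and average the rest. Here P(A=1) = 0.3, P(B=1|A=1) = 0.9, P(B=1|A=0) = 0.2, and the exact posterior is P(A=1|B=1) = 0.27/0.41:

```python
import random

random.seed(0)
accepted = 0
a_count = 0
for _ in range(100000):
    a = 1 if random.random() < 0.3 else 0     # sample A from its prior
    p_b = 0.9 if a == 1 else 0.2
    b = 1 if random.random() < p_b else 0     # sample B given A
    if b == 1:                                # reject samples where B != 1
        accepted += 1
        a_count += a

estimate = a_count / accepted                 # estimate of P(A=1 | B=1)
```

The weakness the lecture notes address with likelihood weighting is visible here: most samples are thrown away when the evidence is unlikely.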
April 4: Probabilistic reasoning.
Reading: Section 15.5, Russell and Norvig (2010).
Summary: Discussion on likelihood weighting, Dynamic Bayes Nets, Particle Filtering.
April 6: Probabilistic reasoning, Local search.
Reading: Sections 14.5.2 and 4.1, Russell and Norvig (2010).
Summary: Gibbs sampling, Generate-and-test paradigm, Illustration of local search algorithms.
April 11: Class Test 4, Natural language processing.
Reading: Sections 22.1, 22.2, Russell and Norvig (2010).
Summary: N-gram language models, Application to spam detection.
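A bigram language model with maximum-likelihood estimates, P(w_i | w_{i-1}) = count(w_{i-1}, w_i) / count(w_{i-1}), can be sketched on a made-up toy corpus:

```python
from collections import defaultdict

corpus = "the cat sat on the mat the cat ran".split()

bigram = defaultdict(lambda: defaultdict(int))   # count(w1, w2)
unigram = defaultdict(int)                       # count(w1) as a context
for w1, w2 in zip(corpus, corpus[1:]):
    bigram[w1][w2] += 1
    unigram[w1] += 1

def p(w2, w1):
    """Maximum-likelihood bigram probability P(w2 | w1)."""
    return bigram[w1][w2] / unigram[w1] if unigram[w1] else 0.0

# "the" is followed by "cat" twice and "mat" once in the corpus.
p_cat_given_the = p("cat", "the")
```

In practice such counts are smoothed, since unseen bigrams would otherwise get zero probability and zero out the likelihood of a whole document.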
April 13: Natural language processing.
Reading: Section 22.3, Russell and Norvig (2010).
Summary: Spam detection using compression as a blackbox, Information retrieval, Term Frequency and Inverse Document Frequency, PageRank algorithm.
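The PageRank algorithm can be sketched by power iteration on a made-up three-page web (damping factor 0.85; page 2 is linked by both other pages, so it should rank highest):

```python
import numpy as np

def pagerank(links, d=0.85, iters=100):
    """Power-iteration PageRank. links[i] lists the pages that page i
    links to; d is the damping factor."""
    n = len(links)
    pr = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = np.full(n, (1 - d) / n)         # teleportation term
        for i, outs in enumerate(links):
            if outs:
                for j in outs:
                    new[j] += d * pr[i] / len(outs)
            else:                             # dangling page: spread evenly
                new += d * pr[i] / n
        pr = new
    return pr

pr = pagerank([[1, 2], [2], [0]])
```

The ranks form a probability distribution over pages (the stationary distribution of the random-surfer model), so they sum to 1.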
April 21: End-semester examination.
Lab Assignments and Schedule
Students are expected to complete each lab assignment within the lab
slot (by 5.00 p.m. on the allotted day). Submissions must be uploaded
on Moodle in the format specified.
If a submission is not made by 5.00 p.m., a "carry over"
will be counted against the assignment. Assignments that are carried
over will only be evaluated after the student attends a session with a
TA or the instructor to explain their submission and demonstrate its
working. A special lab session will be announced to evaluate carry
overs.
A student may carry over up to two lab assignments without any
penalty. A third carry over will incur a penalty of 2 marks; a fourth
carry over will incur a penalty of 4 marks; subsequent carry overs
will incur a penalty of 6 marks.
Below is the schedule for lab assignments.