This page provides information on what is covered in each lecture, along with PDFs of the handwritten material from each class. It will be updated as the class progresses.

The course notes for this course offered in spring 2015 are available here. The course notes for this course offered in autumn 2013 are available here.

The course notes are available from the course page. You can check out this link for a growing list of resource material (slides/videos/code/lecture notes, etc.) for optimisation in general, and convex optimisation in particular. Please let me know if you come across useful resource material that I could add to this list.
You can check out this link for a growing list of papers that we plan to cover in the assignments/projects or sometimes through discussions in the class. Please start looking at these papers for your seminar and/or projects.

Practice problem set: Please find here a growing list of practice problems, including some previous years' question papers and their solutions. In particular, do look at this set. You can also refer to this wiki for definitions of several concepts relevant to this course, connections between those concepts through theorems, and proofs of those theorems.

It is very important that you attempt the homework problems that were posted at the end of (almost) each lecture.

You can consider solving the following problems from Boyd (solutions to a few of them have been discussed in class or on Moodle): (A) 2.1 to 2.37 and 2.39. (B) 3.1 to 3.35 (questions involving quasi-convexity are optional) and 3.47 to 3.56. (C) 4.1 to 4.6, 4.8 to 4.29, 4.38 to 4.49, 4.56, 4.61 and 4.63. (D) 5.1, 5.2, 5.4, 5.5, 5.7, 5.9, 5.10 to 5.23, 5.26 to 5.31, 5.39 and 5.41 to 5.43. (E) 9.1 to 9.11 and 9.23 to 9.32; 9.12 to 9.22 are optional. (F) 11.1 to 11.12 and 11.15 to 11.18.


20.7.2015

  • classnotes
  • You can refer to Linear Algebra Done Right for brushing up/reading up on linear algebra.
  • Explain the optimization problem on slide 8 of the notes.
  • Present a geometric interpretation of the solution to the least squares optimization problem on slide 6 of the notes.
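The geometric picture behind the least squares homework item can be checked numerically: the solution x* makes Ax* the orthogonal projection of b onto the column space of A, so the residual is orthogonal to every column. A minimal sketch (the matrix and vector below are made-up illustrative data, not from the notes):

```python
import numpy as np

# Least squares: minimize ||Ax - b||_2. The minimizer x* projects b
# orthogonally onto range(A), so A^T (b - A x*) = 0 (normal equations).
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
residual = b - A @ x_star

# Orthogonality of the residual to the column space of A.
print(np.allclose(A.T @ residual, 0.0))  # True
```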

23.7.2015

  • classnotes
  • You can refer to Linear Algebra Done Right for brushing up/reading up on linear algebra.
  • Linear algebra brushup: Understand and justify each step of derivation on page 4 of the notes.
  • Given an affine set S, give a procedure to obtain A and b such that S = { x | Ax = b }, taking cues from pages 145 to 181 of these notes.

27.7.2015

  • classnotes
  • You may refer to Chapter 2 (especially pages 93 to 97) of Topology by Munkres to understand the connection between open sets, closure, closed sets, limit points and neighborhoods for topological spaces. Topology and Groupoids by Ronald Brown is another good reference.
  • Read up on the definition of a norm. Prove that every normed space is a metric space.
  • Understand the different definitions for interior, boundary, open and closed sets in the case of normed spaces.
  • Read how an inner product space is a normed space.
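For the homework item on norms and metrics, the standard argument can be sketched as follows (a sketch of the usual proof, not necessarily the notes' own derivation):

```latex
% Claim: if (V, \|\cdot\|) is a normed space, then d(x,y) := \|x-y\|
% is a metric on V. Each metric axiom follows from a norm axiom:
\begin{align*}
d(x,y) &= \|x-y\| \ge 0, \quad d(x,y) = 0 \iff x-y = 0 \iff x = y
  && \text{(positive definiteness)} \\
d(x,y) &= \|x-y\| = |-1|\,\|y-x\| = d(y,x)
  && \text{(absolute homogeneity)} \\
d(x,z) &= \|(x-y)+(y-z)\| \le \|x-y\| + \|y-z\| = d(x,y) + d(y,z)
  && \text{(triangle inequality)}
\end{align*}
```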

30.7.2015

  • classnotes
  • Show that the following are vector spaces (assuming scalars come from a set S), and then answer the questions that follow for each of them: the set of all matrices on S, the set of all polynomials on S, and the set of all sequences of elements of S. (HINT: You can refer to this book for answers to most questions in this homework.) How would you understand the concepts of independence, span, basis, dimension and null space (Chapter 2 of this book), norms, inner product and orthogonality (Chapter 6), and eigenvalues and eigenvectors (Chapter 5)? EXTRA: Now consider the set of all random variables and the set of all functions.

3.8.2015

  • classnotes
  • Read pages 8 and 9 of the notes and understand the derivation of the induced matrix 1-norm.
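The result of that derivation, that the induced 1-norm equals the maximum absolute column sum, can be sanity-checked numerically (the matrix below is an arbitrary example):

```python
import numpy as np

# The induced 1-norm ||A||_1 = max_{||x||_1 = 1} ||Ax||_1 equals the
# maximum absolute column sum of A.
A = np.array([[1.0, -2.0], [3.0, 4.0]])

closed_form = np.abs(A).sum(axis=0).max()        # max column sum = 6
assert np.isclose(closed_form, np.linalg.norm(A, 1))

# The maximum is attained at the standard basis vector e_j of the
# maximizing column j, consistent with the derivation.
j = np.abs(A).sum(axis=0).argmax()
e_j = np.eye(A.shape[1])[:, j]
assert np.isclose(np.abs(A @ e_j).sum(), closed_form)
```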

6.8.2015

  • classnotes (part 1)
  • classnotes (part 2)
  • On pages 2 and 4 of the class notes part 2, we saw an example of a Cauchy sequence (in the space of rationals) that did not converge (in that space). Present an example of a non-convergent Cauchy sequence from another space.
  • Read up about complete normed spaces.
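One candidate answer to the homework above (my choice of space, not necessarily the intended one): in the space c00 of finitely supported sequences under the sup norm, the truncations of the harmonic sequence are Cauchy but their pointwise limit has infinite support, so the sequence has no limit inside c00.

```python
# x_n = (1, 1/2, ..., 1/n, 0, 0, ...) in c00 with the sup norm:
# ||x_m - x_n||_inf = 1/(n+1) for m > n, so the sequence is Cauchy,
# yet its only candidate limit (1/k)_k lies outside c00.
def x(n):
    return [1.0 / k for k in range(1, n + 1)]

def sup_dist(a, b):
    m = max(len(a), len(b))
    a = a + [0.0] * (m - len(a))   # pad with zeros to a common length
    b = b + [0.0] * (m - len(b))
    return max(abs(u - v) for u, v in zip(a, b))

assert sup_dist(x(10), x(20)) == 1.0 / 11
# Distances between later terms shrink: Cauchy behaviour.
assert sup_dist(x(100), x(200)) < sup_dist(x(10), x(20))
```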

9.8.2015

  • classnotes
  • Show that the vector space of sequences with bounded p-norm is complete and that its basis is countably infinite.
  • Prove the different alternative expressions for the Frobenius norm stated in the class.
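I cannot verify which expressions were stated in class, but the standard equivalent forms of the Frobenius norm can be checked numerically (the matrix is an arbitrary example):

```python
import numpy as np

# Standard equivalent expressions:
# ||A||_F = sqrt(sum_ij A_ij^2) = sqrt(trace(A^T A)) = sqrt(sum_i sigma_i^2)
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

entrywise = np.sqrt((A ** 2).sum())
via_trace = np.sqrt(np.trace(A.T @ A))
via_svd = np.sqrt((np.linalg.svd(A, compute_uv=False) ** 2).sum())

assert np.isclose(entrywise, via_trace)
assert np.isclose(entrywise, via_svd)
assert np.isclose(entrywise, np.linalg.norm(A, 'fro'))
```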

13.8.2015


17.8.2015


20.8.2015

  • classnotes part 1 and classnotes part 2
  • References: "Preliminaries on convex analysis and vector optimization" pages 1-17, Boyd and Vandenberghe Sections 2.2.5, 2.6.1, 3.1 and 3.3, Nemirovski: Lecture Notes on Modern Convex Optimization Sections C.1, C.6.3, B.1.4, B.2.6.B.
  • Complete the dual interpretation of the positive semi-definite cone on the penultimate page of classnotes part 1. Plot a finite number of half-spaces parametrized by \theta to verify the dual description.
  • Prove Hölder's inequality making use of the convexity hint provided in the class.
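Before proving Hölder's inequality, it can help to test it numerically on random data for a conjugate pair (p, q) with 1/p + 1/q = 1 (a sanity check, of course, not a proof):

```python
import numpy as np

# Holder's inequality: |<x, y>| <= ||x||_p * ||y||_q for conjugate
# exponents p, q. Here p = 3, q = 3/2 (1/3 + 2/3 = 1).
rng = np.random.default_rng(0)
p, q = 3.0, 1.5

for _ in range(1000):
    x = rng.standard_normal(5)
    y = rng.standard_normal(5)
    lhs = abs(x @ y)
    rhs = np.linalg.norm(x, p) * np.linalg.norm(y, q)
    assert lhs <= rhs + 1e-12   # tolerance for floating-point round-off
```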

24.8.2015

  • classnotes
  • References: Boyd and Vandenberghe Section 3.3, Nemirovski: Lecture Notes on Modern Convex Optimization Sections C.1, C.6.3, B.1.4, B.2.6.B.
  • Give a graphical interpretation of the convex conjugate, taking cues from the results on pages 4-6 as well as pages 7-20 of the classnotes, which are based on pages 231 to 239 of these detailed notes on basics.
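The graphical interpretation can also be explored numerically: f*(y) = sup_x (xy − f(x)) is the largest signed gap between the line x ↦ xy and the graph of f. A grid-based sketch for f(x) = x²/2, which is its own conjugate (my example, not from the notes):

```python
import numpy as np

# Approximate the convex conjugate f*(y) = sup_x (x*y - f(x)) on a grid.
xs = np.linspace(-10, 10, 20001)
f = xs ** 2 / 2                      # f(x) = x^2 / 2, so f*(y) = y^2 / 2

def conjugate(y):
    return np.max(xs * y - f)        # grid approximation of the supremum

# The supremum is attained where y is the slope of f, i.e. x = y.
for y in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(conjugate(y) - y ** 2 / 2) < 1e-3
```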

27.8.2015


31.8.2015

  • classnotes
  • References: Boyd and Vandenberghe Section 3.1.3, 6.5.5, Nemirovski Section C.6.1 and C.6.2.
  • We proved necessary and sufficient first-order conditions for (strict/strong) convexity in the class. Attempt the same for second-order necessary and sufficient conditions; that is, in terms of the Hessian, which we introduced in a previous class. As a hint, you can use the result for the Hessian proved on pages 18 and 19 of these notes.
  • Complete the homework problem on the dual interpretation of the positive definite cone from the previous lecture.
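As a concrete companion to the second-order condition (convexity iff the Hessian is PSD everywhere), one can spot-check that a known convex function has a PSD Hessian at random points. Log-sum-exp is a convenient example (my choice, not from the notes); its Hessian has the closed form diag(p) − p pᵀ with p the softmax of x:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())          # shift for numerical stability
    return e / e.sum()

# Hessian of f(x) = log(sum(exp(x))) is diag(p) - p p^T, p = softmax(x).
rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.standard_normal(4)
    p = softmax(x)
    H = np.diag(p) - np.outer(p, p)
    # PSD check: all eigenvalues nonnegative (up to round-off).
    assert np.linalg.eigvalsh(H).min() >= -1e-12
```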

3.9.2015

  • classnotes
  • References: Boyd and Vandenberghe Section 3.1.3, 4.1, 4.2, Nemirovski Section C.3 and C.5

14.9.2015

  • classnotes
  • Discussion of solutions to midsem questions

21.9.2015

  • classnotes part 1 and classnotes part 2
  • Connect the convexity preserving operations on convex functions with the convexity preserving operations on convex sets (via epigraphs)
  • References: Boyd and Vandenberghe Sections 2.3 and 3.2

24.9.2015

  • classnotes part 1 and classnotes part 2
  • We saw that several operations on convex functions preserve convexity. What about preservation of closedness (of the epigraph) of the functions under those operations?
  • References: Boyd and Vandenberghe Sections 3.2 and 9.1 and 9.2
  • Several graphical illustrations can be found in the matlab code available here.
  • An illustration of random walk based descent, shown in class today, is based on the matlab code available here, which is adapted from the matlab code accompanying the book Applied Optimization with MATLAB Programming
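For readers without MATLAB, here is a minimal random-walk descent sketch in Python (my own simplification, not the code accompanying the book): propose a random direction and accept the step only if it decreases the objective.

```python
import numpy as np

def random_walk_descent(f, x0, step=0.1, iters=2000, seed=0):
    """Greedy random-walk descent: keep only improving random steps."""
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, float), f(x0)
    for _ in range(iters):
        d = rng.standard_normal(x.shape)
        cand = x + step * d / np.linalg.norm(d)   # unit-length random step
        if f(cand) < fx:                          # accept only improvements
            x, fx = cand, f(cand)
    return x, fx

# Toy objective with minimizer (1, -2); f(x0) = 114 at the start.
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 2) ** 2
x, fx = random_walk_descent(f, [5.0, 5.0])
print(fx)  # much smaller than the starting value 114
```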

28.9.2015


1.10.2015

  • classnotes part 1 and classnotes part 2
  • Present examples of sequences that are R-convergent but not Q-convergent
  • Understand and present the relation between Lipschitz continuity, continuity, uniform continuity, differentiability, continuous differentiability and so on.
  • Understand the purpose behind the joint Wolfe conditions for line search in equations (4.90) and (4.91) of the basic class notes
  • References: Boyd and Vandenberghe Sections 9.3.1 and Nocedal and Wright Section A.2 (subsection "Rates of Convergence"). Extra advanced reading: Nemirovski Section 5.4.2
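The joint role of the two Wolfe conditions from the line-search homework can be illustrated on a simple quadratic: sufficient decrease rejects overly long steps, while the curvature condition rejects overly short ones. A sketch with the standard constants c1, c2 (the quadratic and the specific step sizes are my illustrative choices):

```python
import numpy as np

# f(x) = 0.5 ||x||^2, grad f(x) = x; d is a descent direction.
f = lambda x: 0.5 * x @ x
grad = lambda x: x

def wolfe_ok(x, d, a, c1=1e-4, c2=0.9):
    # Sufficient decrease: f(x + a d) <= f(x) + c1 a grad(x).d
    suff = f(x + a * d) <= f(x) + c1 * a * (grad(x) @ d)
    # Curvature: grad(x + a d).d >= c2 grad(x).d
    curv = grad(x + a * d) @ d >= c2 * (grad(x) @ d)
    return suff and curv

x = np.array([1.0, 1.0])
d = -grad(x)                      # steepest descent direction
assert wolfe_ok(x, d, 1.0)        # the exact minimizing step passes both
assert not wolfe_ok(x, d, 0.001)  # tiny step violates curvature
assert not wolfe_ok(x, d, 2.5)    # long step violates sufficient decrease
```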

5.10.2015


8.10.2015

  • classnotes
  • Prove the quadratic convergence rate bounds for Newton's method stated on slide 5 of the classnotes using hints from page 6. The detailed solution is presented in Section 9.5.3 of Boyd and Vandenberghe.
  • Prove the convergence rate bounds for gradient descent stated on slide 8 of the classnotes in the case in which the function is only known to be Lipschitz continuous. The proof is exactly the same as that for the subgradient descent algorithm under Lipschitz continuity and convexity assumptions on the function. See this link for the proof.
  • Understand and present the best convergence rate bound for gradient (steepest) descent with constant step-size from these notes
  • References: Boyd and Vandenberghe Section 9.3, and Sections 1.2.2, 1.2.3 and 1.2.4 of Introductory Lectures on Convex Programming by Nesterov.
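The linear (geometric) rate for smooth, strongly convex functions discussed above can be observed empirically on a diagonal quadratic: with step size 1/L, the error contracts by at least (1 − μ/L) per iteration. A sketch with illustrative parameters:

```python
import numpy as np

# f(x) = 0.5 x^T Q x with eigenvalues mu and L; minimizer is 0.
mu, L = 1.0, 10.0
Q = np.diag([mu, L])
x = np.array([1.0, 1.0])
rate = 1 - mu / L                    # predicted per-step contraction

errs = []
for _ in range(50):
    errs.append(np.linalg.norm(x))
    x = x - (1.0 / L) * (Q @ x)      # gradient step with alpha = 1/L

# Observed contraction never exceeds the predicted (1 - mu/L) factor.
assert all(errs[i + 1] <= rate * errs[i] + 1e-12
           for i in range(len(errs) - 1))
```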

12.10.2015


15.10.2015

  • classnotes
  • We described the lasso problem in class. Explain how to solve it using subgradient analysis.
  • References: Section 2 of these notes by Bertsekas for weak duality and the min-max inequality, and Section 3 of the same notes for conditions for strong duality and the analysis of saddle points. For subgradient methods, refer to Sections 1, 2 and 3 of these notes on advanced optimization by Boyd.
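A minimal sketch of subgradient descent for the lasso homework (random data and illustrative step sizes, all my assumptions; sign(x) with sign(0) = 0 gives a valid subgradient of the l1 term):

```python
import numpy as np

# Lasso: minimize 0.5 ||Ax - b||^2 + lam * ||x||_1 via subgradient
# descent with a diminishing step size.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.5

def obj(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum()

x = np.zeros(5)
best = obj(x)
for k in range(1, 3001):
    g = A.T @ (A @ x - b) + lam * np.sign(x)   # a subgradient at x
    x = x - (0.01 / np.sqrt(k)) * g            # diminishing step size
    best = min(best, obj(x))                   # track the best iterate

assert best < obj(np.zeros(5))   # improves on the starting point
```

Tracking the best iterate matters because the subgradient method is not a descent method: individual steps may increase the objective.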

19.10.2015

  • classnotes part 1 and classnotes part 2
  • How would you work out the KKT conditions if the function f were not differentiable?
  • References: Boyd and Vandenberghe Sections 5.1, 5.2. Geometric motivation for Lagrange multipliers and KKT necessary conditions in Sections 4.4.1 and 4.4.2 (page 282 onwards) of my lecture notes

26.10.2015

  • classnotes part 1, classnotes part 2 and extra reading
  • References: Boyd and Vandenberghe Sections 5.1, 5.2. Nemirovski: Lecture Notes on Modern Convex Optimization Section 1.4.5 (especially Theorem 1.4.2). Advanced: To appreciate the theorem of alternatives that is used to prove strong duality in linear and conic duality, see Boyd and Vandenberghe Section 5.8 (especially page 263) and Nemirovski Section 1.4.4 (extra reading for geometric interpretation of the conic dual).

29.10.2015

  • classnotes part 1 and classnotes part 2
  • References: Geometric interpretation of duality in optimisation problems: Boyd and Vandenberghe Section 5.3, Page 10 to 13 of class notes, Section 4.4.3 (page 292 onwards) of my lecture notes. You can read more about the Lasso and its dual here.

31.10.2015


2.11.2015


5.11.2015

  • classnotes
  • References: For Interior point method, refer to Boyd and Vandenberghe Sections 11.1-11.4
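The central-path idea behind the barrier method of Sections 11.1-11.4 can be sketched in one dimension (a toy example of my own, not from the notes): to minimize c·x over 0 ≤ x ≤ 1, minimize the log-barrier centering objective for increasing t and watch the minimizers x*(t) approach the true solution x = 0.

```python
# Centering objective: phi_t(x) = t*c*x - log(x) - log(1 - x).
# Its minimizers x*(t) trace the central path; for c > 0 they approach
# the constrained optimum x = 0 with a gap on the order of 1/t.
c = 2.0

def center(t, x=0.5, iters=50):
    for _ in range(iters):                     # Newton's method in 1D
        grad = t * c - 1 / x + 1 / (1 - x)
        hess = 1 / x ** 2 + 1 / (1 - x) ** 2
        x = min(max(x - grad / hess, 1e-12), 1 - 1e-12)  # stay interior
    return x

path = [center(t) for t in (1.0, 10.0, 100.0, 1000.0)]
assert all(path[i + 1] < path[i] for i in range(3))  # monotone path
assert path[-1] < 0.01                               # close to optimum
```

The book solves each centering step with a damped Newton method in n dimensions; the 1D version above keeps only the essential structure.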