\input{template}
\input{macros}
\usepackage{epsfig}

\begin{document}
\lecture{10}{$Ax \leq b$ as a convex combination of its extreme points}{Hidayath Ansari}

In this lecture, we complete the proof of a theorem stating that every point of the set $S = \lbrace x : Ax \leq {\bf b}\rbrace$ can be expressed as a convex combination of its extreme points. We then prove that a linear function on such a set attains its maximum at an extreme point, and show how this leads to the Simplex algorithm.\\

\begin{Thm}
Let $p_1, p_2, \ldots, p_t$ be the extreme points of the bounded convex set $S = \lbrace x : Ax \leq {\bf b}\rbrace$. Then every point in $S$ can be represented as $\displaystyle\sum_{i=1}^{t}\lambda_i p_i$, where $\displaystyle\sum_{i=1}^{t}\lambda_i = 1$ and $0 \leq \lambda_i \leq 1$ for each $i$.
\end{Thm}
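For a concrete illustration (a small example added here, not part of the lecture itself), take the triangle $S = \lbrace (x_1, x_2) : -x_1 \leq 0,\ -x_2 \leq 0,\ x_1 + x_2 \leq 1 \rbrace$, whose extreme points are $p_1 = (0,0)$, $p_2 = (1,0)$ and $p_3 = (0,1)$. Any $(x_1, x_2) \in S$ can be written as
\[
(x_1, x_2) = (1 - x_1 - x_2)\, p_1 + x_1\, p_2 + x_2\, p_3,
\]
and the three coefficients lie in $[0,1]$ and sum to $1$ precisely because the constraints defining $S$ hold.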

\begin{proof} 
The proof is by induction on the dimension. \\
Consider $p \in S$. Draw the segment from the extreme point $p_1$ through $p$ and extend it until it meets the boundary of $S$ at a point $q$. For this point $q$ we must then have
\begin{align}
A_{1}q & = b_{1} \\ 
A^{''}q & < {\bf b^{''}}
\end{align}
where $A_{1}$ is (without loss of generality) the row of $A$ whose hyperplane contains $q$, and $A^{''}$ consists of the remaining rows. The equality holds because $q$ lies on the boundary: moving any further along the segment would give $A_{1}x > b_{1}$, i.e.\ we would have crossed the hyperplane.
From the equality $A_{1}q = b_{1}$ we solve for one variable, say $x_n$, and substitute for it throughout $A^{''}x \leq {\bf b^{''}}$. This yields a new convex set $S^{'} = \lbrace x : Cx \leq {\bf d}\rbrace$ in \textit{one less dimension}.

By the induction hypothesis, $q$ can be written as a convex combination of the extreme points of this lower-dimensional set $S^{'}$. Hence, for some $\beta \in [0,1]$,
\begin{align}
p & = \beta p_{1} + (1-\beta)q\\
  & = \beta p_{1} + (1-\beta)\displaystyle\sum_{i=1}^{t^{'}} \gamma_{i}q_{i} 
\end{align}
This expresses $p$ in terms of the extreme points $q_1, q_2, \ldots, q_{t^{'}}$ of $S^{'}$, not of $S$.
We now show that the extreme points of $S^{'}$ are also extreme points of $S$. Suppose not: let $p^{'}$ be an extreme point of $S^{'}$ but not of $S$. Then there exist distinct $p_{1}^{'}, p_{2}^{'} \in S$ and $0 < \lambda < 1$ such that $p^{'} = \lambda p_{1}^{'} + (1-\lambda)p_{2}^{'}$. By construction of points in $S^{'}$,
\begin{align}
 b_1 & = A_1 p^{'}\\
     & = \lambda A_1 p_{1}^{'} + (1-\lambda)A_1 p_{2}^{'}
\end{align}
But since $A_1 p_{1}^{'} \leq b_1$ and $A_1 p_{2}^{'} \leq b_1$ (both $p_{1}^{'}$ and $p_{2}^{'}$ are in $S$), equality (6) can hold only if both inequalities hold with equality.
Therefore $p_{1}^{'}$ and $p_{2}^{'}$ must also lie in $S^{'}$, and $p^{'}$ cannot be extreme in $S^{'}$, being a proper convex combination of two distinct points of that set, a contradiction.

For the base case, take the dimension to be $0$: there $S$ is a single point, which is trivially a convex combination of itself. This completes the induction.
\end{proof}
The following theorem will put the last step in place to construct an algorithm for solving LP problems.
\begin{Thm}
A linear function on the bounded set $S = \lbrace x : Ax \leq {\bf b}\rbrace $ attains its maximum at an extreme point.
\end{Thm}
\begin{proof}
Let a linear function $f$ attain its maximum at a point $p$, and write $p = \displaystyle\sum_{i=1}^{t} \lambda_{i}p_{i}$ (such a representation exists by the previous theorem). By linearity, $f(p) = \displaystyle\sum_{i=1}^{t} \lambda_{i} f(p_{i})$.
If every $f(p_{i})$ were strictly less than $f(p)$, this convex combination would also be strictly less than $f(p)$. Therefore $f(p_i) = f(p)$ for at least one $i$, and the maximum is attained at the extreme point $p_i$.
\end{proof}

With this theorem in hand, we have a finite algorithm at our disposal. An extreme point is the intersection of $n$ linearly independent hyperplanes, so we need only pick each combination of $n$ rows from $A$ ($m \choose n$ in number), solve $A^{'}x_0 = {\textbf{b'}}$ using Gaussian elimination, {\textbf{verify}} that the solution indeed satisfies all the other inequalities, and then calculate $c^{T}x_0$.

The verification step is important, as the $n$ hyperplanes we choose may intersect at an infeasible point. An example in 2-D is shown in Figure \ref{figure:lecture10afig1b}.

\ffigure{lecture10afig1b}{height=2.5in}{Why we need to verify}{figure:lecture10afig1b}
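The enumeration above fits in a few lines of code. The following is a hypothetical 2-D instance (the triangle constraints, the cost vector, and the helper `solve2` are illustrative choices, not from the lecture); since $n = 2$, each subsystem is solved by Cramer's rule in place of full Gaussian elimination:

```python
from itertools import combinations

# Constraints Ax <= b for the triangle x1 >= 0, x2 >= 0, x1 + x2 <= 1
# (hypothetical example data, not from the lecture).
A = [(-1.0, 0.0), (0.0, -1.0), (1.0, 1.0)]
b = [0.0, 0.0, 1.0]
c = (2.0, 3.0)          # maximize c^T x

def solve2(rows, rhs):
    """Solve the 2x2 system given by two rows of A; None if singular."""
    (a11, a12), (a21, a22) = rows
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:          # rows not linearly independent
        return None
    b1, b2 = rhs
    return ((b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det)

best, best_x = None, None
for i, j in combinations(range(len(A)), 2):   # all (m choose n) row pairs
    x = solve2((A[i], A[j]), (b[i], b[j]))
    if x is None:
        continue
    # verify: the candidate must satisfy every remaining inequality
    if all(ai[0] * x[0] + ai[1] * x[1] <= bi + 1e-9 for ai, bi in zip(A, b)):
        val = c[0] * x[0] + c[1] * x[1]
        if best is None or val > best:
            best, best_x = val, x

print(best_x, best)   # optimum at the extreme point (0, 1), value 3
```

On this instance the three candidate intersections are the triangle's vertices, all feasible; in general some pairs of rows would fail the verification step, which is exactly why it cannot be skipped.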

A rather simple formulation of the algorithm could then be:\\

\begin{program}
	\> Start at an extreme point.\\
	\> While a neighbour of higher cost exists, move to it.
\end{program}
Intuitively, this works: by a previous result, a local maximum in such a problem is also a global maximum. A more formal description of the Simplex algorithm and a proof of its correctness appear in subsequent lectures.\\
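To make the loop concrete, here is a hypothetical sketch on a tiny instance; the vertex set and adjacency are hard-coded illustrations (a real implementation would derive neighbours from the constraint structure rather than store them explicitly):

```python
# Local-search sketch (hypothetical example): vertices of the triangle
# x1 >= 0, x2 >= 0, x1 + x2 <= 1, with neighbours listed explicitly.
vertices = {(0.0, 0.0): [(1.0, 0.0), (0.0, 1.0)],
            (1.0, 0.0): [(0.0, 0.0), (0.0, 1.0)],
            (0.0, 1.0): [(0.0, 0.0), (1.0, 0.0)]}
c = (2.0, 3.0)

def f(v):
    """The linear objective c^T v."""
    return c[0] * v[0] + c[1] * v[1]

x = (0.0, 0.0)                      # start at some extreme point
while True:
    better = [n for n in vertices[x] if f(n) > f(x)]
    if not better:                  # local maximum = global maximum
        break
    x = max(better, key=f)          # move to a neighbour of higher cost

print(x, f(x))  # prints (0.0, 1.0) 3.0
```

The termination condition relies precisely on the local-equals-global property noted above.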
Questions raised at this juncture are:
\begin{enumerate}
\item How do we start the process? It would defeat the purpose to compute all extreme points just to pick a starting one.
\item How do we move to a neighbour?
\item Why are we guaranteed that the optimum is attained when we stop?
\end{enumerate}
\end{document}
