
@(@ \newcommand{\R}[1]{{\rm #1}} @)@@(@ \newcommand{\B}[1]{{\bf #1}} @)@
Minimizing an L1 Objective Using Linear Programming

Syntax
[x_out] = l1_linear(max_itr, A, b, delta)
[x_out, info] = l1_linear(max_itr, A, b, delta)

We use @(@ 1_m \in \B{R}^m @)@ to denote the vector with all elements equal to one. Given a vector @(@ w \in \B{R}^m @)@ we use @(@ D(w) \in \B{R}^{m \times m} @)@ to denote the corresponding diagonal matrix.

We are given a matrix @(@ A \in \B{R}^{m \times n} @)@ with rank @(@ n @)@ and a vector @(@ b \in \B{R}^m @)@. The problem we wish to solve is @[@ \begin{array}{lll} \R{minimize} & \| A x - b \|_1 & \R{w.r.t} \; x \in \B{R}^n \end{array} @]@

First Order Conditions
We reformulate this as the following equivalent linear programming problem @[@ \begin{array}{lll} \R{minimize} & 1_m^\R{T} v_+ + 1_m^\R{T} v_- & \R{w.r.t.} ( v_+ , v_- , x ) \in \B{R}_+^{2 m} \times \B{R}^n \\ \R{subject \; to} & A x - b = v_+ - v_- \end{array} @]@ The first order conditions for a minimum of this problem are equivalent to @(@ F = 0 @)@ where @(@ F : \B{R}^{5m+n} \rightarrow \B{R}^{5m+n} @)@ is defined by @[@ F \left( \begin{array}{c} u_+ , u_- , v_+ , v_- , w , x \end{array} \right) = \left( \begin{array}{c} D ( u_+ ) v_+ \\ D ( u_- ) v_- \\ 1_m - u_+ - w \\ 1_m - u_- + w \\ A x - v_+ + v_- - b \\ A^\R{T} w \end{array} \right) @]@

max_itr
The integer scalar max_itr specifies the maximum number of iterations of the algorithm to execute. It must be greater than or equal to zero. Note that if it is zero, the first row of the info return value is still computed. This can be useful for deciding on a good value for the argument delta.

A
The argument A is an @(@ m \times n @)@ matrix with rank @(@ n @)@ (hence @(@ n \leq m @)@).

b
The argument b is a real column vector of length @(@ m @)@.

delta
The argument delta is a real scalar greater than zero. Convergence after the step at the k-th iteration is defined by info(k+1, 1) <= delta (see First Order Condition below).

x_out
The result x_out contains the approximate solution of the optimization problem at the last iteration; i.e., it corresponds to the value in info(end, 1) described below.

info
If the return value info is not present, the corresponding information is printed on the screen. The result info is a matrix with one row for each iteration of the algorithm; the row info(k+1, :) corresponds to iteration @(@ k @)@. The routine l1_linear has satisfied the convergence condition if and only if
info(end, 1) <= delta

First Order Condition
We use @(@ u_+^k @)@, @(@ u_-^k @)@, @(@ v_+^k @)@, @(@ v_-^k @)@, @(@ w^k @)@, @(@ x^k @)@ to denote the corresponding values at iteration @(@ k @)@ of the algorithm. The value info(k+1, 1) is defined by @[@ info(k+1, 1) = \left. \left\| F \left( \begin{array}{c} u_+^k , u_-^k , v_+^k , v_-^k , w^k , x^k \end{array} \right) \right\| \right/ \sqrt{5 m + n} @]@
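As a concrete check of this definition, the sketch below (Python, with helper names of our own choosing, not taken from l1_linear) evaluates @(@ F @)@ and the scaled residual at a point that satisfies the first order conditions exactly: @(@ b @)@ in the range of @(@ A @)@, with @(@ w = 0 @)@, @(@ u_+ = u_- = 1_m @)@, and @(@ v_+ = v_- = 0 @)@.

```python
import numpy as np

def F(u_p, u_m, v_p, v_m, w, x, A, b):
    """Stack the six blocks of F from the first order conditions."""
    one = np.ones(A.shape[0])
    return np.concatenate([
        u_p * v_p,                # D(u_+) v_+
        u_m * v_m,                # D(u_-) v_-
        one - u_p - w,
        one - u_m + w,
        A @ x - v_p + v_m - b,
        A.T @ w,
    ])

def scaled_residual(u_p, u_m, v_p, v_m, w, x, A, b):
    """The convergence measure defined for info(k+1, 1)."""
    m, n = A.shape
    return np.linalg.norm(F(u_p, u_m, v_p, v_m, w, x, A, b)) / np.sqrt(5 * m + n)

m, n = 4, 2
A = np.array([[1., 0.], [0., 1.], [1., 1.], [1., -1.]])
x = np.array([0.5, -0.25])
b = A @ x  # b in the range of A, so the minimal L1 residual is zero
r = scaled_residual(np.ones(m), np.ones(m), np.zeros(m), np.zeros(m),
                    np.zeros(m), x, A, b)
```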

Relaxation Factor
The value info(k+1, 2) is the relaxation factor used during iteration @(@ k @)@. As this factor decreases to zero, the solution of the relaxed problem approaches a solution of the original problem.

Step Size
The value info(k+1, 3) is the line search step size used during iteration @(@ k @)@. Values near one indicate good convergence. Small values indicate that the relaxation factor is decreasing too quickly (with the exception that info(1, 3) is always zero).

Example
The file linear_ok.m contains an example and test of l1_linear. It returns true if the test passes and false otherwise.

Newton Step
The routine l1_newton_line determines the Newton step used by l1_linear.
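The routine l1_newton_line is documented separately; as an illustration only (Python, not the actual implementation), the sketch below forms the Jacobian of @(@ F @)@ by differentiating each block with respect to @(@ ( u_+ , u_- , v_+ , v_- , w , x ) @)@ and solves for an undamped Newton step. Since the last four blocks of @(@ F @)@ are affine, a full step satisfies them exactly, which the usage at the end checks.

```python
import numpy as np

def F(u_p, u_m, v_p, v_m, w, x, A, b):
    one = np.ones(A.shape[0])
    return np.concatenate([
        u_p * v_p, u_m * v_m,
        one - u_p - w, one - u_m + w,
        A @ x - v_p + v_m - b, A.T @ w,
    ])

def newton_step(u_p, u_m, v_p, v_m, w, x, A, b):
    """Solve J d = -F where J is the Jacobian of F at the current iterate."""
    m, n = A.shape
    I, Z = np.eye(m), np.zeros((m, m))
    Zmn, Znm = np.zeros((m, n)), np.zeros((n, m))
    J = np.block([
        [np.diag(v_p), Z, np.diag(u_p), Z, Z, Zmn],
        [Z, np.diag(v_m), Z, np.diag(u_m), Z, Zmn],
        [-I, Z, Z, Z, -I, Zmn],
        [Z, -I, Z, Z, I, Zmn],
        [Z, Z, -I, I, Z, A],
        [Znm, Znm, Znm, Znm, A.T, np.zeros((n, n))],
    ])
    return np.linalg.solve(J, -F(u_p, u_m, v_p, v_m, w, x, A, b))

# An arbitrary iterate with u, v strictly positive (as interior point methods
# maintain), so that J is nonsingular.
m, n = 3, 2
A = np.array([[1., 0.], [0., 1.], [1., 1.]])
b = np.array([1., -1., 0.5])
u_p = np.full(m, 0.5); u_m = np.full(m, 1.5)
v_p = np.full(m, 1.0); v_m = np.full(m, 2.0)
w = np.full(m, 0.2); x = np.zeros(n)

d = newton_step(u_p, u_m, v_p, v_m, w, x, A, b)
du_p, du_m, dv_p, dv_m, dw, dx = np.split(d, [m, 2*m, 3*m, 4*m, 5*m])
F_new = F(u_p + du_p, u_m + du_m, v_p + dv_p, v_m + dv_m, w + dw, x + dx, A, b)
# The last 3m + n components of F are affine, so they are zero after the step.
```

In the actual algorithm the step would be damped by the line search factor reported in info(k+1, 3) to keep @(@ u @)@ and @(@ v @)@ positive.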
Input File: l1_linear.m