
$\newcommand{\B}[1]{{\bf #1}} \newcommand{\R}[1]{{\rm #1}}$
Reverse Mode: Derivative in One Range Direction

Syntax
dw = f.reverse(p, w)

Purpose
Reverse mode computes the derivative of the forward mode Taylor coefficients with respect to the domain variable $x$.

x_k
For $k = 0 , \ldots , p-1$, we use $x^{(k)}$ to denote the value of x_k in the most recent call to f.forward(k, x_k). We use $F : \B{R}^n \rightarrow \B{R}^m$ to denote the function corresponding to the adfun object f .

X(t, u)
We define the function $X : \B{R} \times \B{R}^n \rightarrow \B{R}^n$ by $$X(t, u) = u + x^{(0)} + x^{(1)} * t + \cdots + x^{(p-1)} * t^{p-1}$$ Note that for $k = 0 , \ldots , p - 1$, $$x^{(k)} = \frac{1}{k !} \frac{\partial^k}{\partial t^k} X(0, 0)$$

W(t, u)
The function $W : \B{R} \times \B{R}^n \rightarrow \B{R}$ is defined by $$W(t, u) = w_0 * F_0 [ X(t, u) ] + \cdots + w_{m-1} * F_{m-1} [ X(t, u) ]$$ We define the function $W_k : \B{R}^n \rightarrow \B{R}$ by $$W_k ( u ) = \frac{1}{k !} \frac{\partial^k}{\partial t^k} W(0, u)$$ It follows that $$W(t, u ) = W_0 ( u ) + W_1 ( u ) * t + \cdots + W_{p-1} (u) * t^{p-1} + o( t^{p-1} )$$ where $o( t^{p-1} ) / t^{p-1} \rightarrow 0$ as $t \rightarrow 0$.
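The definitions of $X(t, u)$ and $W(t, u)$ can be checked numerically in a small concrete case. The sketch below uses plain numpy (not pycppad); the choice $n = m = 1$, $F(x) = x^2$, $w_0 = 1$, $p = 2$, $x^{(0)} = 3$, $x^{(1)} = 1$ is an assumption made purely for illustration:

```python
import numpy as np

# Hypothetical concrete case (not from the text):
# n = m = 1, F(x) = x**2, w = [1], p = 2, x^(0) = 3, x^(1) = 1
x0, x1 = 3.0, 1.0

def X(t, u):
    # X(t, u) = u + x^(0) + x^(1) * t   (orders 0 and 1, since p = 2)
    return u + x0 + x1 * t

def W(t, u):
    # W(t, u) = w_0 * F_0[ X(t, u) ] with F_0(x) = x**2 and w_0 = 1
    return X(t, u) ** 2

# Expanding W(t, u) = (u + x0 + x1*t)**2 in powers of t gives the
# Taylor coefficients W_0(u) and W_1(u) in closed form:
def W0(u):
    return (u + x0) ** 2

def W1(u):
    return 2.0 * (u + x0) * x1

# Check W(t, u) = W0(u) + W1(u)*t + o(t) for a small t
u, t = 0.5, 1e-6
assert abs(W(t, u) - (W0(u) + W1(u) * t)) < 1e-10
```

Here the leftover term is exactly $x^{(1) 2} t^2$, which is the $o(t^{p-1})$ remainder in the expansion above.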

f
The object f must be an adfun object. We use level to denote the AD level of this object.

p
The argument p is a non-negative int. It specifies the order of the Taylor coefficient $W_{p-1} ( u )$ that is differentiated. Note that $W_{p-1} (u)$ corresponds to a derivative of order $p-1$ of $F(x)$, so the derivative of $W_{p-1} (u)$ corresponds to a derivative of order $p$ of $F(x)$.

w
The argument w is a numpy.array with one dimension (i.e., a vector) with length equal to the range size m for the function f . It specifies the weighting vector $w$ used in the definition of $W(t, u)$. If the AD level for f is zero, all the elements of w must be instances of int or float. If the AD level for f is one, all the elements of w must be a_float objects.

dw
The return value dw is a numpy.array with one dimension (i.e., a vector) with length equal to the domain size n for the function f . It is set to the derivative $$\begin{array}{rcl} dw & = & W_{p-1}^{(1)} ( 0 ) \\ & = & \partial_u \frac{1}{(p-1) !} \frac{\partial^{p-1}}{\partial t^{p-1}} W(0, 0) \end{array}$$ If the AD level for f is zero, all the elements of dw will be instances of float. If the AD level for f is one, all the elements of dw will be a_float objects.

First Order
In the case where $p = 1$, we have $$\begin{array}{rcl} dw & = & \partial_u \frac{1}{0 !} \frac{\partial^0}{\partial t^0} W(0, 0) \\ & = & \partial_u W(0, 0) \\ & = & \partial_u \left[ w_0 * F_0 ( u + x^{(0)} ) + \cdots + w_{m-1} F_{m-1} ( u + x^{(0)} ) \right]_{u = 0} \\ & = & w_0 * F_0^{(1)} ( x^{(0)} ) + \cdots + w_{m-1} * F_{m-1}^{(1)} ( x^{(0)} ) \end{array}$$
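The first order formula says that dw is the weighted sum of the gradients $F_i^{(1)} ( x^{(0)} )$; i.e., $w^\R{T}$ times the Jacobian of $F$. The sketch below verifies this with plain numpy for a hypothetical case (the particular $F$, $x^{(0)}$, and $w$ are assumptions for illustration, not from the text):

```python
import numpy as np

# Hypothetical case (not from the text): n = 2, m = 2,
#   F_0(x) = x[0]*x[1],   F_1(x) = x[0] + 2*x[1]
x0 = np.array([3.0, 4.0])   # zero order Taylor coefficient x^(0)
w  = np.array([5.0, 7.0])   # weighting vector

# Analytic gradients F_i^(1)( x^(0) )
dF0 = np.array([x0[1], x0[0]])   # gradient of x[0]*x[1] is (x[1], x[0])
dF1 = np.array([1.0, 2.0])       # gradient of x[0] + 2*x[1]

# First order reverse:  dw = w_0 * F_0^(1)(x^(0)) + w_1 * F_1^(1)(x^(0))
dw = w[0] * dF0 + w[1] * dF1
assert np.allclose(dw, [27.0, 29.0])
```

In pycppad, the same value would come from f.forward(0, x0) followed by f.reverse(1, w); the point of the sketch is only to make the formula concrete.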

Second Order
In the case where $p = 2$, we have $$\begin{array}{rcl} dw & = & \partial_u \frac{1}{1 !} \frac{\partial^1}{\partial t^1} W (0, 0) \\ & = & \partial_u \left[ w_0 * F_0^{(1)} ( u + x^{(0)} ) * x^{(1)} + \cdots + w_{m-1} * F_{m-1}^{(1)} ( u + x^{(0)} ) * x^{(1)} \right]_{u = 0} \\ & = & w_0 * ( x^{(1)} )^\R{T} * F_0^{(2)} ( x^{(0)} ) + \cdots + w_{m-1} * ( x^{(1)} )^\R{T} * F_{m-1}^{(2)} ( x^{(0)} ) \end{array}$$
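The second order formula says that dw is the weighted sum of the row vectors $( x^{(1)} )^\R{T} F_i^{(2)} ( x^{(0)} )$; i.e., the direction $x^{(1)}$ applied to each range component's Hessian. The sketch below verifies this with plain numpy for a hypothetical case (the particular Hessians, $x^{(1)}$, and $w$ are assumptions for illustration, not from the text):

```python
import numpy as np

# Hypothetical case (not from the text): n = 2, m = 2, with
#   F_0(x) = x[0]*x[1]  whose Hessian is [[0, 1], [1, 0]]
#   F_1(x) = x[0]**2    whose Hessian is [[2, 0], [0, 0]]
# (both Hessians are constant, so x^(0) does not matter here)
H0 = np.array([[0.0, 1.0], [1.0, 0.0]])
H1 = np.array([[2.0, 0.0], [0.0, 0.0]])

x1 = np.array([1.0, 2.0])   # first order Taylor coefficient x^(1)
w  = np.array([3.0, 5.0])   # weighting vector

# Second order reverse:
#   dw = w_0 * (x^(1))^T F_0^(2)(x^(0)) + w_1 * (x^(1))^T F_1^(2)(x^(0))
dw = w[0] * (x1 @ H0) + w[1] * (x1 @ H1)
assert np.allclose(dw, [16.0, 3.0])
```

In pycppad, the same value would come from f.forward(0, x0), f.forward(1, x1), and then f.reverse(2, w).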

Example
reverse_1.py : Reverse Order One: Example and Test
reverse_2.py : Reverse Order Two: Example and Test