Adjoint, transpose and scalar product
Suppose that a scalar product is defined in the tangent space. If $G_0$ is a symmetric, positive definite matrix, it defines a scalar product as:

$$\langle u, v \rangle_0 = u^T G_0 v$$

where $u$ and $v$ are generic (tangent) vectors (at time 0) and $u^T$ is a row vector, the transpose of the column vector $u$. Remark that if the state vector components (hence the tangent vector components) have physical dimensions, the components of $G_0$ must have inverse square dimensions to make the sum of squares possible (the scalar product is then dimensionless).
At time 0 and at time $t$ the scalar product may be different. So, a different matrix $G_t$ is used:

$$\langle w, z \rangle_t = w^T G_t z$$

Here $w$ and $z$ are generic tangent vectors at time $t$.
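As a concrete illustration, here is a minimal numerical sketch of such a weighted scalar product. The matrix and the vectors are invented for the example; any symmetric positive definite matrix would do, with entries carrying the inverse square of the physical dimensions of the state components.

```python
import numpy as np

# Hypothetical 3-variable state; G0 is any symmetric positive definite matrix.
# Its entries have inverse square dimensions, so that u^T G0 v is dimensionless.
G0 = np.diag([1.0e-4, 2.5e-1, 4.0])   # example: inverse-variance-like weights

def scalar_product(u, v, G):
    """Weighted scalar product <u, v> = u^T G v."""
    return u @ G @ v

u = np.array([10.0, -2.0, 0.5])
v = np.array([-3.0,  1.0, 0.2])

print(scalar_product(u, v, G0))          # <u, v>_0
print(scalar_product(v, u, G0))          # symmetric: same value
print(scalar_product(u, u, G0) > 0)      # positive definite: True for u != 0
```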
Let $L$ be a linear operator, like the tangent linear model, applied to tangent vectors at time 0 and giving tangent vectors at time $t$. By definition its adjoint operator is $L^*$, such that:

$$\langle w, L u \rangle_t = \langle L^* w, u \rangle_0$$

By using the $G$ matrices, this is written as:

$$w^T G_t L u = (L^* w)^T G_0 u$$

This is the same as:

$$w^T G_t L u = w^T (L^*)^T G_0 u$$

Since this relation is valid for generic vectors $w$ and $u$, it really is a relation between matrices:

$$G_t L = (L^*)^T G_0$$

Now take the transpose (recall that $G_0$ and $G_t$ are symmetric):

$$L^T G_t = G_0 L^*$$

The expression of the adjoint is obtained:

$$L^* = G_0^{-1} L^T G_t$$
From this expression one sees that:
- the adjoint intrinsically depends on the scalar products at initial and final time;
- the adjoint is related to the transpose;
- the adjoint coincides with the transpose only when both $G$ matrices coincide with the identity matrix, $G_0 = G_t = I$ (in this case both scalar products are Euclidean, or L2); remark that this is possible only if all the state variables have the same physical dimensions.
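These relations can be checked numerically. The sketch below uses random placeholder matrices (not a real tangent linear model, just stand-ins for the example) to verify that $L^* = G_0^{-1} L^T G_t$ satisfies the adjoint identity for arbitrary vectors, and that the adjoint reduces to the transpose when both scalar products are Euclidean.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random stand-ins: L plays the role of a tangent linear model,
# G0 and Gt are symmetric positive definite (built as A A^T + n I).
L  = rng.standard_normal((n, n))
A0 = rng.standard_normal((n, n)); G0 = A0 @ A0.T + n * np.eye(n)
At = rng.standard_normal((n, n)); Gt = At @ At.T + n * np.eye(n)

# Adjoint with respect to the two scalar products: L* = G0^{-1} L^T Gt
Lstar = np.linalg.solve(G0, L.T @ Gt)

u = rng.standard_normal(n)   # generic tangent vector at time 0
w = rng.standard_normal(n)   # generic tangent vector at time t

lhs = w @ Gt @ (L @ u)          # <w, L u>_t
rhs = (Lstar @ w) @ G0 @ u      # <L* w, u>_0
print(np.isclose(lhs, rhs))     # True

# With Euclidean scalar products (G0 = Gt = I) the adjoint is the transpose.
I = np.eye(n)
Lstar_euclid = np.linalg.solve(I, L.T @ I)
print(np.allclose(Lstar_euclid, L.T))   # True
```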
Remark that the tangent linear operator is applied to tangent vectors at time 0 and gives tangent vectors at time $t$. Tangent vectors are state variations, or differentials, and are approximated by finite state differences, so they are indicated with $\delta x$:

$$\delta x_t = L \, \delta x_0$$

The transpose is applied to derivatives at time $t$ and gives derivatives at time 0; in this sense it goes backward in time:

$$\frac{\partial J}{\partial x_0} = L^T \frac{\partial J}{\partial x_t}$$

where $J$ is a generic function of the state at time $t$.
The adjoint is defined on tangent vectors, not on derivatives (like the tangent linear model), and goes backward in time (like the transpose). A transformation between tangent vectors and “derivatives”, which at least accounts for physical dimensions, is then provided by the $G$ matrices in the above expression relating adjoint and transpose.
Since what is really needed is the transpose (to compute derivatives with respect to initial conditions), instead of a scalar product one may define a duality form. Tangent vectors and derivatives belong to spaces that are dual to each other and, at time $t$ and at time 0:

$$\left( \frac{\partial J}{\partial x_t} \right)^T \delta x_t \qquad \text{and} \qquad \left( \frac{\partial J}{\partial x_0} \right)^T \delta x_0$$

These expressions are the same: both coincide with the first variation $\delta J$ of $J$. In the duality form, the transpose behaves like the adjoint does in a scalar product:

$$\left\langle \frac{\partial J}{\partial x_t}, \, L \, \delta x_0 \right\rangle = \left\langle L^T \frac{\partial J}{\partial x_t}, \, \delta x_0 \right\rangle$$

That is to say:

$$\left( \frac{\partial J}{\partial x_t} \right)^T L \, \delta x_0 = \left( L^T \frac{\partial J}{\partial x_t} \right)^T \delta x_0$$

The duality form (which here appears as a simple product of one row by one column) does not depend on the definition of scalar products (the $G$ matrices) and it does not depend on time.
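A short check of this point, again with random placeholder matrices and vectors rather than a real model: the row-by-column duality product of the gradient at time $t$ with the forward-propagated tangent vector equals that of the backward-propagated gradient with the tangent vector at time 0, both give the same first variation $\delta J$, and no $G$ matrix enters anywhere.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

L      = rng.standard_normal((n, n))   # stand-in tangent linear model
dx0    = rng.standard_normal(n)        # tangent vector at time 0
dJ_dxt = rng.standard_normal(n)        # gradient of some J at time t

dxt    = L @ dx0                       # tangent vector propagated forward
dJ_dx0 = L.T @ dJ_dxt                  # gradient propagated backward

# Duality form: a row times a column, no G matrix involved.
dJ_at_t = dJ_dxt @ dxt                 # (dJ/dx_t)^T dx_t
dJ_at_0 = dJ_dx0 @ dx0                 # (dJ/dx_0)^T dx_0
print(np.isclose(dJ_at_t, dJ_at_0))    # True: same first variation dJ
```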

Francesco Uboldi 2014,2015,2016,2017