Perfect observations in a reduced-order 3D-Var/OI analysis
Introduction: 3D-Var/OI analysis
The linearised 3D-Var expression is an OI-type analysis. The analysis, $x^a$, is a correction of the background field, $x^b$:

$$x^a = x^b + \delta x$$

where all vectors have length $n$. The number of observations is $p$, with:

$$p < n$$

The analysis increment, $\delta x$, is a linear function of the innovation, $d$, a vector of length $p$:

$$\delta x = K d$$
where $K$ is the gain matrix, of order $n \times p$, and

$$d = y^o - \mathcal{H}(x^b)$$

Here $y^o$ is the observation vector (of length $p$) and $\mathcal{H}(x^b)$ is the background estimate of the observations, obtained by applying the observation operator $\mathcal{H}$ to the background, $x^b$.
The gain matrix has two equivalent expressions:

$$K = B H^T \left( H B H^T + R \right)^{-1}$$

$$K = \left( B^{-1} + H^T R^{-1} H \right)^{-1} H^T R^{-1}$$

where $B$ is the $n \times n$ background error covariance matrix, $R$ is the $p \times p$ observation error covariance matrix, and $H$, of order $p \times n$, is the Jacobian matrix (evaluated in the background $x^b$) of the function $\mathcal{H}$ (which may be non-linear).
That the two expressions are equivalent is easily seen. In fact the relation:

$$B H^T \left( H B H^T + R \right)^{-1} = \left( B^{-1} + H^T R^{-1} H \right)^{-1} H^T R^{-1}$$

is equivalent to:

$$\left( B^{-1} + H^T R^{-1} H \right) B H^T = H^T R^{-1} \left( H B H^T + R \right)$$

and both sides expand to $H^T + H^T R^{-1} H B H^T$: an identity.
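The equivalence can also be checked numerically. The following NumPy sketch is not part of the original note: the dimensions, the random covariance matrices and the variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3  # state and observation dimensions (assumed, with p < n)

# Symmetric positive-definite B (n x n) and R (p x p)
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)
C = rng.standard_normal((p, p))
R = C @ C.T + p * np.eye(p)

H = rng.standard_normal((p, n))  # Jacobian of the observation operator

# First expression: K = B H^T (H B H^T + R)^(-1)
K1 = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)

# Second expression: K = (B^(-1) + H^T R^(-1) H)^(-1) H^T R^(-1)
K2 = (np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)
      @ H.T @ np.linalg.inv(R))

print(np.allclose(K1, K2))  # True: the two gain expressions coincide
```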
Now, in order to assume that observations are perfect, only the first of the two expressions can be used. By setting $R = 0$, it becomes:

$$K = B H^T \left( H B H^T \right)^{-1}$$
Remark that $H B H^T$ has order $p \times p$ with $p < n$, so it can be invertible (though this is not guaranteed: for example, it is not invertible when $H$ has two equal rows).
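The remark is easy to verify numerically; in this sketch (same assumed toy setup as above) duplicating one row of $H$ makes $H B H^T$ singular:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 8, 3
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)      # SPD background error covariance

H = rng.standard_normal((p, n))  # generic H: full row rank
S = H @ B @ H.T                  # p x p, invertible here
K_perfect = B @ H.T @ np.linalg.inv(S)  # perfect-observation gain

H_bad = H.copy()
H_bad[1] = H_bad[0]              # two equal rows
S_bad = H_bad @ B @ H_bad.T
print(np.linalg.matrix_rank(S), np.linalg.matrix_rank(S_bad))  # 3 2
```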
In the perspective of the “perfect observation” assumption, in order to look more deeply into the two expressions, it is useful to define a couple of scalars:

$$b^2 = \frac{\mathrm{tr}(B)}{n}, \qquad r^2 = \frac{\mathrm{tr}(R)}{p}$$

where $\mathrm{tr}$ indicates the trace of the matrix. Then, define the following “tilde” matrices so that:

$$B = b^2 \tilde{B}, \qquad R = r^2 \tilde{R}$$

The two gain matrix expressions become:

$$K = \tilde{B} H^T \left( H \tilde{B} H^T + \frac{r^2}{b^2} \tilde{R} \right)^{-1}$$

$$K = \left( \frac{r^2}{b^2} \tilde{B}^{-1} + H^T \tilde{R}^{-1} H \right)^{-1} H^T \tilde{R}^{-1}$$
Consider now the observations to be perfect with respect to the background field. This is obtained by assuming that:

$$\frac{r^2}{b^2} \ll 1$$

By neglecting $\frac{r^2}{b^2} \tilde{R}$, the first expression readily becomes, as above:

$$K = \tilde{B} H^T \left( H \tilde{B} H^T \right)^{-1} = B H^T \left( H B H^T \right)^{-1}$$
In the second expression, though, neglecting $\frac{r^2}{b^2} \tilde{B}^{-1}$ gives:

$$K = \left( H^T \tilde{R}^{-1} H \right)^{-1} H^T \tilde{R}^{-1}$$

where $\tilde{R}^{-1}$ has order $p \times p$, but the matrix $H^T \tilde{R}^{-1} H$ has order $n \times n$. So it must be rank-deficient, i.e. non-invertible, because $p < n$. The second expression cannot be used for perfect observations.
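A rank check makes the obstruction concrete (again a sketch with assumed toy dimensions): the $n \times n$ matrix $H^T \tilde{R}^{-1} H$ has rank at most $p$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 8, 3
H = rng.standard_normal((p, n))
R_tilde_inv = np.diag(1.0 / rng.uniform(0.5, 2.0, p))  # diagonal for simplicity

M = H.T @ R_tilde_inv @ H  # n x n, but rank <= p
print(M.shape, np.linalg.matrix_rank(M))  # (8, 8) 3: singular, since 3 < 8
```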
In the general expression, the analysis is obtained as a linear combination of $p$ vectors of length $n$: the $p$ columns of the $n \times p$ matrix $K$.
A reduced-order analysis is obtained as a combination of $m$ vectors, with $m < p$, collected in the $m$ columns of the $n \times m$ matrix $E$. This is done when the matrix $B$ can be approximated as:

$$B \simeq E \Gamma E^T$$

where the columns of $E$ are supposed to have been normalised, so that the magnitude of the background error is carried by the $m \times m$ matrix $\Gamma$, the “background error covariance matrix” in the subspace spanned by the columns of $E$. Again, two equivalent expressions are obtained for the gain matrix:

$$K = E \Gamma (HE)^T \left[ (HE) \Gamma (HE)^T + R \right]^{-1}$$

$$K = E \left[ \Gamma^{-1} + (HE)^T R^{-1} (HE) \right]^{-1} (HE)^T R^{-1}$$
Note that the matrix $E$ (the $m$-dimensional basis) appears in both expressions on the left. That the two expressions are the same can be seen in a way similar to what was shown above. In fact, the equivalence reduces to the relation:

$$\left[ \Gamma^{-1} + (HE)^T R^{-1} (HE) \right] \Gamma (HE)^T = (HE)^T R^{-1} \left[ (HE) \Gamma (HE)^T + R \right]$$

which again is an identity: both sides expand to $(HE)^T + (HE)^T R^{-1} (HE) \Gamma (HE)^T$.
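As before, the equivalence can be checked numerically. In this sketch the toy dimensions and the random orthonormal basis $E$ are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, m = 8, 5, 2  # state, observation and reduced dimensions (m < p < n)

E, _ = np.linalg.qr(rng.standard_normal((n, m)))  # n x m basis, normalised columns
G = np.diag(rng.uniform(1.0, 3.0, m))             # Gamma: covariance in the subspace
C = rng.standard_normal((p, p))
R = C @ C.T + p * np.eye(p)
H = rng.standard_normal((p, n))
HE = H @ E                                        # p x m

# First reduced-order expression
K1 = E @ G @ HE.T @ np.linalg.inv(HE @ G @ HE.T + R)
# Second reduced-order expression
K2 = (E @ np.linalg.inv(np.linalg.inv(G) + HE.T @ np.linalg.inv(R) @ HE)
      @ HE.T @ np.linalg.inv(R))

print(np.allclose(K1, K2))  # True
```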
Now use the scalar $\gamma^2$, defined as:

$$\gamma^2 = \frac{\mathrm{tr}(\Gamma)}{m}$$

so that:

$$\Gamma = \gamma^2 \tilde{\Gamma}$$

The two expressions become:

$$K = E \tilde{\Gamma} (HE)^T \left[ (HE) \tilde{\Gamma} (HE)^T + \frac{r^2}{\gamma^2} \tilde{R} \right]^{-1}$$

$$K = E \left[ \frac{r^2}{\gamma^2} \tilde{\Gamma}^{-1} + (HE)^T \tilde{R}^{-1} (HE) \right]^{-1} (HE)^T \tilde{R}^{-1}$$
To consider observations that are perfect with respect to the background field, assume:

$$\frac{r^2}{\gamma^2} \ll 1$$
Since the matrix $(HE)^T \tilde{R}^{-1} (HE)$ has order $m \times m$ with $m < p$, to neglect $\frac{r^2}{\gamma^2} \tilde{\Gamma}^{-1}$ in the second expression is not a problem:

$$K = E \left[ (HE)^T \tilde{R}^{-1} (HE) \right]^{-1} (HE)^T \tilde{R}^{-1}$$
In the first expression, though, the matrix $(HE) \tilde{\Gamma} (HE)^T$ has order $p \times p$, and it is rank-deficient, hence not invertible, because $HE$ has order $p \times m$, with $m < p$.
In conclusion, in the reduced-order analysis it is the second expression that has to be used when the case of “perfect” observations is considered.
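A sketch of the perfect-observation limit in the reduced-order case (toy dimensions assumed as above): the second expression yields a well-defined gain, while the $p \times p$ matrix of the first expression is singular:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, m = 8, 5, 2

E, _ = np.linalg.qr(rng.standard_normal((n, m)))
G = np.diag(rng.uniform(1.0, 3.0, m))
H = rng.standard_normal((p, n))
HE = H @ E
R_tilde = np.diag(rng.uniform(0.5, 2.0, p))  # normalised observation error covariance

# Second expression with r^2/gamma^2 -> 0: only an m x m inverse is needed
Rti = np.linalg.inv(R_tilde)
K = E @ np.linalg.inv(HE.T @ Rti @ HE) @ HE.T @ Rti
print(K.shape)  # (8, 5)

# The first expression would need the inverse of (HE) Gamma (HE)^T, which is singular:
print(np.linalg.matrix_rank(HE @ G @ HE.T))  # 2 < 5
```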
Consider now the particular case in which:
- the matrix $E$ has a single column, the vector $e$ (of length $n$);
- the matrix $\Gamma$ is reduced to a scalar, $\gamma^2$;
- the matrix $HE$ becomes a vector $He$ of length $p$, with components $h_k$; $(He)^T$ is then a row vector;
- let $d_k$ be the components of the innovation vector (of length $p$);
- assume $R$, which has order $p \times p$, to be diagonal, with diagonal elements $r_k^2$.
The analysis increment, obtained by applying the gain matrix to the innovation, is then a multiple of the vector $e$:

$$\delta x = e\, \frac{\sum_{k=1}^{p} h_k d_k / r_k^2}{\gamma^{-2} + \sum_{k=1}^{p} h_k^2 / r_k^2}$$
Its expression for “perfect” observations is obtained by defining:

$$r^2 = \frac{\mathrm{tr}(R)}{p} = \frac{1}{p} \sum_{k=1}^{p} r_k^2$$

Then $R = r^2 \tilde{R}$, where $\tilde{R}$ also is diagonal, with elements $\tilde{r}_k^2 = r_k^2 / r^2$. The analysis increment is:

$$\delta x = e\, \frac{\sum_{k=1}^{p} h_k d_k / \tilde{r}_k^2}{r^2/\gamma^2 + \sum_{k=1}^{p} h_k^2 / \tilde{r}_k^2}$$
Now neglect $r^2/\gamma^2$ to obtain the analysis increment in the perfect observations case:

$$\delta x = e\, \frac{\sum_{k=1}^{p} h_k d_k / \tilde{r}_k^2}{\sum_{k=1}^{p} h_k^2 / \tilde{r}_k^2}$$
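A last sketch (with assumed toy numbers) computes the single-mode increment coefficient and shows it approaching the perfect-observation limit as $r^2/\gamma^2 \to 0$:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 8, 5

e = rng.standard_normal(n)
e /= np.linalg.norm(e)           # single normalised mode
h = rng.standard_normal(p)       # components h_k of He
d = rng.standard_normal(p)       # innovation components d_k
r2_k = rng.uniform(0.5, 2.0, p)  # diagonal elements of R
r2 = r2_k.mean()                 # r^2 = tr(R)/p
rt2_k = r2_k / r2                # diagonal elements of R-tilde

def coeff(gamma2):
    """Scalar multiplying e in the analysis increment."""
    return np.sum(h * d / rt2_k) / (r2 / gamma2 + np.sum(h**2 / rt2_k))

c_perfect = np.sum(h * d / rt2_k) / np.sum(h**2 / rt2_k)  # perfect-obs limit
for gamma2 in (1.0, 10.0, 1000.0):
    print(gamma2, coeff(gamma2), c_perfect)  # coeff -> c_perfect as gamma2 grows

dx = c_perfect * e  # analysis increment: a multiple of e
```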

Francesco Uboldi, 2014–2017