Hello all,

I’ve read a lot of the tutorials and forum posts related to this subject, but unfortunately I am still having trouble figuring out how to include a measurement/observation noise covariance in a sequential filter (for instance, KalmanEstimator (OREKIT 12.0 API)). Can anyone provide a reference or brief example of how to provide measurement/observation noise to the estimator? Thank you,

Erin

Hi @erinf,

Measurement/observation noise covariance is added directly to the measurements themselves via the `double[]` array `theoreticalStandardDeviation` in the `ObservedMeasurement` class.

These parameters are the \sigma_i of your measurement components; the covariance matrix will then be diagonal, with P_{ii} = \sigma_i².
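To make that concrete, here is a plain-Java sketch of forming the diagonal covariance from the per-component \sigma_i (this is illustrative only, not Orekit's internal code; the class and method names are made up):

```java
// Illustrative helper: build the diagonal measurement covariance P
// with P[i][i] = sigma_i^2 from the theoretical standard deviations.
public class DiagonalCovariance {
    public static double[][] fromSigmas(double[] sigmas) {
        double[][] p = new double[sigmas.length][sigmas.length];
        for (int i = 0; i < sigmas.length; i++) {
            p[i][i] = sigmas[i] * sigmas[i]; // off-diagonal terms stay zero
        }
        return p;
    }
}
```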

There are only two types of measurements in Orekit for which you can define a non-diagonal covariance:

- PV, for Position/Velocity, i.e. GNSS measurements
- Position, same as above but without the velocity

In these classes, you will find constructors that take a covariance matrix as input.

Note that if you’d like to have these covariances for angular measurements it could be added to Orekit.

I think it wasn’t done so far because it is “uncommon” to have angular measurements given with a full 2x2 covariance matrix; generally you only get the diagonal \sigma_i.

Then in Orekit, these matrices are used in KalmanEstimatorUtil.decorate method. You’ll see there is a special treatment for the two measurements mentioned above.

In the method, we use the matrix of correlation coefficients rather than the physical covariance matrix, because everything is normalized to avoid numerical errors.

Finally, the `MeasurementDecorator` object created by `KalmanEstimatorUtil.decorate` is used in the Hipparchus `AbstractKalmanFilter` to compute the innovation covariance matrix:

S = H.P_{pred}.H^T + R

Where:

- H is the measurement matrix
- P_{pred} is the predicted state covariance matrix
- R is the measurement noise covariance matrix you’re looking for

Afterwards the innovation covariance matrix S is used to compute the Kalman gain and the estimation of the state and state covariance.
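For a one-dimensional measurement such as a range, the innovation covariance above reduces to a scalar. A minimal plain-Java sketch of that computation (independent of Hipparchus; names are illustrative), with H a 1x2 row vector, P the 2x2 predicted covariance, and R = [\sigma²]:

```java
// Illustrative computation of S = H * P * H^T + R for a 1-D measurement.
public class InnovationCovariance {
    public static double scalarInnovation(double[] h, double[][] p, double r) {
        // temp = H * P  (a 1 x n row vector)
        double[] temp = new double[h.length];
        for (int j = 0; j < h.length; j++) {
            for (int k = 0; k < h.length; k++) {
                temp[j] += h[k] * p[k][j];
            }
        }
        // s = temp * H^T + r  (a scalar)
        double s = r;
        for (int j = 0; j < h.length; j++) {
            s += temp[j] * h[j];
        }
        return s;
    }
}
```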

Thank you very much for these details! They are helpful. I think a better way to ask my question would be:

I believe that I usually provide values for matrix Q (process noise covariance) and matrix R (measurement/observation noise covariance) to a filter. I believe I am correctly providing Q as follows with doubles a, b, c, d, e, and f:

```
processNoiseCovMatrix = org.hipparchus.linear.DiagonalMatrix([a, b, c, d, e, f]);
provider = org.orekit.estimation.sequential.ConstantProcessNoise(processNoiseCovMatrix);
myEstimatorBuilder.addPropagationConfiguration(myPropagatorBuilder, provider);
```

Hopefully the above is the correct process for providing Q (process noise covariance) to the filter, but please let me know if I’m doing something wrong.

I would also like to provide a value for matrix R for the filter to use when it computes the innovation covariance matrix. Would it be correct for me to use the decorate method to do this? I don’t think I quite understand how to provide R via the decorate method.

Thank you!

You are very welcome!

You don’t really need to use the `decorate` method yourself; I was just explaining what happens under the hood of the Kalman filter in Orekit.

The matrix R is built automatically using the theoretical standard deviation (STD or \sigma) of each of the measurements involved in the considered Kalman step.

Examples:

- A `Range` measurement at a given `date` is built like:

```
range = new Range(station, isTwoWay, date, range_value, sigma_range, weight, satellite);
```

Here the STD of the measurement is given via the parameter `sigma_range` (\sigma_r), which is a `double[]` array of size 1 (since a range is a one-dimensional measurement).

Then, in the Kalman filter, the matrix R = [\sigma_r²] will be automatically built to compute the innovation covariance matrix.

- A `PV` measurement at a given `date`:

```
pv = new PV(date, position, velocity, covarianceMatrix, weight, satellite);
```

Here `covarianceMatrix` is the matrix R_{pv} (a `double[][]` in Java) of size 6x6, since a PV is a six-dimensional measurement.

With: R_{pv} = \begin{bmatrix}
\sigma_{x}² & \sigma_{xy} & \sigma_{xz} & \sigma_{xv_x} & \sigma_{xv_y} & \sigma_{xv_z}\\
\sigma_{xy} & \sigma_{y}² & & \ldots & & \sigma_{yv_z}\\
\vdots & & \ddots & & & \vdots\\
\vdots & & & & \ddots & \vdots\\
\sigma_{xv_z} & \ldots & & & \ldots & \sigma_{v_z}²
\end{bmatrix}

It will be used as-is (though normalized) in the computation of the innovation covariance matrix.

Note that there are other constructors for `PV`; I’ll let you check out the code to see how they work.

But basically, these constructors are simplifications that all end up building the covariance matrix R_{pv} in one way or another.
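As an illustration of what such a simplified constructor effectively produces — assuming one \sigma per position axis and one per velocity axis, with no cross-correlations (the helper below is made up for this sketch, it is not Orekit code):

```java
// Illustrative assembly of a diagonal 6x6 PV covariance from a position
// sigma and a velocity sigma applied to each of the three axes.
public class PvCovariance {
    public static double[][] fromSigmas(double sigmaPos, double sigmaVel) {
        double[][] r = new double[6][6];
        for (int i = 0; i < 3; i++) {
            r[i][i] = sigmaPos * sigmaPos;         // x, y, z
            r[i + 3][i + 3] = sigmaVel * sigmaVel; // vx, vy, vz
        }
        return r;
    }
}
```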

Does that answer your question?

Maxime

I think I’m starting to understand. So if I want the *actual* observations to have a different noise covariance from the noise covariance that the estimator *thinks* the observations have (R), could I use a measurement builder as follows?

```
actualNoiseSource = CorrelatedRandomVectorGenerator(actualMean, actualCovariance, small, generator);
raDecBuilder = AngularRaDecBuilder(actualNoiseSource, station, referenceFrame, theoreticalSigma, baseWeight, satellite);
```

Then the filter would use the value theoreticalSigma to compute the innovation covariance, but the actual measurements would have noise based on the values actualMean and actualCovariance. Is this correct?
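For reference, the kind of correlated noise draw that a correlated random vector generator performs can be sketched in plain Java via a Cholesky factorization — sample = mean + L·z with L·L^T = C and z standard normal. This is only an illustration of the principle, not Orekit/Hipparchus code:

```java
import java.util.Random;

public class CorrelatedNoise {
    // Lower-triangular Cholesky factor L of a symmetric
    // positive-definite matrix C, so that L * L^T = C.
    public static double[][] cholesky(double[][] c) {
        int n = c.length;
        double[][] l = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j <= i; j++) {
                double sum = c[i][j];
                for (int k = 0; k < j; k++) {
                    sum -= l[i][k] * l[j][k];
                }
                l[i][j] = (i == j) ? Math.sqrt(sum) : sum / l[j][j];
            }
        }
        return l;
    }

    // Draw one noise sample distributed with the given mean and
    // covariance factor L: x = mean + L * z, z standard normal.
    public static double[] sample(double[] mean, double[][] l, Random rng) {
        int n = mean.length;
        double[] z = new double[n];
        for (int i = 0; i < n; i++) {
            z[i] = rng.nextGaussian();
        }
        double[] x = new double[n];
        for (int i = 0; i < n; i++) {
            x[i] = mean[i];
            for (int j = 0; j <= i; j++) {
                x[i] += l[i][j] * z[j];
            }
        }
        return x;
    }
}
```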