Covariance Propagation

I am currently looking to propagate the covariance from the final state of a Kalman filter solution. After running the filter via KalmanFilter.processMeasurements(&lt;list of meas&gt;) I get a propagator, which I then set to master mode. I would like to propagate the state and covariance simultaneously.

I am able to get the covariance and state transition matrix (STM) from the final state through the KalmanEstimation.getPhysicalEstimatedCovarianceMatrix() and KalmanEstimation.getPhysicalStateTransitionMatrix() methods, and have seen that the propagated covariance is P(t) = STM(t, t0) * P(t0) * STM(t, t0)^T + Q(t), where STM(t, t0) maps the state from t0 to t and Q is the process-noise matrix.

The Q matrix will be different for each timestep as t evolves, but is the same true for the STM? The STM I get back is just an array of doubles, so is it a static matrix in that equation?


Hi aerow610,

Yes, the STM contains the partial derivatives of current state (@t) with respect to initial state (@t0), so it evolves in time.

It is the “final” STM of the Kalman filter, taken when it has reached the last measurement, so it is a constant matrix rather than a function of time.
If you want to propagate from there with covariance you’ll have to:

  • Set up the propagator returned by the Kalman @t0 so that it computes partial derivatives (see KalmanModel#updateReferenceTrajectories);
  • At each step, compute the STM(t, t0) like in KalmanModel#getErrorStateTransitionMatrix (note that you do not need to normalize the matrix like it is done at the end of this method);
  • Use the equation you wrote in your original post to get the covariance matrix P(t).
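Those three steps boil down to applying the same equation repeatedly with a time-varying STM. As a minimal numeric sketch (plain numpy, not Orekit code; the constant-velocity dynamics and the sigma value are made up for illustration), this shows why the STM must be re-evaluated at every step rather than taken once from the filter:

```python
import numpy as np

def stm(dt):
    # Toy 1D constant-velocity dynamics (position, velocity): the state
    # transition matrix depends on the elapsed time dt, so it changes at
    # every output step; it is not a fixed matrix.
    return np.array([[1.0, dt],
                     [0.0, 1.0]])

def process_noise(dt, sigma_a=1e-3):
    # White-noise-acceleration Q; grows with dt like the Q(t) in the thread.
    return sigma_a**2 * np.array([[dt**4 / 4, dt**3 / 2],
                                  [dt**3 / 2, dt**2]])

def propagate_covariance(p0, dt):
    # P(t) = Phi(t, t0) * P(t0) * Phi(t, t0)^T + Q(t)
    phi = stm(dt)
    return phi @ p0 @ phi.T + process_noise(dt)

p0 = np.diag([1.0, 0.01])              # covariance at t0
p_10 = propagate_covariance(p0, 10.0)  # covariance 10 s after t0
```

In the real application the stm(dt) call is replaced by the partial derivatives pulled from the propagator at each step, as in the code posted further down.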

Tell me if you need more info.

The way that I’ve done this in the past is to create a “bad” measurement, defined at the time you want to predict to, that always gets rejected as an outlier. You need to construct an outlier detector and need to guarantee that the measurement will always be rejected.

It’s obviously more of a hack than Maxime’s suggestion, but I found that simpler than reconstructing the covariance propagation myself.

That’s a nice hack Mark :wink:
Thanks for the tip I never thought about doing this.

@MaximeJ thanks for pointing me towards those methods - I think I was able to get it to work. Python code for it is below.

@markrutten I also tried your method, and though it took longer to run than just propagating the covariance, it looked to yield more realistic results: the covariance was largest in the in-track direction, with small components in radial and cross-track. I think the reason for the discrepancy is the equation used to propagate the covariance; in particular, the added process-noise term has an increasing effect as the difference between t and t0 grows.

I am using the same process-noise matrix as described in Kalman Filter Covariance Propagation in RIC. Using the coefficients described in that paper (1.4e-9 radial, 2.4e-11 in-track, 5e-9 cross-track) yields a process-noise matrix with the largest growth in the radial direction, which is not realistic. These coefficients can of course be tweaked to match Mark’s results, but that seems artificial, more like fitting the results to the data than the other way around.
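A quick way to see how those coefficients behave is to evaluate Q directly. This numpy sketch (not Orekit) uses the same t^4/3, t^3/2, t^2 structure as the get_proc_noise_mat function later in the thread, with the paper's coefficients plugged in, and shows the position variances growing as t^4, which is why the Q term ends up dominating at long prediction spans:

```python
import numpy as np

# Coefficients from the referenced paper (radial, in-track, cross-track)
sigma = np.array([1.4e-9, 2.4e-11, 5e-9])

def proc_noise(t):
    # Same structure as get_proc_noise_mat in this thread:
    # position variance  ~ sigma^2 * t^4 / 3
    # pos/vel cross term ~ sigma^2 * t^3 / 2
    # velocity variance  ~ sigma^2 * t^2
    q = np.zeros((6, 6))
    for j in range(3):
        q[j, j] = sigma[j]**2 * t**4 / 3
        q[j, j + 3] = q[j + 3, j] = sigma[j]**2 * t**3 / 2
        q[j + 3, j + 3] = sigma[j]**2 * t**2
    return q

q1 = proc_noise(3600.0)  # after one hour
q2 = proc_noise(7200.0)  # after two hours
```

Doubling the prediction span multiplies every position variance by 16, so whichever axis has the largest coefficient will dominate the propagated covariance at long horizons regardless of the STM term.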

I found a newer paper on covariance realism which uses similar equations, but also adds a process-noise transition matrix into the equation. Is there a way to get that process-noise transition matrix out of Orekit? Paper is here

Here is the code I used for Maxime’s method:

# 'filt' is the KalmanEstimator and 'meas' an ArrayList of ObservedMeasurement
final_state_prop = filt.processMeasurements(meas)

# Setup the final propagator with Partial Derivatives
final_state_propagator = final_state_prop[0]
partial_derivatives = PartialDerivativesEquations('KalmanProp_PDE', final_state_propagator)
final_kalman_state = final_state_propagator.getInitialState()
final_state_pde = partial_derivatives.setInitialJacobians(final_kalman_state)

# Get Jacobians Mapper and reduce Final Covariance to a 6x6
mapper = [partial_derivatives.getMapper()]
# 'cov_out' is a list of covariance matrices for each step during the KalmanEstimation
final_kalman_covar = cov_out[-1]
final_state_covar = MatrixUtils.createRealIdentityMatrix(6)
for i in range(6):
    for j in range(6):
        final_state_covar.setEntry(i, j, final_kalman_covar.getEntry(i, j))
final_state_covar = [final_state_covar]

# The MasterMode propagator subclass
class FixedStepCovarHandler(PythonOrekitFixedStepHandler):
    def init(self, state, *args):
        self.states_dict = {}
        # 'filt_incl_covar' is a config Boolean specifying if covariance propagation is going to be used
        if filt_incl_covar:
            self.cov_dict = {}
            self.mapper = mapper[0]
            self.init_date = state.getDate()  # epoch of the final Kalman state (t0)
            self.init_covar = final_state_covar[0]

    def handleStep(self, state, is_last):
        epoch = state.getDate()  # key for the output dictionaries
        pvc = state.pVCoordinates
        self.states_dict[epoch] = [pvc.position.x, pvc.position.y, pvc.position.z, pvc.velocity.x, pvc.velocity.y, pvc.velocity.z]

        # 'filt_incl_covar' is a config Boolean specifying if covariance propagation is going to be used
        if filt_incl_covar:
            stm = get_stm(state, self.mapper)
            t_step = state.getDate().durationFrom(self.init_date)  # seconds since t0
            prop_covar = stm.multiply(self.init_covar.multiply(stm.transpose())).add(
                xform_vnc_mat_to_j2k(state, get_proc_noise_mat(t_step)).getSubMatrix(0, 5, 0, 5))
            # Outputs J2000 covariance lower-triangular for covariance in OEM ephemeris
            j2k_covar = [[], [], [], [], [], []]
            k = 1
            for i in range(6):
                for j in range(k):
                    j2k_covar[i].append(prop_covar.getEntry(i, j))
                k += 1
            self.cov_dict[epoch] = j2k_covar

        if is_last:
            pass  # any end-of-propagation handling would go here

# Function to get the STM
def get_stm(state_in, mapper):
    stm = MatrixUtils.createRealIdentityMatrix(6)
    dYdY0 = JArray('object')(6)
    for i in range(6):
        dYdY0[i] = JArray('double')([0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
    mapper.getStateJacobian(state_in, dYdY0)
    for i in range(6):
        row = JArray_double.cast_(dYdY0[i])
        for j in range(6):
            stm.setEntry(i, j, row[j])
    return stm

# Function to transform VNC to J2000
def xform_vnc_mat_to_j2k(curr_state, cov_mat_vnc):
    num_params = cov_mat_vnc.rowDimension
    jacobian_double_array = JArray_double2D(num_params, num_params)
    LOFType.VNC.transformFromInertial(curr_state.getDate(), curr_state.getPVCoordinates()).getInverse().getJacobian(
        CartesianDerivativesFilter.USE_PV, jacobian_double_array)
    jacobian = Array2DRowRealMatrix(jacobian_double_array)
    cov_mat_j2k_java = jacobian.multiply(cov_mat_vnc.multiply(jacobian.transpose()))
    return cov_mat_j2k_java

# Process noise function
def get_proc_noise_mat(t):
    q = Array2DRowRealMatrix(6, 6)
    for j in range(3):
        q.setEntry(j, j, (vnc_acc_sgma[j]**2 * t**4 / 3))
        q.setEntry(j, j+3, (vnc_acc_sgma[j]**2 * t**3 / 2))
    for k in range(3, 6):
        q.setEntry(k, k, (vnc_acc_sgma[k-3]**2 * t**2))
        q.setEntry(k, k-3, (vnc_acc_sgma[k-3]**2 * t**3 / 2))
    return RealMatrix.cast_(q)

Hi! I’m joining this thread because I would like to propagate an initial covariance matrix along an ephemeris without any measurements (I only have a SpacecraftState array and an initial covariance matrix, or an initial SpacecraftState plus an initial covariance matrix and its associated NumericalPropagator).

For each SpacecraftState computed, I used the following piece of Java code:

  double[][] jacobian = new double[6][6];
  s.getOrbit().getJacobianWrtParameters(angle, jacobian);
  RealMatrix stmMatrix = MatrixUtils.createRealMatrix(jacobian);
  RealMatrix realCovT0Matrix = MatrixUtils.createRealMatrix(covT0.getMatrix());
  RealMatrix covMatrix = stmMatrix.transpose().preMultiply(realCovT0Matrix.preMultiply(stmMatrix));

But, as stmMatrix is the identity, covMatrix stays constant and equal to covT0, whereas I expected an evolving covariance matrix.

Am I wrong ?

Hi @cjobic

In order to propagate a covariance, you have to use the PartialDerivativesEquations class to calculate the state transition matrix at each step.

In this topic you will find an example of how to do it with a numerical orbit propagator. If you have any questions after reading it, do not hesitate to ask.

Best regards,

Hi @bcazabonne .
Thanks a lot for your answer ! I’m gonna try this.


The use of process noise in Kalman filtering demands hands-on experimentation. How much you need depends strongly on the actual errors encountered when your data meets your model, and that will differ every time you apply the filter to a new situation. The values shown in any particular example, such as the papers you link, probably need to be adjusted by orders of magnitude to adapt them to your application.

Sweep each of your process-noise coefficients from, say, 10^-14 to 10^-4 and see where the filter gives the best results; in situations with large modeling errors the best setting can be quite close to where the filter diverges entirely. You are tuning the added process noise to balance the filter between ignoring the state and ignoring the measurements. There have been many attempts to formalize this, such as Akhlaghi, Zhou, and Huang (2017), but when beginning a project I find a little brute-force trial and error very instructive.
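To make that sweep concrete, here is a self-contained toy example (plain numpy, nothing Orekit-specific; the dynamics, noise levels, and unmodeled acceleration are all invented for the demonstration): a 1D constant-velocity Kalman filter tracks a target whose true motion includes a sinusoidal acceleration the filter’s model does not know about, and the process-noise density q is swept over ten decades to find the value that minimizes the RMS position error.

```python
import numpy as np

rng = np.random.default_rng(42)
dt, n = 1.0, 400
t = np.arange(n) * dt
# Truth: constant-velocity motion plus an unmodeled sinusoidal term,
# so the filter's dynamics model is deliberately wrong.
truth_pos = 0.5 * t + 5.0 * np.sin(0.05 * t)
meas = truth_pos + rng.normal(0.0, 1.0, n)   # position measurements, sigma = 1 m

F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity transition
H = np.array([[1.0, 0.0]])                   # position-only measurement
R = np.array([[1.0]])                        # measurement noise covariance

def run_filter(q):
    # Discrete white-noise-acceleration process noise scaled by density q
    Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    x, P = np.zeros(2), np.eye(2) * 10.0
    errs = []
    for k in range(n):
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        y = meas[k] - H @ x                  # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ y                        # update
        P = (np.eye(2) - K @ H) @ P
        errs.append(x[0] - truth_pos[k])
    return float(np.sqrt(np.mean(np.square(errs))))

qs = np.logspace(-14, -4, 11)                # the brute-force sweep
rms = [run_filter(q) for q in qs]
best_q = qs[int(np.argmin(rms))]
```

In a real orbit-determination setup the procedure is the same idea: rerun the filter over the same measurement arc for each candidate coefficient and compare the residual statistics before trusting any published value.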