Doubts on covariance propagation with stepHandler

I have some doubts regarding covariance propagation.
First of all, I’d like to ask you about some theoretical notions, just to make sure I fully understand the logic behind what is implemented in this regard.
Following Vallado’s “Fundamentals of Astrodynamics and Applications”, I understood that the state transition matrix and the error state transition matrix are two different things. Can you confirm that what I retrieve with MatricesHarvester.getStateTransitionMatrix is actually the error state transition matrix (which is nothing else than the Jacobian matrix dY/dY0)?

Going into the code, I initially implemented the following example:

    public static void main(String[] args) {
        // Initial state
        Vector3D position = new Vector3D(-605000.7922166, -5870000.2295111, 3493000.0531990);
        Vector3D velocity = new Vector3D(-1568.254290, -3702.348910, -6479.483950);
        PVCoordinates cartCoord = new PVCoordinates(position, velocity);

        AbsoluteDate startingEpoch = new AbsoluteDate(2022, 1, 1, 0, 0, 0.0, TimeScalesFactory.getTAI());
        AbsoluteDate finalEpoch = new AbsoluteDate(2022, 1, 1, 1, 0, 0.0, TimeScalesFactory.getTAI());

        CartesianOrbit initOrbit = new CartesianOrbit(cartCoord, FramesFactory.getGCRF(), startingEpoch, Constants.IERS2010_EARTH_MU);

        // Initial covariance
        double[][] cartCov = {{1.000000e+02, 1.000000e-02, 1.000000e-02, 1.000000e-04, 1.000000e-04, 1.000000e-04},
                              {1.000000e-02, 1.000000e+02, 1.000000e-02, 1.000000e-04, 1.000000e-04, 1.000000e-04},
                              {1.000000e-02, 1.000000e-02, 1.000000e+02, 1.000000e-04, 1.000000e-04, 1.000000e-04},
                              {1.000000e-04, 1.000000e-04, 1.000000e-04, 1.000000e-04, 1.000000e-06, 1.000000e-06},
                              {1.000000e-04, 1.000000e-04, 1.000000e-04, 1.000000e-06, 1.000000e-04, 1.000000e-06},
                              {1.000000e-04, 1.000000e-04, 1.000000e-04, 1.000000e-06, 1.000000e-06, 1.000000e-04}};
        RealMatrix covariance = new Array2DRowRealMatrix(cartCov);

        // Numerical propagator
        double[][] tolerance = NumericalPropagator.tolerances(1, 0.1, initOrbit, OrbitType.CARTESIAN);
        ODEIntegrator odeIntegrator = new DormandPrince853Integrator(0.01, 10, tolerance[0], tolerance[1]);
        NumericalPropagator prop = new NumericalPropagator(odeIntegrator);
        prop.setInitialState(new SpacecraftState(initOrbit));
        MatricesHarvester harvester = prop.setupMatricesComputation("stm", null, null);
        List<SpacecraftState> spacecraftStateList = new ArrayList<SpacecraftState>();
        prop.getMultiplexer().add(60, new PropagationStepHandler(spacecraftStateList, harvester, covariance));
        prop.propagate(finalEpoch);
    }
    // Step handler
    private static class PropagationStepHandler implements OrekitFixedStepHandler {

        private final List<SpacecraftState> spacecraftStateList;
        private final MatricesHarvester harvester;
        private RealMatrix covariance;

        public PropagationStepHandler(List<SpacecraftState> spacecraftStateList, MatricesHarvester harvester, RealMatrix covariance) {
            this.spacecraftStateList = spacecraftStateList;
            this.harvester = harvester;
            this.covariance = covariance;
        }

        public void handleStep(SpacecraftState currentState) {
            spacecraftStateList.add(currentState);
            RealMatrix dYdY0 = harvester.getStateTransitionMatrix(currentState);
            covariance = dYdY0.multiply(covariance.multiplyTransposed(dYdY0));
            System.out.println(currentState.getDate() + " = " + Math.sqrt(Math.pow(covariance.getEntry(0, 0), 2) +
                                                                          Math.pow(covariance.getEntry(1, 1), 2) +
                                                                          Math.pow(covariance.getEntry(2, 2), 2)));
        }
    }
With this implementation, the results are really bad. To get reasonable results, I had to change the handleStep method as follows:

        public void handleStep(SpacecraftState currentState) {
            // initialCovariance is the covariance passed to the constructor; it is never updated
            RealMatrix dYdY0 = harvester.getStateTransitionMatrix(currentState);
            RealMatrix currentCovariance = dYdY0.multiply(initialCovariance.multiplyTransposed(dYdY0));
            System.out.println(currentState.getDate() + " = " + Math.sqrt(Math.pow(currentCovariance.getEntry(0, 0), 2) +
                                                                          Math.pow(currentCovariance.getEntry(1, 1), 2) +
                                                                          Math.pow(currentCovariance.getEntry(2, 2), 2)));
        }

Based on this, I have some questions…
In this case, it seems to me that I am always computing the STM with respect to the initial reference conditions Y_0. Since the STM comes from a linearization process, isn’t it wrong to always refer to that initial point? That is why, in my previous version, I thought it was correct to set a different reference state on the harvester at each iteration, and then compute the new covariance matrix starting from the last one (and not always from the initial covariance matrix).
Referring to the Orekit tutorials (src/main/java/org/orekit/tutorials/propagation/, develop branch, Orekit tutorials GitLab), here too I don’t understand why we can compute the propagated covariance matrix after one entire day using an STM whose reference point is the very initial Y_0 conditions.

Sorry for the very long message.
Have a nice evening!


Hello @toquika,

Indeed, what you get is the Jacobian matrix dY/dY0, which is called the error state transition matrix \Phi in Vallado’s book.

Which propagator are you using? Calling harvester.setReferenceState(currentState) does not re-initialize the internal matrices, as stated in the Javadoc:

    /** Set up reference state.
     * <p>
     * This method is called whenever the global propagation reference state changes.
     * This corresponds to the start of propagation in batch least squares orbit determination
     * or at prediction step for each measurement in Kalman filtering. Its goal is to allow
     * the harvester to compute some internal data. Analytical models like TLE use it to
     * compute analytical derivatives, semi-analytical models like DSST use it to compute
     * short periodic terms, numerical models do not use it at all.
     * </p>
     * @param reference reference state to set
     */

If you want to re-initialize from current state I think you need to instantiate a new harvester.

State transition matrices are “transitive”, so both approaches give the same result, meaning:
With \Phi(t_i, t_j)=(\frac{\partial{y(t_i)}}{\partial{y(t_j)}})_{6 \times 6} (see for example Montenbruck and Gill, Satellite Orbits, equation (7.1)):
\Phi(t_2, t_0) = \Phi(t_2, t_1).\Phi(t_1, t_0) (see Vallado, p. 780, after equation (10-27))
Let P_0, P_1, P_2 be your covariances at times t_0, t_1, t_2
P_1 = \Phi(t_1,t_0).P_0.\Phi^T(t_1,t_0)
P_2 = \Phi(t_2,t_0).P_0.\Phi^T(t_2,t_0) (this is what is done in the tutorial)
P_2 = \Phi(t_2,t_1).\Phi(t_1,t_0).P_0.\Phi^T(t_1,t_0).\Phi^T(t_2,t_1)=\Phi(t_2,t_1).P_1.\Phi^T(t_2,t_1) (this is what you would like to do)
You see here that, if you don’t change your propagation model, starting from initial state at t_0 with P_0 or from an intermediate state at t_1 with P_1 will give you the same covariance P_2 at t_2.
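If it helps, the identity can also be checked numerically with a tiny standalone sketch. The 2x2 matrices below are made up just to exercise the algebra (real STMs are 6x6), and plain arrays are used instead of Hipparchus so it runs anywhere:

```java
// Numerical check of the "transitive" property:
// Phi(t2,t0).P0.Phi^T(t2,t0) equals Phi(t2,t1).P1.Phi^T(t2,t1)
// with P1 = Phi(t1,t0).P0.Phi^T(t1,t0).
public class TransitivityDemo {

    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    static double[][] transpose(double[][] a) {
        int n = a.length;
        double[][] t = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                t[i][j] = a[j][i];
        return t;
    }

    /** Phi . P . Phi^T */
    static double[][] propagateCovariance(double[][] phi, double[][] p) {
        return multiply(multiply(phi, p), transpose(phi));
    }

    public static void main(String[] args) {
        double[][] phi10 = {{1.0, 0.5}, {0.0, 1.0}}; // Phi(t1, t0), made up
        double[][] phi21 = {{1.0, 0.3}, {0.1, 1.0}}; // Phi(t2, t1), made up
        double[][] p0    = {{4.0, 0.2}, {0.2, 1.0}}; // initial covariance, made up

        // one shot from t0 (what the tutorial does): Phi(t2,t0) = Phi(t2,t1).Phi(t1,t0)
        double[][] p2Direct  = propagateCovariance(multiply(phi21, phi10), p0);
        // through the intermediate time t1 (what you would like to do)
        double[][] p2Chained = propagateCovariance(phi21, propagateCovariance(phi10, p0));

        // identical up to rounding
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                System.out.printf("%.12f vs %.12f%n", p2Direct[i][j], p2Chained[i][j]);
    }
}
```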

Side note: there is a merge request, mentioned on the forum and written by @bcazabonne, that intends to add an additional state for covariance propagation.

Hope this helps you,

Thank you very much @MaximeJ for your kind answer. It solved many of my doubts.

However, while I agree with your description of the “transitive” property of the state transition matrix, I still have a doubt about one point…
Referring to what you wrote as P_2 = \Phi(t_2,t_0)\;P_0\;\Phi^T(t_2,t_0), doesn’t this apply for “big” \Delta t only if the state transition matrix was computed analytically, assuming pure Keplerian orbital motion? When an analytical formulation is not possible, isn’t a linearization procedure needed, as for example in Markley’s method? In that case, I think short propagation intervals are needed. That is why I would not apply the “transitive” property here, but would rather compute the STM at each iteration and calculate P_i from P_{i-1}.
If you could explain this last point to me, I would really appreciate it.

Thank you and have a great day!

Hi @toquika

The partial derivatives computation using the chain rule (referred to as the “transitive” property in the messages above) is always correct, even for tremendously large \Delta t. It is just regular partial derivatives, nothing more. Indeed, a partial derivative can by itself be seen as a linearization process, as it may be used as

dY_i = \sum_j \frac{\partial Y_i}{\partial Y_{0,j}} dY_{0,j}

So when \Delta t is big, it is not the state transition matrix that becomes inaccurate, it is the way one uses it: we don’t consider higher-order derivatives. If we want better accuracy (colloquially known as a banana-shaped uncertainty region that follows the orbit curvature, instead of a regular ellipsoid shape tangent to the orbit), then one has to use more complex techniques like Taylor algebra.

In this case, you do not have a single matrix that you multiply by the initial uncertainties (hence a linear result); you have a multidimensional polynomial that also includes quadratic effects and more.
When using Taylor algebra, the first-order component is exactly the same as the state transition matrix. In a sense, computing a state transition matrix is Taylor algebra truncated to order 1. Going further does not change this order-1 term, it adds more terms.
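To make the order-1 truncation point concrete, here is a one-dimensional toy (an arbitrary nonlinear function, not an orbit model): the derivative plays the role of the STM, the first-order prediction degrades quadratically with the deviation size, and adding the order-2 Taylor term recovers most of the accuracy without changing the order-1 term:

```java
// 1-D toy: the derivative f'(x0) plays the role of the STM.
// The first-order ("STM") prediction degrades as dx grows, while adding
// the order-2 Taylor term improves accuracy -- same idea as Taylor algebra.
public class TaylorOrderDemo {

    static double f(double x)   { return Math.sin(x) + 0.5 * x * x; } // arbitrary nonlinear map
    static double df(double x)  { return Math.cos(x) + x; }           // exact first derivative
    static double d2f(double x) { return -Math.sin(x) + 1.0; }        // exact second derivative

    public static void main(String[] args) {
        double x0 = 1.0;
        for (double dx : new double[] {1e-3, 1e-1}) {
            double exact  = f(x0 + dx);
            double order1 = f(x0) + df(x0) * dx;              // "STM" prediction
            double order2 = order1 + 0.5 * d2f(x0) * dx * dx; // one more Taylor term
            System.out.printf("dx=%5.0e  err1=%9.2e  err2=%9.2e%n",
                              dx, Math.abs(exact - order1), Math.abs(exact - order2));
        }
    }
}
```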

Orekit does support Taylor algebra. There is a tutorial for this: FieldPropagation, which uses Taylor algebra for a Monte-Carlo simulation, with 3 parameters and partial derivatives up to order 3 (i.e. each scalar component computed has one value and 19 partial derivatives associated with it). Beware that pushing the number of parameters or the order quickly gets out of hand, as the number of elements is \binom{n+o}{n}, so for example 6 parameters and derivatives up to order 6 leads to \binom{12}{6}=924, i.e. one value and 923 partial derivatives, so all primitive operations become costly (multiplying two values with all their derivatives implies performing at least 924 \times 924 = 853776 component multiplications).
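For reference, the element count \binom{n+o}{n} above can be checked with a few lines of plain long arithmetic (the multiplicative formula keeps every intermediate value integral, so it is exact at these sizes):

```java
// Number of Taylor coefficients for n free parameters and derivation order o
// is C(n + o, n), computed here with the exact multiplicative formula.
public class TaylorSizeDemo {

    static long binomial(int n, int k) {
        long result = 1;
        for (int i = 1; i <= k; i++) {
            // each intermediate product is a multiple of i, so the division is exact
            result = result * (n - k + i) / i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(binomial(3 + 3, 3)); // 3 params, order 3 -> 20 (1 value + 19 derivatives)
        System.out.println(binomial(6 + 6, 6)); // 6 params, order 6 -> 924
    }
}
```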

Thank you very much @luc for the reply.

I think I explained my doubt very badly, I am sorry for that. Indeed, my concern was only about how the STM itself, which you defined as Taylor algebra truncated to order 1, is computed. I was not asking what happens when higher-order derivatives of the dynamics are considered.

I think I found the answer to my question, so I would just like you to confirm whether what I understood is correct.
It seems to me (from your StateTransitionMatrixGenerator class) that the STM dynamics, defined as \dot{\Phi} = A\Phi, are directly integrated at each integration cycle along with the orbit dynamics. Is that right? Then I can use this STM, integrated up to the time of interest, for the covariance propagation mentioned before. If so, this answers my question of whether this STM is computed analytically (it does not seem so) or not.

Thank you, have a great day!


The STM is integrated alongside the orbit dynamics when it is computed by a numerical propagator, yes.
It is analytical when computed by an analytical propagator.
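If it helps to visualize the numerical case, here is a toy, self-contained sketch of the same idea on a harmonic oscillator. This is not Orekit’s StateTransitionMatrixGenerator, just a plain fixed-step RK4 integrating \dot{\Phi} = A\Phi together with the state; for the oscillator the exact STM is a rotation matrix, so the result can be checked directly:

```java
// Toy sketch: integrate dPhi/dt = A.Phi alongside the dynamics.
// Dynamics: harmonic oscillator (xdot = v, vdot = -x), A = [[0, 1], [-1, 0]],
// whose exact STM is Phi(t) = [[cos t, sin t], [-sin t, cos t]].
public class VariationalDemo {

    // combined state y = [x, v, phi00, phi01, phi10, phi11]
    static double[] derivatives(double[] y) {
        double[] yDot = new double[6];
        yDot[0] = y[1];                   // xdot = v
        yDot[1] = -y[0];                  // vdot = -x
        yDot[2] = y[4];  yDot[3] = y[5];  // row 0 of A.Phi = row 1 of Phi
        yDot[4] = -y[2]; yDot[5] = -y[3]; // row 1 of A.Phi = -row 0 of Phi
        return yDot;
    }

    static double[] axpy(double[] y, double[] k, double h) {
        double[] out = new double[y.length];
        for (int i = 0; i < y.length; i++) out[i] = y[i] + h * k[i];
        return out;
    }

    static double[] rk4Step(double[] y, double h) {
        double[] k1 = derivatives(y);
        double[] k2 = derivatives(axpy(y, k1, h / 2));
        double[] k3 = derivatives(axpy(y, k2, h / 2));
        double[] k4 = derivatives(axpy(y, k3, h));
        double[] out = new double[y.length];
        for (int i = 0; i < y.length; i++) {
            out[i] = y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        // initial state plus Phi(t0, t0) = identity
        double[] y = {1.0, 0.0, 1.0, 0.0, 0.0, 1.0};
        double h = 0.001, t = 1.0;
        for (int i = 0; i < (int) Math.round(t / h); i++) y = rk4Step(y, h);
        // compare integrated STM to the exact rotation matrix
        System.out.printf("phi00: %.6f vs %.6f%n", y[2], Math.cos(t));
        System.out.printf("phi01: %.6f vs %.6f%n", y[3], Math.sin(t));
    }
}
```

In Orekit the principle is the same: the STM columns are extra equations integrated together with the orbit, so the matrix is available at every step through the harvester.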