I believe I have discovered a synergistic bug when combining the Kalman filter code with both a DynamicOutlierFilter and a dynamic process noise generator. The short version: when a point is rejected by the dynamic outlier filter and the Kalman filter moves on to the next measurement, the timing value passed to the process noise generator is not updated to account for the fact that the filter skipped a measurement.

I apologize for not writing the update to fix the bug myself, but I'm afraid I just haven't found the time and I don't want to sit on this forever. Thank you for your help.

As Bryan said, it'd be great to have a runnable example.
In the meantime, just to clarify, you're talking about the extended Kalman filter with a custom inheritor of CovarianceMatrixProvider?

@bcazabonne @Serrof just wanted to say I'm sorry I haven't provided an example yet. I was busy working on another aspect of the problem I'm trying to solve (resolved, thankfully), and I was having trouble figuring out how to provide a good example: I'm working with the Python bindings instead of in the Java code base, and my code is tied pretty tightly into a big project that processes a very large amount of data. But I recently figured out how to generate a measurement file on the Python side that the Java side can subsequently read, so I will use that to make a working example and post it when I have it.

I considered posting a branch to the git repo, but since this bug does not actually break the running of the code, I'm not sure how much good that would do anyway. But if you take the attached input files and use them to run the KalmanNumericalOrbitDetermination.java tutorial, you will see the bug at line 733 of the KalmanModel.java class, where Orekit calculates the process noise for the Kalman filter:

if (nbDyn > 0) {
    final RealMatrix noiseP = covarianceMatricesProviders.get(k).     // <-- line 733
            getProcessNoiseMatrix(correctedSpacecraftStates[k],
                                  predictedSpacecraftStates[k]);
    noiseK.setSubMatrix(noiseP.getData(), 0, 0);
}

The measurement file contains some very obviously bad measurements, but if you place a breakpoint at this line while running the tutorial, you will observe at the 7th data point that the times used to calculate the process noise matrix are those of the 6th and 7th measurements, not the 3rd and 7th measurements as they should be, because the Kalman filter skipped values 4-6. Obviously, when the process noise is provided by a ConstantProcessNoise matrix this doesn't matter, but when it is provided by a UnivariateProcessNoise matrix you end up with the wrong answer.

@bcazabonne @Serrof I just wanted to check in and make sure you saw this. Is this post enough to explain the problem? Is there anything else you need/want from me?

I believe you have provided us with enough data and information, thank you for this. I'm a bit busy at the moment and probably won't be able to look at it soon. Maybe @markrutten with his knowledge of the Kalman code would solve this in no time?

@Serrof No problem at all, just wanted to make sure you had what you needed and this was on someone's priority list. I've been pretty busy too, or I'd fix it myself. If I do find the time to handle it I'll post here and let you know. Thanks for the help.

Note that if your covariance provider only depends on time (i.e., not on the actual state), a workaround is to wrap it as an attribute of an in-house provider, which would also be linked to a KalmanObserver keeping track of the date of the last non-rejected measurement.
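The workaround above could be sketched roughly like this (a minimal Python sketch, since the reporter is working through the Python bindings; all class and method names here are hypothetical stand-ins, not actual Orekit API):

```python
# Sketch of the workaround: a time-only process-noise provider that measures
# dt from the last *accepted* measurement epoch rather than from the previous
# filter step. Names are hypothetical, not Orekit classes.

class AcceptedEpochTracker:
    """Plays the role of a KalmanObserver: records the epoch of the
    last measurement that was not rejected."""
    def __init__(self, initial_epoch):
        self.last_accepted = initial_epoch

    def evaluation_performed(self, epoch, rejected):
        if not rejected:
            self.last_accepted = epoch


class TimeBasedProcessNoise:
    """Wraps a dt -> Q function; dt spans back to the last accepted epoch."""
    def __init__(self, q_of_dt, tracker):
        self.q_of_dt = q_of_dt
        self.tracker = tracker

    def process_noise(self, current_epoch):
        dt = current_epoch - self.tracker.last_accepted
        return self.q_of_dt(dt)


tracker = AcceptedEpochTracker(initial_epoch=0.0)
noise = TimeBasedProcessNoise(lambda dt: 1e-3 * dt, tracker)

tracker.evaluation_performed(1.0, rejected=True)   # rejected: epoch not stored
tracker.evaluation_performed(2.0, rejected=True)   # rejected: epoch not stored
assert noise.process_noise(3.0) == 1e-3 * 3.0      # dt spans back to epoch 0
tracker.evaluation_performed(3.0, rejected=False)  # accepted: epoch stored
assert noise.process_noise(4.0) == 1e-3 * 1.0      # dt now measured from 3.0
```

The key point is that the dt seen by the noise model is driven by the observer, not by the filter's internal previous/corrected state pair.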

I will take a closer look at the example over the weekend, but unless I'm misunderstanding the question, that's the behaviour I'd expect. A Kalman filter works in an iterative predict->update cycle and wouldn't go backwards in time.

If we have a state at time 0 and measurements at time 1 (rejected) and time 2 (valid), I would expect:

1. predict from time 0 to time 1 (including process noise)
2. update with measurement at time 1 (reject measurement, so just keep the predicted state)
3. predict from time 1 to time 2 (including process noise)
4. update with measurement at time 2

The filter skips the measurement at time 1 by just keeping the predicted state at time 1; it doesn't go back to time 0.
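This cycle can be illustrated with a minimal, self-contained 1D Kalman filter (a generic sketch, not Orekit code): even when the gate rejects the measurement at time 1, the prediction step has already added process noise, and the filter carries that inflated covariance forward.

```python
# Toy 1D Kalman filter with a chi-square outlier gate, illustrating the
# predict -> update cycle described above.

def predict(x, p, dt, q):
    # constant-state model: mean unchanged, variance grows by q per unit time
    return x, p + q * dt

def update(x, p, z, r, gate=9.0):
    s = p + r                       # innovation covariance
    if (z - x) ** 2 / s > gate:     # normalized innovation squared: reject outlier
        return x, p, True           # keep the predicted state and covariance
    k = p / s                       # Kalman gain
    return x + k * (z - x), (1 - k) * p, False

x, p = 0.0, 1.0
x, p = predict(x, p, dt=1.0, q=0.5)             # time 0 -> 1
x, p, rejected = update(x, p, z=100.0, r=1.0)   # gross outlier at time 1
assert rejected and p == 1.5                    # process noise already added
x, p = predict(x, p, dt=1.0, q=0.5)             # time 1 -> 2 (not 0 -> 2)
x, p, rejected = update(x, p, z=0.2, r=1.0)     # valid measurement at time 2
assert not rejected
```

Note that after the rejected update at time 1 the variance is 1.5, not the 1.0 it had at time 0: rejection keeps the prediction as-is rather than rolling the filter back.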

Thanks for tagging me @Serrof … I have been neglecting the forum.

@markrutten So yes, the filter skips the measurement at time 1 and keeps the predicted state, but does it add the process noise to the predicted covariance at time 1 even if measurement 1 is rejected?

Yes, that's what I would expect.
"Prediction" in this case means both the mean and covariance of the state. Covariance prediction includes process noise.

@markrutten So I went back and double-checked the code on the Java side, and you're right, that is how the filter behaves currently. But I think an argument could be made that this is not how the filter should behave. I'm still gaining experience with Kalman filters, so maybe I'm just making a fool of myself by arguing with an expert. However, for the example given above, if measurements 3-6 do not exist, the resulting predicted covariance matrix for measurement 7 would be different than if measurements 3-6 do exist and are rejected by the filter. This is doubly true if your process noise generator uses a polynomial or exponential function.

So my core question is: if a measurement is rejected, is it mathematically still valid to update the predicted covariance matrix with the process noise at that point? Or should the Kalman filter wait to add process noise to the covariance matrix until it has reached a valid data point?

The Kalman filter does not mathematically work in the way that you'd like.

State-space estimation practitioners (see e.g. Bayesian Filtering and Smoothing) generally suggest choosing a process noise which is consistent across time-steps, so a prediction (no measurement update) from 0->1->2 is the same as going directly from 0->2. That's not always possible or indeed appropriate for orbit determination.
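As a toy illustration of that consistency property (a sketch, not Orekit code): for a simple 1D random-walk model with Q = q*dt, chaining two predictions gives the same covariance as one long prediction, while a process noise that is, say, quadratic in dt does not.

```python
# Covariance prediction for a 1D random walk: F = 1, so P' = P + Q(dt).

def predict_cov(p, dt, q_of_dt):
    return p + q_of_dt(dt)

p0 = 1.0

# Q linear in dt: consistent across time steps, so 0->1->2 matches 0->2
q_lin = lambda dt: 0.5 * dt
two_steps = predict_cov(predict_cov(p0, 1.0, q_lin), 1.0, q_lin)
assert two_steps == predict_cov(p0, 2.0, q_lin) == 2.0

# Q quadratic in dt: NOT consistent, so intermediate (rejected) steps matter
q_quad = lambda dt: 0.5 * dt ** 2
two_steps = predict_cov(predict_cov(p0, 1.0, q_quad), 1.0, q_quad)
assert two_steps == 2.0                         # 1 + 0.5 + 0.5
assert predict_cov(p0, 2.0, q_quad) == 3.0      # 1 + 0.5 * 4
```

In this toy model the skipped-measurement question only changes the answer when Q is nonlinear in dt, which matches the observation earlier in the thread that ConstantProcessNoise is unaffected while a polynomial or exponential UnivariateProcessNoise is not.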

You are already implicitly using the predicted covariance (with process noise) to reject the measurements. The rejection step uses the innovation covariance, which is calculated with the predicted state covariance.

The point that the process noise added to the propagated covariance is used for acceptance/rejection of the measurement is valid, and one I had not considered. Thank you for pointing this out. I'm in a situation right now where I'm still learning the nuances of the processes I'm trying to work out, and nobody else around me knows them at all, so I don't have many places to go for advice on how to handle issues. I'm sorry I raised a false bug report, but for what it's worth, this discussion has been helpful to me.