I have been pretty active in the forum with questions related to the Kalman filter, as I am working on it both for a project and for my PhD thesis.
At the beginning of the project I used ConstantProcessNoise (which is the one recommended in the forums). However, it is still not entirely clear to me what this process noise does. From what I have understood, we define the initial uncertainty of the orbit with the initial covariance matrix and then, at each Kalman filtering step, the process noise matrix we define (Q) is added to that uncertainty. Is this correct, or does it depend on the time step? The latter is what I would have expected, since for a longer time step more noise should be added to the uncertainty.
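To make the question concrete, this is the covariance prediction step as I understand it (standard extended Kalman filter notation, not Orekit-specific):

$$
P_k^- = \Phi_k\, P_{k-1}^+\, \Phi_k^T + Q_k
$$

where \Phi_k is the state transition matrix over the step and Q_k is the process noise matrix. So my question is really whether Orekit's ConstantProcessNoise returns the same Q at every step, or a Q_k that scales with the step length \Delta t_k.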
Separately, I am working on the process noise itself (which is a really hard topic), and it would be of tremendous help to know whether the classic process noise methods are implemented in Orekit:
State Noise Compensation (SNC): Is this what ConstantProcessNoise does? (A sketch of the SNC formulation I have in mind is given right after these questions.)
Covariance Matching (CM)?
Dynamic Model Compensation (DMC): Is there any way to implement this using existing Orekit features? Does UnivariateProcessNoise have anything to do with it? I suspect not, since that class only builds a process noise that varies according to UnivariateFunctions defined by the user.
Adaptive versions of the previous ones (ASNC, ADCM)?
Are Gauss-Markov processes implemented in Orekit to be used in the process noise?
Regarding, again, the UnivariateProcessNoise: could I define the UnivariateFunctions with some parameters (e.g., \sigma_1\cdot x + \sigma_2) and estimate/correct those parameters during the Kalman run? For example, using the innovation matrix to increase/decrease them, or estimating them from the dynamics and the measurements.
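For reference, by SNC I mean the classic formulation where the unmodelled accelerations are treated as zero-mean white noise with spectral density \sigma^2, which gives a process noise in position/velocity coordinates that explicitly depends on the time step:

$$
Q(\Delta t) = \sigma^2
\begin{pmatrix}
\frac{\Delta t^3}{3}\, I_3 & \frac{\Delta t^2}{2}\, I_3 \\
\frac{\Delta t^2}{2}\, I_3 & \Delta t\, I_3
\end{pmatrix}
$$

so, unlike a constant Q, this one grows with \Delta t.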
I know there are a lot of questions, most of them hard to answer, but any help would be tremendous. Thanks a lot to everyone.
Yes, ConstantProcessNoise just adds a constant matrix regardless of the size of the time step. Obviously this is wrong, but technically so is pure 2-body Keplerian motion. It’s meant to be a basic class for simpler problems.
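For concreteness, a minimal construction sketch (assuming the two-argument constructor that takes the initial covariance and the constant process noise matrix; check the Javadoc for your Orekit version, and note the numbers are arbitrary):

```java
import org.hipparchus.linear.MatrixUtils;
import org.hipparchus.linear.RealMatrix;
import org.orekit.estimation.sequential.ConstantProcessNoise;
import org.orekit.estimation.sequential.CovarianceMatrixProvider;

// Initial covariance: (100 m)^2 on position, (0.1 m/s)^2 on velocity -- illustrative values only
final RealMatrix initialP = MatrixUtils.createRealDiagonalMatrix(
        new double[] { 1.0e4, 1.0e4, 1.0e4, 1.0e-2, 1.0e-2, 1.0e-2 });

// Process noise: this exact matrix is returned at every step, whatever the elapsed time
final RealMatrix q = MatrixUtils.createRealDiagonalMatrix(
        new double[] { 1.0, 1.0, 1.0, 1.0e-6, 1.0e-6, 1.0e-6 });

final CovarianceMatrixProvider processNoise = new ConstantProcessNoise(initialP, q);
```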
Yes, UnivariateProcessNoise allows you to plug in polynomial functions to model your orbital and propagation parameter process noise values. These can be linear or higher-order polynomials that do in fact take the length of time since the last step into account. Tuning the functions to behave well is as much art as science, however, and how much work you put into it and what sort of values you use depend heavily on what you’re specifically doing and how good the results actually have to be.
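As a purely illustrative example of what those functions can look like, using Hipparchus PolynomialFunction, where the argument handed to each function is the time elapsed since the last estimation step:

```java
import org.hipparchus.analysis.UnivariateFunction;
import org.hipparchus.analysis.polynomials.PolynomialFunction;

// One function per LOF Cartesian component (x, y, z, vx, vy, vz).
// PolynomialFunction takes coefficients in increasing degree: {c0, c1} -> c0 + c1 * dt,
// so these are simple linear growths with the time dt since the last estimation step.
// The numbers are placeholders, not tuned values; see the UnivariateProcessNoise Javadoc
// for how the function outputs are actually turned into covariance entries.
final UnivariateFunction[] lofCartesianEvolution = new UnivariateFunction[] {
    new PolynomialFunction(new double[] { 0.0, 1.0e-1 }),   // position components
    new PolynomialFunction(new double[] { 0.0, 1.0e-1 }),
    new PolynomialFunction(new double[] { 0.0, 1.0e-1 }),
    new PolynomialFunction(new double[] { 0.0, 1.0e-4 }),   // velocity components grow more slowly
    new PolynomialFunction(new double[] { 0.0, 1.0e-4 }),
    new PolynomialFunction(new double[] { 0.0, 1.0e-4 })
};

// One function per estimated propagation parameter (drag coefficient, SRP, ...), none here.
final UnivariateFunction[] propagationEvolution = new UnivariateFunction[0];

// These arrays are what you hand to the UnivariateProcessNoise constructor, together with
// the initial covariance, the local orbital frame type and the position angle
// (check the exact constructor signature for your Orekit version).
```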
It is possible to write your own custom child class of AbstractCovarianceMatrixProvider that accepts additional inputs and uses them to modify the process noise it returns. For example, if you are propagating an orbit that includes the drag propagation parameter, you might make a custom child class that takes the orbit altitude as an input, since the amount of uncertainty that drag introduces into the propagation varies wildly with altitude (ask me how I know).
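Here is a minimal sketch of that idea. For brevity I implement the CovarianceMatrixProvider interface directly (a child class of AbstractCovarianceMatrixProvider would look much the same); the altitude scaling law and all the numbers are placeholders to be replaced by your own tuning:

```java
import org.hipparchus.linear.MatrixUtils;
import org.hipparchus.linear.RealMatrix;
import org.orekit.estimation.sequential.CovarianceMatrixProvider;
import org.orekit.propagation.SpacecraftState;
import org.orekit.utils.Constants;

/** Illustrative process noise provider whose Q grows with the step length
 *  and is inflated at low altitudes where drag mismodelling dominates. */
public class AltitudeScaledProcessNoise implements CovarianceMatrixProvider {

    private final RealMatrix initialCovariance;
    private final double[] noiseRates; // per-second growth rates of the diagonal terms

    public AltitudeScaledProcessNoise(final RealMatrix initialCovariance, final double[] noiseRates) {
        this.initialCovariance = initialCovariance;
        this.noiseRates = noiseRates.clone();
    }

    @Override
    public RealMatrix getInitialCovarianceMatrix(final SpacecraftState initial) {
        return initialCovariance;
    }

    @Override
    public RealMatrix getProcessNoiseMatrix(final SpacecraftState previous, final SpacecraftState current) {
        // time elapsed since the previous estimation step
        final double dt = current.getDate().durationFrom(previous.getDate());

        // crude altitude estimate above the equatorial radius (illustrative only)
        final double altitude = current.getPVCoordinates().getPosition().getNorm()
                              - Constants.WGS84_EARTH_EQUATORIAL_RADIUS;

        // arbitrary inflation law: more noise below ~500 km (placeholder, to be tuned)
        final double scale = altitude < 500.0e3 ? 10.0 : 1.0;

        final double[] diagonal = new double[noiseRates.length];
        for (int i = 0; i < diagonal.length; i++) {
            diagonal[i] = scale * noiseRates[i] * dt;
        }
        return MatrixUtils.createRealDiagonalMatrix(diagonal);
    }
}
```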
To my knowledge, UnivariateProcessNoise is the most complex process noise input native to Orekit at present. If you do end up writing a custom process noise class that has both greater sophistication and general utility, I’m sure nobody would object to you contributing it to the general repository.
@markrutten I am not the expert here. Did I misrepresent anything? Anything you’d like to add?
For what it’s worth, pre-dating the OREKIT Kalman Filter and process noise, we have implemented the introduction of process noise into our Unscented Kalman Filter (built on OREKIT astrodynamics). We cannot contribute it, as the IP is owned by our customer and it is very different from the OREKIT library.
We add process noise with the diagonal velocity covariance growing linearly at a set (tunable) rate in each of the TVN directions of choice (we find along-velocity is generally sufficient). This linear growth represents random velocity fluctuations. Adding fixed values at random observation times has no basis in physics as far as I know.
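In symbols (my notation), the small contribution we add over a short step \delta t is of the form

$$
\delta Q_{vv} = \operatorname{diag}(q_T,\, q_V,\, q_N)\,\delta t
$$

expressed in the TVN frame, where the rates q_T, q_V, q_N are the tunable knobs and a direction is switched off simply by setting its rate to zero.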
We increment the noise into the noise matrix in time steps small enough (fractions of an orbit) that we can capture the non-linear dynamics. We accumulate this into the noise matrix over longer periods of time (until the noise matrix is “too big” or a Kalman update is needed) by propagating the small accumulated noise matrix with the STM across these short steps and adding the new contribution each time. This allows all the essential off-diagonal terms to develop. Eventually the noise is combined with the state covariance, new sigma points are produced for the state covariance, and the noise matrix falls back to zero.
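Schematically, with \Phi_i the STM across short step i and \delta Q_i the small contribution added in that step, the accumulation is

$$
Q_i = \Phi_i\, Q_{i-1}\, \Phi_i^{T} + \delta Q_i, \qquad Q_0 = 0,
$$

which is what lets the essential off-diagonal terms build up between updates.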
We started off much simpler but ended up with this model. As well as performing better than just adding linear noise, both for the Kalman filter and for general propagation, this approach is essential for filter smoothing (at least with our algorithm).