I am trying out the new(ish) implementation of the UKF in hipparchus. My first step with a new filter is to compare a simple linear case with the EKF. The UKF and EKF should be numerically identical in that case. In my simple test the hipparchus implementations match with zero process noise, but do not match with non-zero process noise.

I think there’s an issue with the implementation, where the sigma-points of the predicted measurements do not account for the process noise. The process noise is added to the predicted state, but that is only used in the predicted ProcessEstimate, not in the filter update step?

It’s possible that I’m misunderstanding what needs to be in my getEvolution method. I can force the UKF to give me the correct results, but getEvolution becomes very cumbersome in that case … and it feels like I’m doing a lot of the work that the filter implementation should be doing.

I’ve attached my test code so you can see how I approached the problem.

I’ve looked at this more closely and I’m confident that there is a problem with the UKF implementation in Hipparchus. The implementation is “additive”, not the “augmented” filter described in the reference (Wan, van der Merwe), which might have introduced the error. The book by Sarkka is a more accessible introduction to filtering, which makes this more explicit. See algorithm 5.14 (pg 87) vs algorithm 5.15 (pg 88).

I’d be happy to make the code changes to fix this, but I can’t see how it would fit into the current UnscentedProcess interface. Is it OK to submit code that would change the interface?

Yes, it is OK. The next Hipparchus version will be a major version (3.0), and other incompatible changes have been introduced in the development branch for it.

Thank you for your analysis. I just have a comment.

Is it a problem, or just two different implementations of the UKF algorithm?
Hipparchus follows algorithm 5.14, which is not a wrong implementation (I didn’t see anywhere in the book that the additive version is wrong). In my opinion, the augmented version is just another version of the filter. I see that as an enhancement rather than a bug in the current Hipparchus algorithm. Therefore, maybe it would be useful to have the choice between both algorithms. What do you think?

Apologies for the confusion @bcazabonne. It is a problem with the implementation. I’m not suggesting we need the augmented form of the UKF.

Referring to algorithm 5.14, the Hipparchus implementation is missing Step 3 of “prediction”, which calculates the predicted mean and covariance. The measurement sigma-points are then based on this predicted mean and covariance, which doesn’t happen in the Hipparchus implementation (unless you do a lot of extra work in getEvolution as I showed in the first post of this thread).
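To make the missing step concrete, here is a minimal sketch in plain numpy (this is illustrative only, not the Hipparchus API; all function names are hypothetical) of one additive-UKF step following Särkkä’s algorithm 5.14, including the Step 3 redraw of sigma points from the predicted mean and covariance:

```python
import numpy as np

def merwe_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled sigma points and weights (van der Merwe parameterisation)."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    U = np.linalg.cholesky((n + lam) * P)
    pts = np.vstack([x] + [x + U[:, i] for i in range(n)]
                        + [x - U[:, i] for i in range(n)])
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return pts, wm, wc

def ukf_step(x, P, f, h, Q, R, z):
    """One additive-UKF predict/update cycle (sketch of Sarkka alg. 5.14)."""
    # Prediction, steps 1-2: propagate sigma points drawn from (x, P).
    chi, wm, wc = merwe_points(x, P)
    prop = np.array([f(s) for s in chi])
    x_pred = wm @ prop
    d = prop - x_pred
    P_pred = (wc[:, None] * d).T @ d + Q      # additive process noise

    # Prediction, step 3 (the missing part): redraw sigma points from the
    # *predicted* mean and covariance, so the measurement sigma points see
    # the process noise contained in P_pred.
    chi2, wm2, wc2 = merwe_points(x_pred, P_pred)
    zeta = np.array([h(s) for s in chi2])
    z_pred = wm2 @ zeta
    dz = zeta - z_pred
    S = (wc2[:, None] * dz).T @ dz + R        # innovations covariance
    C = (wc2[:, None] * (chi2 - x_pred)).T @ dz  # cross-covariance

    # Update.
    K = C @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ S @ K.T
    return x_new, P_new
```

With the redraw in place, this step reproduces the linear Kalman filter exactly when f and h are linear, including for non-zero Q.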

I am currently working with UKF and am interested in this post.

I also agree with @markrutten about the current UKF in Orekit/Hipparchus. It does not implement equation 5.87. In the current implementation in Orekit, the system outputs are generated from the sigma points calculated as per 5.84. The linked book recalculates the sigma points of the prediction step using the prediction error covariance matrix before evaluating the output equation.

I am not certain what the accepted standard for the UKF is. I have seen both implementations in the learning materials I was referring to. I would appreciate some clarification.

Hi @niluj … it’s definitely not about an accepted standard. There is only one correct way.

That reference you posted is interesting. It’s fine to calculate the predicted measurement from the points without process noise in the additive case, but it makes things confusing. Can you show the part of the algorithm where the innovations covariance is calculated?

Thank you @markrutten for the clarifications, your comment is right. @niluj’s comment is also right. Hipparchus doesn’t apply Eq. 5.87, but we have seen both implementations in the literature.

When we started implementing the UKF in Hipparchus, we saw both algorithms: (1) performing the UT only at the beginning of the step (the current implementation) and (2) performing the UT at the beginning of the step and again just before the update, using the predicted state. We decided to follow (1) because it was easier to implement for a first version and for an internship.

@markrutten your remark is important because we also wanted to implement version (2), as you propose. However, as we implemented the filter during an internship, time constraints didn’t allow us to implement version (2). We would be very happy to have your contribution for version (2). I just have a question: do you think that having the choice between (1) and (2) is possible? Using a boolean or something else. The default value of this boolean could be consistent with (2).

I agree @Not_A_Baysean_Fan! But I think that the Sarkka book is correct and the book that @niluj refers to is wrong.

I’m not sure how to make a convincing argument, but I can start with some maths. I’m going to try to show that the innovations covariance is wrong with @bcazabonne’s option (1). This relies on us agreeing that the KF and the UKF give the same results for a linear problem (which they must). Using the notation from above:

Comparing (16) and (19) we can see that the process noise is missing from (16). The two will agree if there’s zero process noise, but this way of implementing the UKF is wrong if there’s non-zero process noise.
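For anyone who wants to check this numerically, here is a small demonstration in plain numpy (illustrative only, not the Hipparchus API). For a linear problem the unscented transform is exact, so with option (1) the innovations covariance comes out as H F P Fᵀ Hᵀ + R, while the Kalman filter value is H (F P Fᵀ + Q) Hᵀ + R; redrawing sigma points from the predicted covariance (option (2)) recovers the Kalman filter value:

```python
import numpy as np

def merwe_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled sigma points and weights (van der Merwe parameterisation)."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    U = np.linalg.cholesky((n + lam) * P)
    pts = np.vstack([x] + [x + U[:, i] for i in range(n)]
                        + [x - U[:, i] for i in range(n)])
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return pts, wm, wc

def ut_cov(pts, wm, wc, noise):
    """Weighted mean and covariance of a sigma-point set, plus additive noise."""
    mean = wm @ pts
    d = pts - mean
    return mean, (wc[:, None] * d).T @ d + noise

# A simple linear problem.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # measurement matrix
Q = 0.1 * np.eye(2)                      # process noise
R = np.array([[0.5]])                    # measurement noise
x = np.array([1.0, 0.5])
P = np.eye(2)

# Kalman filter innovations covariance: H (F P F^T + Q) H^T + R.
S_kf = H @ (F @ P @ F.T + Q) @ H.T + R

# Option (1): measurement sigma points reuse the propagated state points,
# so Q never reaches the innovations covariance.
chi, wm, wc = merwe_points(x, P)
prop = chi @ F.T
x_pred, P_pred = ut_cov(prop, wm, wc, Q)
_, S1 = ut_cov(prop @ H.T, wm, wc, R)

# Option (2): redraw sigma points from (x_pred, P_pred) before the update.
chi2, wm2, wc2 = merwe_points(x_pred, P_pred)
_, S2 = ut_cov(chi2 @ H.T, wm2, wc2, R)

print("KF:", S_kf[0, 0], " option (1):", S1[0, 0], " option (2):", S2[0, 0])
```

With these numbers, option (2) agrees with the Kalman filter while option (1) is short by exactly the H Q Hᵀ term (0.1 here).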

I think we disagree. Not about which algorithm is right or wrong, but about how to say things.

My concern is not about which algorithm is right or wrong. On the contrary, I find Algorithm (2) much more interesting than (1). My concern is to have in Hipparchus as many possibilities as can be found in the literature.

For instance, [1] and [2] present both implementations. Neither of these references accuses an algorithm of being wrong. They just present both. Therefore, I find it important, for a library that aims to offer as many possibilities as possible to its users, to have both.

Again, I prefer Algorithm (2) too! I’m just asking for two possibilities instead of just one in the code.

As you say, that first reference states “In the general UKF formulation, the transformed state samples may then be used directly in measurement processing or formed into a sample mean and covariance, where the new covariance is resampled prior to measurement processing. Inclusion of process noise as described in Equation 24 dictates that our UKF implementation use the latter strategy of re-sampling an updated state-error covariance.”

Your second reference is one of the most cited UKF papers, but I’m confident that there’s a typo in their formulation (both the standard and the square-root) as I showed above. That’s really frustrating!

I’m reluctant to implement something that I know is wrong.

I have changed the code (to option 2) ready for a pull request, but need to do some more work to understand the tests (the radar test isn’t matching perfectly). I’ll give option 1/2 some more thought …

The reference below is a more comprehensive paper by Wan and van der Merwe. In the additive form of the UKF (Table 7.3, page 233) they are more careful about distinguishing between the sigma points used for state prediction and those used to calculate the measurement sigma points (starred vs non-starred symbols). They have a non-standard way of augmenting the sigma-point set, but the algorithm notes say “alternatively, redraw a new set of sigma points that incorporate the additive process noise”. They don’t explicitly say that the algorithms in the square-root paper have a typo, but these more thorough descriptions are clearly different in that respect.

The “radar” test references the filterpy python package. The latest release of filterpy (which was at the end of 2018) contains the same problem with the UKF! They’ve fixed it in the source code on GitHub in this change, but again, that’s really frustrating!

The fact that the second paper by Wan and van der Merwe also presents both versions of the algorithm confirms my wish that both versions be available in Hipparchus.

Haha. Sorry @bcazabonne I’m not doing a very good job of explaining this.

There is only one UKF algorithm described in the Wan and van der Merwe chapter that I attached. It was published the same year as their square-root paper (that you attached), but the book chapter contains a more thorough description of the UKF. Their description of the algorithm in the book chapter is more comprehensive and, importantly, it is different to the square-root paper (i.e. doesn’t contain the mistakes).

I was trying to use that as an argument for the fact that there is only one UKF, but unfortunately sometimes even the experts make mistakes when describing it.

Sorry for the delay… I tried to contact Dr. Wan to get additional information about the two algorithms. Unfortunately, I didn’t get any answer.

The paper presents two algorithms: one without process noise addition, which performs only one unscented transform, and another with process noise addition, which performs two unscented transforms.

Currently, Hipparchus’ UKF is a mix of both (i.e., process noise addition but only one unscented transform). Having a mixed version is not a good option, but having both would be an interesting one.