Discussing addition of a collision package to Orekit

Hello everyone,

I’m opening this thread following merge request #290, which I opened recently, to present the following additions I have been working on:

Collision package

The main addition would be the new collision package, which contains the CollisionDefinition class as well as multiple methods to compute the probability of collision between two collision objects.

Probability of collision

The following probability of collision computation methods would be added:

  1. Foster1992 (recommended method)
  2. Alfriend1999 (maximum probability of collision method)
  3. AlfriendMax1999 (maximum probability of collision method)
  4. Chan2008
  5. Alfano2005
  6. Serra2015 (recommended method)

In addition, the CollisionDefinition class completely defines the collision between two given collision objects. Its main outputs are:

  • Mahalanobis distance.
  • Diagonalized combined covariance in the collision plane.
  • The projected position of the secondary collision object onto the rotated collision plane (primary collision object at the origin).
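To illustrate the first output, here is a minimal plain-Java sketch (class and method names are hypothetical, not the actual Orekit API) of the Mahalanobis distance in the collision plane, assuming the combined covariance has already been diagonalized:

```java
// Illustrative sketch (not the actual Orekit API): Mahalanobis distance of
// the secondary object's projected position in the collision plane, assuming
// the combined covariance has been diagonalized so that sigmaX and sigmaY
// are the standard deviations along the plane's principal axes.
public class CollisionPlaneSketch {

    /** Mahalanobis distance for a diagonal 2x2 covariance:
     *  sqrt((x/sigmaX)^2 + (y/sigmaY)^2). */
    public static double mahalanobisDistance(double x, double y,
                                             double sigmaX, double sigmaY) {
        double nx = x / sigmaX;
        double ny = y / sigmaY;
        return Math.sqrt(nx * nx + ny * ny);
    }

    public static void main(String[] args) {
        // Secondary projected at (300 m, 400 m), with 100 m / 200 m deviations.
        System.out.println(mahalanobisDistance(300.0, 400.0, 100.0, 200.0));
    }
}
```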

FrameTools

Another addition would be the FrameTools class, in the frames package, used to convert between frames given either a CCSDS frame facade or an Orekit Frame.

ExtremumApproachDetector

I would also like to add the ExtremumApproachDetector, in the propagation.events package, which detects the farthest and closest approaches with respect to another given PVCoordinatesProvider.

IntegratorBuilder (Independent from the collision package)

Finally, this addition is optional as it is completely independent from the collision package. It adds the possibility to create an ODEIntegratorBuilder from an AbsolutePVCoordinates by using the already existing NumericalPropagator.tolerances(double dP, AbsolutePVCoordinates absPva) method.

Suggested process

After discussing these additions with @pascal.parraud and @bcazabonne, I’m suggesting the following process:

  • Open independent issues for IntegratorBuilder, FrameTools, ExtremumApproachDetector and the collision package.
  • Open merge requests in the following order:
    1. IntegratorBuilder (at any time, as it is independent from the rest).
    2. ExtremumApproachDetector (in whatever order, as long as it is added before the collision package).
    3. FrameTools (in whatever order, as long as it is added before the collision package).
    4. Collision package.

Hi Vincent,

I think the collision package will be a great addition. Thanks for your contribution!

The proposed process sounds good to me. I did look through the MR, and the code appears to be well written and consistent with the existing style, so I don’t expect it will take too long to merge.

Regards,
Evan

Hi Evan,

Thank you for your review!

In that case, I’ll start opening the planned issues and merge requests for them. I’ll wait for the ExtremumApproachDetector and the FrameTools to be added before opening a merge request for the collision package.

Best regards,
Vincent

Hey there,

Happy to see that our method is a recommended one :wink: Objectively, it’s probably the best “analytical” (or let’s say, series-expansion-based) approach. In my opinion, the best “numerical” (or quadrature-based) one is Patera’s 2005 method, with a careful implementation, so it should definitely make the cut for the proposed package. I’m assuming they would all come with a Field version (sorry, I don’t have time right now to dig into the branch)?

As far as the architecture is concerned, I think it should be made clear that these are all methods for collision probability in the so-called short-term encounter model, where the objects are assumed to have rectilinear relative motion with independent, Gaussian-distributed position vectors at the time of closest approach (note that velocities are then considered deterministic). There are other models for probabilities, and there are also other types of metric: the Mahalanobis distance already mentioned, but also ones not based on probabilistic distributions, like the outer possibility measures. What I mean is that there should be interfaces and abstract classes to reflect this arborescence of metrics, for future development.
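Such an arborescence could be sketched as follows (hypothetical names, purely illustrative, not the actual Orekit API): a generic risk-metric abstraction at the top, with the probability of collision as one concrete kind of metric among others.

```java
// Illustrative sketch of a metric hierarchy (hypothetical names):
// a generic collision-risk metric at the top, with concrete metrics below.
public class MetricHierarchySketch {

    /** Top-level abstraction: any collision risk metric yields a value. */
    public interface CollisionRiskMetric {
        double getValue();
    }

    /** A probability of collision: one particular, probabilistic metric. */
    public static final class ProbabilityOfCollision implements CollisionRiskMetric {
        private final double value;
        public ProbabilityOfCollision(double value) { this.value = value; }
        @Override public double getValue() { return value; }
    }

    /** Mahalanobis distance at TCA: a metric, but not a probability. */
    public static final class MahalanobisDistance implements CollisionRiskMetric {
        private final double value;
        public MahalanobisDistance(double value) { this.value = value; }
        @Override public double getValue() { return value; }
    }

    public static void main(String[] args) {
        CollisionRiskMetric poc = new ProbabilityOfCollision(1.2e-5);
        CollisionRiskMetric md  = new MahalanobisDistance(3.6);
        System.out.println(poc.getValue() + " " + md.getValue());
    }
}
```

New kinds of metric (possibility measures, miss distance, …) would then slot in as further implementations without touching existing code.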

That kind of brings me to my next point. To me, these methods are so low level that they should be in Hipparchus, not Orekit. Intrinsically they have nothing to do with orbital mechanics; they’re purely mathematical. To be more precise, the calculation can be interpreted (at least for the methods not concerned with the maximum value) as the cumulative distribution function of a random variable that is the sum of squares of two arbitrary Gaussian variables. In other words, it is the generalization of the non-central chi-square law with two degrees of freedom. These methods just happen to have an application to collision risk, but if you think about it, the short-term encounter model has nothing orbital about it.
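This purely mathematical core can be illustrated with a small Monte Carlo sketch (hypothetical names, no Orekit involved): the 2D short-term encounter probability is P(X² + Y² ≤ R²) for independent Gaussians X and Y, and in the central unit-variance case the closed form 1 − exp(−R²/2) is available as a cross-check.

```java
import java.util.Random;

// Illustrative Monte Carlo sketch of the underlying problem: the CDF of a
// sum of squares of two independent Gaussians, i.e. a generalized
// non-central chi-square variable with two degrees of freedom.
public class TwoDofChiSquareSketch {

    /** Monte Carlo estimate of P(X^2 + Y^2 <= radius^2) with
     *  X ~ N(mx, sx^2) and Y ~ N(my, sy^2), independent. */
    public static double estimate(double mx, double my, double sx, double sy,
                                  double radius, int samples, long seed) {
        Random rng = new Random(seed);
        int hits = 0;
        for (int i = 0; i < samples; i++) {
            double x = mx + sx * rng.nextGaussian();
            double y = my + sy * rng.nextGaussian();
            if (x * x + y * y <= radius * radius) {
                hits++;
            }
        }
        return (double) hits / samples;
    }

    public static void main(String[] args) {
        // Central, unit-variance case: closed form is 1 - exp(-r^2 / 2).
        System.out.println(estimate(0.0, 0.0, 1.0, 1.0, 1.0, 1_000_000, 42L));
        System.out.println(1.0 - Math.exp(-0.5));
    }
}
```

The production methods in the thread replace this brute-force estimate with series expansions or quadratures, but they all target this same integral.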

Finally, +1 for the new event, this should definitely be native to Orekit, without the need to define it manually.

Best,
Romain.

Hi Romain,

First of all, thank you for your detailed answer! I was indeed hesitating about Patera’s 2005 method, as there were already so many methods and I was (and still am) far from being knowledgeable in this field. This will be added for sure :+1:.

I also agree regarding the architecture. I don’t think it would take too much work to make it easily extensible in the future. I’m thinking of using a broader definition of the current ProbabilityOfCollisionMethod interface, with a single calculateProbabilityOfCollision method taking a CollisionDefinition interface as an argument. From there we could define multiple kinds of probability of collision computation methods for short/long term encounters, with their specific implementing CollisionDefinition classes.

Finally, concerning your last point, it is true that objectively the core of most of these methods is a solution to the same underlying mathematical problem. This will require a bit of work and will delay the addition of the collision package into Orekit, but it is the right thing to do. Also, there is no Field version of these methods for the moment, but that can be added too.

Best regards,
Vincent

Hi Vincent,

about all the methods for collision probabilities, there are a bunch of them for sure. By the way, the only “official” ones for CDMs are there (including those beyond the short-term encounter model):

You will see that ours from 2015 is not featured, partly, I think, because this list is biased towards US-based authors. For the first version of Orekit’s “collision package”, if you’re hesitating, I would suggest playing it small and only implementing a couple of them (why not one analytical and one numerical?).

As far as the architecture is concerned, as pointed out in my first post, I would really go generic and create an interface for collision risk metrics, not just collision probabilities. Actually, the Mahalanobis distance at TCA is not a probability; however, it is based on probabilistic standard deviations (see the nuance). By the way, there are other ways to represent errors than probabilistic distributions, especially for epistemic uncertainty. Moreover, note that not all metrics necessarily take uncertainty into account: for instance, the distance of closest approach (a.k.a. miss distance) is itself a metric, and is purely deterministic in the sense that it is only based on nominal orbital predictions.
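The miss distance illustrates this well: under rectilinear relative motion r(t) = r0 + t·v, it follows from plain vector algebra with no covariance at all. A minimal sketch (hypothetical names, illustrative only):

```java
// Illustrative sketch of the miss distance as a purely deterministic metric:
// for rectilinear relative motion r(t) = r0 + t * v, the distance of closest
// approach is obtained by minimizing |r0 + t * v| over t.
public class MissDistanceSketch {

    private static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    /** Distance of closest approach for relative position r0 (m) and
     *  relative velocity v (m/s). */
    public static double missDistance(double[] r0, double[] v) {
        double vv = dot(v, v);
        // Time of closest approach; degenerate case: no relative motion.
        double tca = vv == 0.0 ? 0.0 : -dot(r0, v) / vv;
        double dx = r0[0] + tca * v[0];
        double dy = r0[1] + tca * v[1];
        double dz = r0[2] + tca * v[2];
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    public static void main(String[] args) {
        // Secondary approaching along x with a constant 100 m offset along y.
        System.out.println(missDistance(new double[] {-1000.0, 100.0, 0.0},
                                        new double[] {7000.0, 0.0, 0.0}));
    }
}
```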

Finally, in addition to the Field version of all the developments, the code would benefit from constructors taking Orekit’s CDMs themselves (but not exclusively; it should remain possible to perform computations without the CCSDS machinery).

Best,
Romain.

Hi Romain,

I have also found this link and thought it was a shame that your method wasn’t listed…

Regarding the implementation, now that I have already developed these methods, I think I will add them all in the future merge request and fix them if needed.

My bad for my previous answer; I was also thinking of the metrics but didn’t mention it. After thinking it through, I added a CollisionMetric interface that is currently implemented by ProbabilityOfCollision. All these metrics live inside their own metrics package. In addition, I also added a shortencounter package which defines its own ShortEncounterDefinition (previously called CollisionDefinition) and a probability subpackage (with all the methods listed in my initial post). This way, it will be easy to add other models and metrics if needed. How do you feel about this?

UPDATE: After giving it some thought, I still agree with the concept of having a generalized interface for all the metrics, but I fail to see the practical use in the code. For example, someone who uses a method to compute a probability will not expect a generic CollisionRiskMetric but specifically a ProbabilityOfCollision. That’s why I would rather expect different packages for different models (short/long encounter, etc.), each with its own methods, abstract classes and interfaces, rather than a global, unified interface/abstract class using a CollisionRiskMetric that wouldn’t mean much anymore.

However, I would keep the metrics package, as a ProbabilityOfCollision is the same container whether it is a probability for a short or long term encounter… We can expect the same kind of behaviour for other metrics to be implemented later.

I don’t know if I was clear, so feel free to ask!
END UPDATE

I would also like to mention that, at first, I will not add the core of the computation to Hipparchus, in order to save some time. I will leave that part for later and open a dedicated thread about it.

Adding a constructor which uses the Orekit CDM is now on my TODO list :+1:. Don’t worry about the CCSDS machinery: the already existing constructor doesn’t need it at all.

Thank you for your insight !

Best,
Vincent

Hi Vincent,

It’s true that in your merge request there won’t be many more metrics than the short-term encounter ones (NB: in ShortEncounterDefinition you’re missing ‘Term’; I don’t know if that’s intentional), so it will look a bit unnecessary. However, in my opinion it’s better to make things modular and anticipate future developments. You don’t want to give methods names that do not make sense in the more general context of decision making and mitigation design. For example, I think there shouldn’t be a public method called computeProbability in ProbabilityOfCollision, but simply an evaluate or compute. Imagine someone creating an alert system for high-risk conjunctions: if one day they want to switch to metrics different from these probabilities, they’ll have to make low-level modifications to their code just because of a naming issue.
Anyhow, that’s me talking without having actually looked at your code, so I’ll try to find some time to analyze it.

About Hipparchus, I don’t know what @luc thinks. There’s already a bunch of probabilistic stuff there, so it would really make sense to me, but this means waiting for the next release, I guess.

Best,
Romain.

Hi Romain,

I think I may not have made myself clear, which led to your understandable point of view. In this case, ProbabilityOfCollision is a simple container and doesn’t have any methods to compute a probability of collision. It is itself the output of a ShortTermEncounterProbabilityMethod interface, which is implemented by the methods mentioned in my initial post.

Now, you could think that the issue stays the same, as someone who wants to compute something other than a probability would have to make low-level modifications to ShortTermEncounterProbabilityMethod, but that is not the case in the new architecture.
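This container/method separation could be sketched like so, using the names from this thread (illustrative only, not the final API; the constant-density approximation below is just a crude stand-in for a real method, valid only when the combined radius is small compared to the deviations):

```java
// Illustrative sketch: ProbabilityOfCollision as a plain value container,
// produced by implementations of a common method interface.
public class ProbabilityArchitectureSketch {

    /** Immutable result container: the value plus the producing method. */
    public static final class ProbabilityOfCollision {
        private final double value;
        private final String producingMethod;
        public ProbabilityOfCollision(double value, String producingMethod) {
            this.value = value;
            this.producingMethod = producingMethod;
        }
        public double getValue()           { return value; }
        public String getProducingMethod() { return producingMethod; }
    }

    /** Implemented by each computation method (Foster, Serra, Patera, ...);
     *  inputs are the secondary's coordinates and deviations in the
     *  collision plane plus the combined hard-body radius. */
    public interface ShortTermEncounterProbabilityMethod {
        ProbabilityOfCollision compute(double xm, double ym,
                                       double sigmaX, double sigmaY,
                                       double combinedRadius);
    }

    /** Crude constant-density approximation, for illustration only:
     *  Gaussian density at the secondary's position times the disc area. */
    public static final ShortTermEncounterProbabilityMethod CONSTANT_DENSITY =
        (xm, ym, sx, sy, r) -> {
            double nx = xm / sx;
            double ny = ym / sy;
            double density = Math.exp(-0.5 * (nx * nx + ny * ny))
                           / (2.0 * Math.PI * sx * sy);
            return new ProbabilityOfCollision(Math.PI * r * r * density,
                                              "constant-density approximation");
        };

    public static void main(String[] args) {
        ProbabilityOfCollision p =
            CONSTANT_DENSITY.compute(300.0, 400.0, 100.0, 200.0, 10.0);
        System.out.println(p.getProducingMethod() + ": " + p.getValue());
    }
}
```

Other models would then define their own method interfaces, producing either the same container or other metric types, without touching this one.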

I have made a small schematic, as the architecture in the closed merge request is now obsolete:

[image: ArchitectureSchematic]

In this new architecture, each model has its own package and can define subpackages according to the specific methods it defines (the only example so far is the probability subpackage of the ShortTermEncounter model package, but you get the point). In addition, thanks to your insight, I added a specific metrics package, in which there is only ProbabilityOfCollision for now.

Let’s say a user wants to implement their own kind of metric and the methods associated with it: they would either create a subpackage of an existing model, if they use the same starting hypotheses, or create a completely different model package. Meanwhile, they would also add the new metric to the metrics package.

I hope this new architecture meets your expectations, and I would like to hear your thoughts about it.

Best regards,
Vincent

Hi Vincent,

ok, this architecture looks good. It is indeed important to distinguish between the model, e.g. the short-term encounter, and the actual computation method, e.g. bivariate quadrature in polar coordinates as in Foster’s historical approach. Unfortunately, models and methods tend to be conflated in the literature, and the SANA registry does not help with that.

On another note, I keep forgetting to point out a couple of github links:

  • NASA shares some code (in Matlab) on collision risk here.
  • A Python version of our 2015’s method for 2D PoC is available here.

Best,
Romain.

Hi Romain,

thank you for your review and your links!

I’ll keep you updated if I make other significant changes.

Best regards,
Vincent

Hi there,

I was thinking: an important yet somewhat obscure part of the short-term encounter model is the time interval of interest. In theory it needs to be small enough that the relative motion is indeed nearly rectilinear. However, in practice, for ease of computation it is extended out to infinity so that the integral becomes bivariate instead of trivariate, the argument being that the added contribution is negligible. This should at least be documented in the javadoc, but maybe also reflected via a time-interval attribute? Actually, several authors have proposed formulas to evaluate the critical length of rectilinear validity. Coppola’s is relatively simple and used by NASA CARA, so it could be a good addition to Orekit.

Cheers,
Romain.

Hey Romain,

I will enhance the javadoc regarding the consequences of the nearly rectilinear encounter hypothesis, as you suggest, and look into Coppola’s formula to see how we could implement it.

Ideally, I would like to keep the current implementation of the several methods already there as-is, so this may lead to the creation of a new package dedicated to this kind of model. What do you think?

Best regards,
Vincent

Hi Vincent,

I was thinking more of having Coppola’s formula as a static method (or something of the sort) in the short-term encounter model. The rationale being: it is true that if the user has already decided to use this model, they don’t need to estimate the time interval during which the relative motion can be considered rectilinear. However, if they wish to assess whether or not it is reasonable to apply the model to a particular case, they could compute this quantity and, for example, compare it to the orbital periods of the two objects: if the interval length is not negligible compared to them, it is probably not a good idea to use the 2D collision probability. Does that make sense?
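The check itself would be trivial once the duration estimate is available; a minimal sketch (hypothetical names; the duration, e.g. from Coppola’s formula, is assumed to be computed elsewhere, and the interpretation of the ratio is deliberately left to the user):

```java
// Illustrative sketch of the validity check: compare an externally computed
// encounter-duration estimate to the shorter of the two orbital periods.
public class RectilinearValiditySketch {

    /** Ratio of the encounter duration to the shortest orbital period.
     *  A ratio far from zero suggests the rectilinear (and thus the whole
     *  short-term encounter) assumption may not hold; no threshold is
     *  imposed here. */
    public static double validityRatio(double encounterDurationSeconds,
                                       double period1Seconds,
                                       double period2Seconds) {
        return encounterDurationSeconds
             / Math.min(period1Seconds, period2Seconds);
    }

    public static void main(String[] args) {
        // A 10 s encounter against ~90 min LEO orbital periods.
        System.out.println(validityRatio(10.0, 5400.0, 5700.0));
    }
}
```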

Cheers,
Romain.

Alright, I did not understand it that way at first! In that case it should be easy to implement, and very informative to the user indeed. Could we specify in the javadoc an order of magnitude for the ratio between the computed time interval and the shorter of the two orbital periods, below which the assumption is reasonable? I would say around 5%, but you must have a more precise idea of said ratio?

Cheers,
Vincent

Hi Vincent

I would actually refrain from giving specific values. Coppola’s formula is just one among others, the rectilinear motion is just one assumption among those of the short-term encounter model, and there is no consensus in the literature as to when to use it or not.

Cheers,
Romain

Hi Romain,

Alright, in that case I will specify that it is up to the user to determine whether the assumption is valid, given the output of the method and the orbital periods of both objects.

Speaking of the method, I am currently implementing it inside the ShortTermEncounterDefinition class, as it seems better suited for the job (it already computes everything related to the conjunction). As it also serves as an input for the probability computation methods, I think it will make more sense for the user to get the duration before actually computing the probability.

As always, thank you for your insight!
Vincent
