Orekit performance improvement

Dear all,

First of all, I would like to thank you for your amazing work on the development of Orekit and its continuous improvement!

My team is investigating the possibility of using the Orekit library for the implementation of trajectory optimization (TO) problems, which we plan to solve via heuristic optimization.

We recently conducted some performance tests on pre-defined orbit propagation scenarios, to get a rough idea of how long it could take to solve hypothetical TO problems, where orbit propagation tasks may be run up to thousands of times until convergence.

According to the preliminary results obtained so far (using openjdk-19), we fear that solving typical TO problems of interest might take too long for us (several hours, up to half a day in worst-case scenarios).

Therefore, we were wondering if there are any alternatives to make the Orekit code run faster.

Of course, we are aware that there are several options that can already be addressed to speed things up (besides the correct use/implementation of the Orekit code: the specific propagator and integrator used, their settings, the force models set in the propagator, etc.). We are also working on these to improve performance.
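As an illustration of the kind of tuning we mean, here is a minimal sketch of how the variable-step integrator behind a numerical propagator can be configured in Orekit; the position tolerance and step bounds below are illustrative assumptions, not recommendations:

```java
// Sketch (Orekit 12 / Hipparchus API): tuning the adaptive-step integrator
// that drives a NumericalPropagator. dP, minStep and maxStep are
// illustrative values only.
import org.hipparchus.ode.nonstiff.DormandPrince853Integrator;
import org.orekit.orbits.Orbit;
import org.orekit.orbits.OrbitType;
import org.orekit.propagation.numerical.NumericalPropagator;

public class PropagatorTuning {
    public static NumericalPropagator buildPropagator(final Orbit initialOrbit) {
        final double dP      = 10.0;   // position tolerance, in meters
        final double minStep = 0.001;  // seconds
        final double maxStep = 300.0;  // seconds
        // derive absolute/relative tolerances consistent with dP
        final double[][] tol =
                NumericalPropagator.tolerances(dP, initialOrbit, OrbitType.CARTESIAN);
        final DormandPrince853Integrator integrator =
                new DormandPrince853Integrator(minStep, maxStep, tol[0], tol[1]);
        final NumericalPropagator propagator = new NumericalPropagator(integrator);
        propagator.setOrbitType(OrbitType.CARTESIAN);
        // force models, attitude provider, etc. would be added here
        return propagator;
    }
}
```

Loosening dP or widening the step bounds trades accuracy for speed, which is often an acceptable trade-off inside a heuristic optimization loop.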

However, we were wondering if there is something more “drastic” we could consider to achieve a significant performance improvement.
We recently came across the GraalVM JVM (GraalVM - Wikipedia) and, according to the information available online, it “promises” to bring the performance of Java code to levels comparable to compiled C++ code.

Therefore the question is: could this be a viable solution? Could it work with Orekit, or are there any caveats that would prevent it from being a practicable solution?

Alternatively, do you have any suggestions to improve Orekit (mainly orbit propagator) performance in terms of speed (computational time)?

Many thanks!


Hi there,

Your question is quite generic. What kind of propagation fidelity do you want? Low, medium, high? Do you include event detection?

On what version of Orekit have you based your first tests?

Do your heuristics use derivatives (in other words, do you need the State Transition Matrix)? On that front, for numerical propagation the upcoming version 12.1 is faster than 12.0, which is itself faster than 11.X. I still have a few ideas to go even further, but it won't change the order of magnitude.
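If you do need the State Transition Matrix, letting the propagator compute it analytically alongside the orbit is usually much faster than finite differencing the whole propagation. A minimal sketch, assuming the Orekit 11+ MatricesHarvester API (the name "stm" and the signature details below should be checked against the version you use):

```java
// Sketch: requesting the State Transition Matrix from a numerical
// propagator. Passing null for the initial STM means identity, and null
// for the Jacobian columns means no parameter partials are requested.
import org.hipparchus.linear.RealMatrix;
import org.orekit.propagation.MatricesHarvester;
import org.orekit.propagation.SpacecraftState;
import org.orekit.propagation.numerical.NumericalPropagator;
import org.orekit.time.AbsoluteDate;

public class StmExample {
    public static RealMatrix propagateWithStm(final NumericalPropagator propagator,
                                              final AbsoluteDate target) {
        // must be called before propagation starts
        final MatricesHarvester harvester =
                propagator.setupMatricesComputation("stm", null, null);
        final SpacecraftState finalState = propagator.propagate(target);
        return harvester.getStateTransitionMatrix(finalState);
    }
}
```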

I’ve not dug too much into it yet, but I think we have some margin to speed up the DSST propagator, which is the semi-analytical model available in Orekit.
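For reference, setting up a DSST propagator looks roughly like the sketch below; the gravity field degree/order and the fixed half-day step are illustrative assumptions, and a real configuration would add more force model contributions:

```java
// Sketch: a DSST propagator in MEAN mode with a zonal-only gravity
// contribution. Mean elements evolve slowly, so very large integration
// steps become possible, which is where the speed-up comes from.
import org.hipparchus.ode.nonstiff.ClassicalRungeKuttaIntegrator;
import org.orekit.forces.gravity.potential.GravityFieldFactory;
import org.orekit.forces.gravity.potential.UnnormalizedSphericalHarmonicsProvider;
import org.orekit.propagation.PropagationType;
import org.orekit.propagation.semianalytical.dsst.DSSTPropagator;
import org.orekit.propagation.semianalytical.dsst.forces.DSSTZonal;

public class DsstExample {
    public static DSSTPropagator buildDsst() {
        final UnnormalizedSphericalHarmonicsProvider gravity =
                GravityFieldFactory.getUnnormalizedProvider(6, 0);
        // 43200 s = half a day: an illustrative fixed step for mean elements
        final DSSTPropagator propagator =
                new DSSTPropagator(new ClassicalRungeKuttaIntegrator(43200.0),
                                   PropagationType.MEAN);
        propagator.addForceModel(new DSSTZonal(gravity));
        return propagator;
    }
}
```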


Using GraalVM with Orekit works well; I use it myself (though not for performance reasons at all). You need a reflect-config.json and a resource-config.json file properly configured for this to work.
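For the record, these files follow GraalVM's standard reachability-metadata format. The entries below are only illustrative sketches: the exact classes and resource patterns you must register depend on which parts of Orekit your application touches.

```json
// reflect-config.json (illustrative entry)
[
  {
    "name": "org.orekit.errors.OrekitException",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```

```json
// resource-config.json (illustrative pattern for bundled resources,
// e.g. localization bundles)
{
  "resources": {
    "includes": [
      { "pattern": "assets/org/orekit/localization/.*" }
    ]
  }
}
```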

This is, however, only part of the solution. Java is already able to achieve levels of performance comparable to any other language (C++, Fortran, you name it). The problem is not the language, at least not since about Java 6, which was released more than 10 years ago. I remember a presentation I gave at several conferences, using linear algebra and QR decomposition as an example. At that time, I was even able to be very slightly (5%) faster than an optimized Fortran code from LAPACK that used ATLAS as the low-level package.

In this presentation, I explained that there were a lot of different factors that had at least the same influence as the language or the compiler, and in some cases much more. The most prominent effect was in fact choosing the proper algorithm and using it correctly. Romain suggests using DSST, for example; this is good advice. Another point I have often seen is people performing time loops by themselves in propagation rather than using step handlers: the slow-down effect of explicit loops can be more than one order of magnitude (I remember one case, when Pascal and I did a study for an agency years ago, where the difference between an explicit time loop and a step handler was 50-fold).
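To make the step-handler point concrete, here is a sketch of the two approaches side by side, assuming the Orekit 11+ multiplexer API; the 60 s sampling step is an illustrative choice:

```java
// Sketch: sampling a propagation every 60 s. The explicit loop restarts
// the propagator for every sample; the step handler lets one single
// propagation produce all samples on the fly.
import java.util.ArrayList;
import java.util.List;
import org.orekit.propagation.Propagator;
import org.orekit.propagation.SpacecraftState;
import org.orekit.time.AbsoluteDate;

public class SamplingExample {

    // slow: one full propagate() call per sample point
    public static List<SpacecraftState> explicitLoop(final Propagator propagator,
                                                     final AbsoluteDate start,
                                                     final AbsoluteDate end) {
        final List<SpacecraftState> states = new ArrayList<>();
        for (AbsoluteDate t = start; t.compareTo(end) <= 0; t = t.shiftedBy(60.0)) {
            states.add(propagator.propagate(t));
        }
        return states;
    }

    // fast: a single propagation, sampled by a fixed-step handler
    public static List<SpacecraftState> withStepHandler(final Propagator propagator,
                                                        final AbsoluteDate start,
                                                        final AbsoluteDate end) {
        final List<SpacecraftState> states = new ArrayList<>();
        propagator.getMultiplexer().add(60.0, states::add);
        propagator.propagate(start, end);
        return states;
    }
}
```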

From my experience, bottlenecks are always counter-intuitive. Even knowing that, I still get caught all the time, saying to myself: oh, I should optimize this part, it will take too long; only to find in the end that I optimized something that was computed in 2 milliseconds and completely overlooked something that needed 2 minutes to complete. So I would suggest doing some real benchmarking to see where time is really spent, and doing it with proper profilers. Once done, look at the results and find where the most prominent bottleneck really is. Is it inside a low-level Orekit algorithm (frame transforms, gravity field, derivatives, attitude…)? If so, is it because the algorithm is inherently slow, or because it is called too many times and not in the most efficient way? Is it because of memory contention or thread starvation? Is it because of input/output and disk access? Is it because some computations are done over and over again without caching results? Is it because tuning parameters like accuracy, step sizes or convergence thresholds were not chosen appropriately?
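Even before reaching for a full profiler, a coarse wall-clock breakdown of the main phases often reveals where the time really goes. A self-contained sketch (the two dummy workloads stand in for, say, force model evaluation and data loading):

```java
// Sketch: coarse per-phase timing to locate a bottleneck before profiling.
// phaseA and phaseB are placeholder workloads, not Orekit code.
public class CoarseTiming {

    static double phaseA() {            // stands in for a suspected hot spot
        double s = 0;
        for (int i = 1; i < 1_000; i++) { s += 1.0 / i; }
        return s;
    }

    static double phaseB() {            // stands in for an overlooked one
        double s = 0;
        for (int i = 1; i < 1_000_000; i++) { s += Math.sqrt(i); }
        return s;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        phaseA();
        long t1 = System.nanoTime();
        phaseB();
        long t2 = System.nanoTime();
        System.out.printf("phase A: %d us%n", (t1 - t0) / 1_000);
        System.out.printf("phase B: %d us%n", (t2 - t1) / 1_000);
    }
}
```

Once the dominant phase is known, a real profiler (e.g. one based on Java Flight Recorder) can drill down into it.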

We have tuned and improved many algorithms within the library over the years; there are no “drastic” improvements left that could be achieved just by snapping our fingers or saying: oh, it's because it is Java, so just use something different and we will get a tremendous acceleration instantly, job done.


Hi @nix87,

Could you share with us some of these scenarios and the performance you get?
There are some caveats when propagating with lots (say thousands) of impulsive or continuous maneuvers.
Maybe we can help you improve these scenarios before you start implementing trajectory optimization.