Hi all,
In order to see if Orekit is able to perform orbit determination on a large set of measurements (i.e. millions of measurements), I want to run performance tests on Orekit's orbit determination features. To do that, I built a benchmark in my own fork of the Orekit tutorials project. It is in the od-performance-testing branch, inside the estimation.performance package.
The benchmark is able to:
- Use user-defined measurements or generate measurements. The measurement generation has no limits: it generates measurements from a start date to an end date with a time step between measurements, all user-defined. Therefore, a user can easily generate a large set of measurements (or a small one).
- Take perturbation effects into account in the measurement generation (troposphere, ionosphere, satellite clock offset, station clock offset, satellite antenna offset, measurement biases and random noise).
- Use all Orekit force models for dynamics modelling.
- Use all orbit determination features of Orekit (i.e. selecting parameters to estimate, statistical results, etc.).
- Perform orbit determination using Orekit's Kalman filter or Orekit's batch least squares algorithm.
The benchmark is just 3 classes: one for measurement generation, one for orbit determination, and one to drive the other two. The user only needs to use the third one.
To initialize the 3 classes of the benchmark, I wrote 3 input files in YAML format. Currently, these three files are adapted to my test case (i.e. 40 ground stations, noisy measurements, etc.). However, feel free to change the values and play with the tool. An important point is that you can only change the values in the input files, not the names of the keys (this is a YAML drawback…).
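To give an idea of what such an input file looks like, here is a hypothetical sketch of a measurement-generation configuration; the key names below are invented for illustration only, and the actual keys are those fixed in the files shipped with the branch:

```yaml
# Hypothetical example only: the real key names are defined by the
# benchmark's input files and must not be changed, only their values.
startDate: "2020-01-01T00:00:00.000"   # beginning of the generation span
endDate:   "2020-01-08T00:00:00.000"   # end of the generation span
step:      60.0                        # time step between measurements [s]
troposphere: true                      # enable tropospheric delay
ionosphere:  true                      # enable ionospheric delay
noiseSigma:  1.0                       # one-sigma random noise [m]
```

Widening the date span or shrinking the step is what lets you scale the measurement set from a handful up to millions.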
Be careful: if you generate a large set of measurements, both measurement generation and orbit determination will take a long time.
I hope to get some results about Orekit OD performance next week.
Regards,
Bryan