Least Squares Estimator vs Propagator Converter for Orbit Determination

I am trying to understand the difference between a propagator converter and an estimator. When I initially tried OD, I used the propagator converter and got the right solution.

My test was to propagate a TLE and feed the propagated states into the propagator converter, to see whether I could recover the original TLE. This worked great.

But now that I also need the associated covariance, I realize that the Batch Least Squares Estimator is the right way to go. It also gives me the right solution, but it is more computationally expensive (about 60 milliseconds per call vs 10 milliseconds on average with the propagator converter).
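A toy example of why a batch least-squares fit can report a covariance essentially for free: the Jacobian assembled during the fit is exactly what the formal covariance needs. This is a generic illustration with a linear model, not Orekit code; all names here are made up for the sketch.

```python
import numpy as np

# Toy batch least-squares fit: estimate a 2-parameter linear model from
# noisy observations and recover the formal covariance sigma^2 * (J^T J)^-1.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
true_params = np.array([2.0, -0.5])           # intercept, slope
sigma = 0.1                                   # measurement noise std-dev
y = true_params[0] + true_params[1] * t + rng.normal(0.0, sigma, t.size)

J = np.column_stack([np.ones_like(t), t])     # Jacobian of the (linear) model
p_hat, *_ = np.linalg.lstsq(J, y, rcond=None) # solve the normal equations
residuals = y - J @ p_hat
dof = t.size - p_hat.size
sigma2_hat = residuals @ residuals / dof      # estimated noise variance
cov = sigma2_hat * np.linalg.inv(J.T @ J)     # formal covariance of p_hat

print(p_hat)                 # close to [2.0, -0.5]
print(np.sqrt(np.diag(cov))) # 1-sigma uncertainties on the fitted parameters
```

A converter that only minimizes residuals can stop after `p_hat`; an estimator keeps `J` around and hands you `cov` as well, which is the extra output you are after.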

My question is essentially why the two methods generate the same result, and when I should use each one.
Any help would be greatly appreciated.

Hi @Tomi,

“Converters” are older implementations that were kept around.
JacobianPropagatorConverter should give the same results as batch least-squares OD. Maybe the batch least-squares OD should have been plugged in under the hood, but that was never done.
FiniteDifferencePropagatorConverter uses finite differences instead of automatic differentiation to get the Jacobian.
I tend to prefer the batch least-squares OD to perform conversions, since you get more control over the algorithm: you can set up an observer to follow the convergence, and you get richer outputs like the covariance matrix.
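On the finite-difference point above: the idea is to approximate each Jacobian column by re-evaluating the model with one parameter perturbed, instead of differentiating analytically (or automatically). A tiny generic sketch, again not Orekit code, with an exponential-decay "propagator" standing in for the real thing:

```python
import numpy as np

def model(params, t):
    # Toy stand-in for a propagator: amplitude * exp(-rate * t).
    a, k = params
    return a * np.exp(-k * t)

def jac_analytic(params, t):
    # Exact partial derivatives with respect to (a, k).
    a, k = params
    e = np.exp(-k * t)
    return np.column_stack([e, -a * t * e])

def jac_finite_diff(params, t, h=1e-6):
    # Forward differences: one extra model evaluation per free parameter.
    base = model(params, t)
    cols = []
    for i in range(len(params)):
        p = np.array(params, dtype=float)
        p[i] += h
        cols.append((model(p, t) - base) / h)
    return np.column_stack(cols)

t = np.linspace(0.0, 5.0, 20)
params = [1.5, 0.3]
err = np.max(np.abs(jac_analytic(params, t) - jac_finite_diff(params, t)))
print(err)  # small: the finite-difference Jacobian tracks the exact one
```

The trade-off is the usual one: finite differences only need the model itself, at the cost of extra evaluations and step-size sensitivity, while automatic differentiation gives the Jacobian to machine precision.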

I’m curious to know where the difference comes from!
Maybe we overlooked something in the BLS that is dragging performance down, or maybe it is just that the algorithm is more complex (more checks, more branches, etc.).
It would be a good idea to run a performance comparison with a profiler.

Cheers,
Maxime

Thanks a lot for the explanation. I figured as much, but wanted to be sure.
