I’ve been having a play with the SLR data example in Python and had a few questions that I was hoping you could help me with.
Firstly, I have been having some problems determining the orbit reliably from the SLR data. I am working with normal point data and have about 50 data points in a single pass that I am trying to fit an orbit to. I get good results when I use two passes that occur within ~6 hours of each other and fit the orbit to that combined data. The issue is that when I use a single pass, in this case spanning ~10% of an orbit, the GaussNewtonOptimizer fails to converge. This partly makes sense to me: the sample points cover very little of the orbit, so I’m not entirely sure I should expect convergence (it seems like many different orbits could fit those data). What do you think? My setup in terms of perturbations is the same as in the SLR Python example.
The error message that I receive from Orekit is:
org.hipparchus.exception.MathIllegalStateException: unable to solve: singular problem
Secondly, and related to the first question: if one cannot determine the orbit from data spanning ~10% of the orbit, is there a straightforward way for me to fit an orbit using the available CPF data, as opposed to doing the determination myself numerically? Using the orbit fitted by the ILRS seems like a smart move rather than doing the determination myself, but I’m aware that their output is position/velocity pairs rather than orbital elements.
I think that Orekit should converge even with only ~10% of an orbit.
Yes, it is possible to use the positions contained in the CPF as observations in the orbit determination process. We have a Java tutorial showing how to do that. It is for a Kalman filter, but it shows how to initialize measurements from a CPF.
Even if the estimated orbit is expressed in Cartesian elements, you can easily convert it to Keplerian elements using Orekit: OrbitType.KEPLERIAN.convertType(cartesianOrbit).
The choice between using station data or CPF data depends on what you want to do at the end. If you want to generate an ephemeris based on observations, you can’t use the CPF, because it contains predicted data computed by an Analysis Center from station-based orbit determination.
If you just want an orbit at a given epoch, you can also interpolate within a CPF to access the orbital elements at different epochs (i.e., without estimating anything).
Regarding your problem, I have some questions:
Could you give me the list of all estimated parameters (i.e., orbital parameters, propagation parameters and measurement parameters like station biases)?
Could you give me:
The epoch of the initial guess of the orbit determination
The epoch of the first measurement
The epoch of the last measurement
Could you give me the name of the stations used for the orbit determination case generating the error?
If your initial guess is far from the expected orbit, did you try using a LevenbergMarquardtOptimizer instead of the GaussNewtonOptimizer?
Thanks for such a speedy and detailed answer, greatly appreciated.
It’s good to hear that Orekit should converge with such a sparse set of data points. Working backward through your questions 1-4:
I tried your suggestion of switching to the LevenbergMarquardtOptimizer instead of the GaussNewtonOptimizer. I still don’t get convergence, but the error message has changed: instead of a singular problem, I now exceed the maximum of 50 iterations. I also tried raising the maximum to 300 iterations, but to no avail.
The station is the Graz SLR station, with station ID 78393402.
The first measurement is at 2013-07-31 20:41:28, and the last measurement is at 2013-07-31 20:50:35. As an initial guess, I use a TLE taken on 2013-07-31 at ~19:00:00.
I am not 100% sure which parameters are being optimized internally, but I am providing only the ranges (and the timestamps) from the SLR station. From the optimization, I receive an observed value and an estimated value, which I subtract from one another to calculate the SLR range residuals. Is this helpful, or is there more information I can provide?
Something else I just remembered: I have not configured the weight or sigma when calling the Range() function to provide the SLR measurements, and have set them both equal to 1.0. Could this be a cause of potential fitting issues?
What value do you use for the initial step bound factor? The value recommended by Hipparchus is 100, but experience shows that a much larger value, such as 1.0e6, is often needed.
Usually, singular problems occur when a parameter is estimated but there are no observations with which to estimate it. For instance, you estimate the range bias of station Xxx, but you have no measurements from station Xxx. The problem is then not observable, and the “singular problem” error message occurs.
Judging from the new behavior after switching to Levenberg-Marquardt, I don’t think you are in this case. However, I think it is very important to verify.
Those values are the classical ones. Maybe you can try modifying them to see what happens.