Hello Orekit community,
I’ve been developing some Python scripts based on Orekit to perform an ODTS with a batch least-squares solution. The script is very similar to GNSSOrbitDetermination.java from the Orekit tutorials, except for some simplifications of the RINEX data and of the generated reference orbit (many propagation effects are not present).
The scenario I am working with can be summarised as follows:
- 1 day of RINEX data sampled every 30 s
- 15 stations providing Code and Phase observations (so far only Code observations are used)
- 1 orbit plus clock parameters to be estimated
- The clock of one station is not estimated, so all the clocks are referenced to this station
- Use of Iono-free measurements
- Troposphere corrected with the Saastamoinen model
- Clocks of all stations and Satellites are set to 0 at the estimation date
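For reference, the iono-free code combination I am using follows the standard first-order dual-frequency formula. A minimal sketch (the frequencies below are the GPS L1/L2 values as an example; adapt them to your constellation):

```python
# Ionosphere-free pseudorange combination (standard dual-frequency formula).
# Example frequencies: GPS L1/L2.
F1 = 1575.42e6  # L1 frequency [Hz]
F2 = 1227.60e6  # L2 frequency [Hz]

def iono_free(p1, p2, f1=F1, f2=F2):
    """First-order ionosphere-free combination of two code observations [m]."""
    g1, g2 = f1 ** 2, f2 ** 2
    return (g1 * p1 - g2 * p2) / (g1 - g2)
```

Since the first-order ionospheric delay scales as 1/f², this combination cancels it exactly while leaving the geometric range (and the clock terms) untouched.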
When I execute the code, the BLS seems to converge after 3 iterations and provides a solution for the estimated parameters. The problem is that the estimated orbit is around 700 m away from the true orbit, and the clocks are also off by quite a few orders of magnitude.
I’ve been performing some validations on some of the functions, and so far I know that:
- When propagating the initial orbit over 1 day, the error is reasonably small (1-2 m)
- The station coordinates in the ECI frame are a few cm away from the theoretical ones. I could determine that this is because the EOP used by Orekit are based on Bulletin A, whereas the simulated coordinates were based on the long-term predictions. The error observed in the propagator is in fact also due to this use of different EOP in the simulation and in Orekit, so the propagator is actually better than 1-2 m in accuracy;
Seeing that the propagator is not responsible for this bad behaviour, and that the input station coordinates are quite good, I went on to check the theoretical measurement construction.
I have been able to obtain the residuals at the first iteration, and what is interesting is that for the first observations the residuals are relatively small (around 1 m), but after a while they grow to values around 1000 m.
I took a random epoch with a residual of around 1500 m, and what I could see is that this value corresponds to the clock offset difference between the station and the satellite multiplied by the speed of light.
This means that when the theoretical measurement is constructed, the clock offsets of the satellite and the station are 0, and their difference gets absorbed by the residual.
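As a back-of-the-envelope check (my own illustrative numbers, not values from the actual run), an unmodelled differential clock offset maps directly into the code residual:

```python
# An unmodelled clock offset difference between station and satellite shows
# up in the code residual as c * (dt_station - dt_satellite).
C = 299792458.0  # speed of light [m/s]

def clock_residual(dt_station, dt_satellite):
    """Range residual [m] produced by unmodelled clock offsets [s]."""
    return C * (dt_station - dt_satellite)

# A 5 microsecond differential offset already produces ~1.5 km:
print(clock_residual(5e-6, 0.0))  # ~1499 m
```

So a residual of ~1500 m is consistent with a clock offset difference of only a few microseconds, which is a perfectly plausible amount of drift over the day.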
In the end it looks like Orekit estimates the satellite and station clock offsets at the orbit determination date and applies this same value at all epochs. In my head, either:
- you estimate the clock offset at each epoch for which you have a measurement, or
- when applying the clock offset at a certain epoch, you take into account its time derivative to correct the clock.
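To illustrate the second option, here is a minimal sketch of a two-state linear clock model (offset plus drift); the coefficients are made up for illustration:

```python
# Sketch of a linear clock model: offset a0 at reference epoch t0 plus a
# drift a1, evaluated at the measurement epoch t. Coefficients are made up.
def clock_offset(t, t0=0.0, a0=1.0e-7, a1=1.0e-10):
    """Clock offset [s] at time t [s], from the linear model a0 + a1*(t - t0)."""
    return a0 + a1 * (t - t0)

C = 299792458.0  # speed of light [m/s]
# Over one day, a drift of 1e-10 s/s alone accumulates 1e-10 * 86400 = 8.64e-6 s,
# i.e. about 2.6 km of range error if the clock is modelled as a constant offset:
drift_error_m = C * 1.0e-10 * 86400.0
```

If only the constant term a0 (at the estimation date) is applied to every epoch, the a1 term ends up in the residuals, which would match the growth pattern I am seeing.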
Sorry for the long mail, but I have been working on this for quite some time and I cannot seem to find where the issue is.
Additionally, I am quite surprised that my simulated scenario does not work, since the Orekit orbit determination tutorial seems to work properly and with a good level of accuracy. However, it only observes 5 hours of data, which makes me wonder whether that dataset is simply too short for the clock drift effects to show up in the OD.
Maybe I am missing something so any help will be appreciated.
Thanks in advance