# Orbit Determination and Time Synchronisation for a Constellation

Hello Orekit community,

I’ve been developing some scripts in Python based on Orekit to perform an ODTS based on a Least Squares Batch solution. The script is very similar to GNSSOrbitDetermination.java from the Orekit tutorials, except for some simplifications of the Rinex data and the reference generated orbit (many propagation effects are not present).

The scenario I am working with can be summarised as follows:

• 1 day of Rinex data sampled every 30s
• 15 stations providing Code and Phase observations (so far only Code observations are used)
• 1 Orbit and Clock parameters to be estimated
• The clock of one station is not estimated so all the clocks are referenced to this station
• Use of Iono-free measurements
• Troposphere corrected with the Saastamoinen model
• Clocks of all stations and Satellites are set to 0 at the estimation date
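Since the scenario relies on iono-free measurements, here is a minimal pure-Python sketch of the first-order ionosphere-free code combination (standard textbook formula, not an Orekit call; GPS L1/L2 frequencies and the pseudorange values are illustrative assumptions):

```python
# Illustrative sketch: ionosphere-free combination of dual-frequency
# pseudoranges. Frequencies are GPS L1/L2; values are made up.
F_L1 = 1575.42e6  # Hz
F_L2 = 1227.60e6  # Hz

def iono_free(p1: float, p2: float) -> float:
    """First-order ionosphere-free combination of two pseudoranges [m]."""
    g = (F_L1 / F_L2) ** 2  # squared frequency ratio, ~1.6469
    return (g * p1 - p2) / (g - 1.0)

# The combination cancels the first-order ionospheric delay, which
# scales as 1/f^2: if p = rho + I/f^2 on each frequency, the result
# is the geometric range rho.
rho, iono = 22_000_000.0, 5.0e19   # range [m], iono term [m*Hz^2]
p1 = rho + iono / F_L1**2
p2 = rho + iono / F_L2**2
print(iono_free(p1, p2) - rho)     # ~0, cancels to numerical precision
```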

When I execute the code, the BLS seems to converge after 3 iterations, providing a solution for the estimated parameters. The problem is that the estimated orbit is around 700 m away from the true orbit, and the clocks seem to be off by quite a few orders of magnitude as well.

I’ve been performing some validations on some of the functions, and so far I know that:

• When propagating the initial orbit up to 1 day, the error is reasonably small (1-2 meters)
• The station coordinates in the ECI frame seem to be a few cm away from the theoretical ones. I could determine that this is because the EOP parameters used by Orekit were based on Bulletin A, whereas the simulated ones were based on the long-term predictions. The error observed in the propagator is actually also due to the use of different EOP in the simulation and by Orekit, so the propagator is in fact better in accuracy than 1-2 m;

Seeing that the propagator is not responsible for this bad behaviour, and also that the input station coordinates are quite good, I have gone on to check the theoretical measurement construction.

I have been able to obtain the residuals at the first iteration, and what is interesting is that for the first observation data the residuals are relatively small (around 1 m), but after a while they grow to values around 1000 m.

I have taken a random epoch with a residual of around 1500 m, and what I could see is that it corresponds to the clock offset difference between the station and the satellite multiplied by the speed of light.

Hence, when constructing the theoretical measurement, the clock offsets of the satellite and the station are 0, and this difference gets absorbed by the residual.

In the end it seems like Orekit estimates the satellite and station clock offsets at the Orbit Determination date, and applies this same value for all epochs. In my head, either:

• you estimate the clock offset at each epoch for which you have a measurement, or
• when applying the clock offset at a certain epoch, you take into account its derivative with respect to time to correct the clock.
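To see how residuals of this magnitude can arise from an unmodeled clock drift, here is a rough back-of-the-envelope sketch (the drift rate and time span are assumptions for illustration, not numbers from my data):

```python
# If a clock drifts at rate d (s/s) and a single offset estimated at
# t0 is applied to every epoch, the unmodeled part of the pseudorange
# residual grows roughly as: residual(t) ~ c * d * (t - t0)
C = 299_792_458.0  # speed of light, m/s

def residual_growth(drift_s_per_s: float, dt_seconds: float) -> float:
    """Unmodeled range residual after dt seconds of clock drift [m]."""
    return C * drift_s_per_s * dt_seconds

# A modest 1e-10 s/s drift left unmodeled for 12 hours already gives:
print(residual_growth(1e-10, 12 * 3600.0))  # ~1295 m
```

This is the right order of magnitude to explain residuals of ~1000-1500 m at the far end of a 1-day arc.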

Sorry for the long mail, but I have been working on this for quite some time and I cannot seem to find where the issue is.

Additionally, I am quite surprised that my simulated scenario does not work, since the Orekit tutorial on orbit determination seems to work properly and with a good level of accuracy. However, only 5 hours of data are observed there, which makes me wonder whether that dataset is so short that it does not let you observe the clock drift effects on the OD.

Maybe I am missing something so any help will be appreciated.

Regards
Jordi

Hi @JordiS

You’re perfectly right! Orekit estimates the satellite and station clock offsets at the Orbit Determination date and applies this same value for all epochs. However, that’s not physically correct.

The short observation arc in the tutorial does not allow one to observe the clock drift.

However, as you said, the station clock offsets are not constant over the orbit determination arc. To be more precise, they are close to stochastic parameters. In other words, there is no physical/mathematical reason to have the same clock offset between two measurements.

One solution is, as you also said, to estimate one clock offset per measurement. The main issue is that you will have a lot of parameters to estimate, so the computation time will be heavily impacted.

A second solution is to have estimated parameters with a time validity (i.e. something close to what is done for TimeSpanDragForce.java or TimeSpanTroposphericModel.java, but more generic, using the ParameterDriver class). For instance, one clock offset estimated every 30 minutes per station. This is a functionality that we really want in Orekit. Unfortunately, we haven’t had time to implement it yet.
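To illustrate the idea (this generic facility does not exist in Orekit yet, as noted above, so the class and method names below are purely hypothetical), a piecewise-constant clock model with one offset per 30-minute span could look like:

```python
# Hypothetical sketch of a time-span clock model: one estimated offset
# per fixed-length span, in the spirit of TimeSpanDragForce.
class TimeSpanClock:
    def __init__(self, t0: float, span: float, offsets: list):
        self.t0 = t0          # start of the first span [s]
        self.span = span      # span length [s], e.g. 1800 for 30 min
        self.offsets = offsets  # one clock offset [s] per span

    def offset_at(self, t: float) -> float:
        """Return the offset of the span containing t (clamped at ends)."""
        i = int((t - self.t0) // self.span)
        i = max(0, min(i, len(self.offsets) - 1))
        return self.offsets[i]

clock = TimeSpanClock(t0=0.0, span=1800.0,
                      offsets=[1.0e-6, 1.2e-6, 1.5e-6])
print(clock.offset_at(2500.0))  # second 30-min span -> 1.2e-06
```

In a real implementation each offset would be backed by its own ParameterDriver so the batch least squares can estimate all of them.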

A third solution is to use the clock offsets provided by an IGS analysis center. For its version 11.0, Orekit will be able to read this type of file (currently available in the development version).

Best regards,
Bryan

Hello @bcazabonne,

Thanks for your reply. OK, so my suspicions are confirmed; at least now I know where the issue might come from.

Another workaround I was thinking of is to use the clock drift driver in the Range class to modify the theoretical measurement with this parameter. I know it was not its intended purpose, but it might solve my problem (I also have to try a shorter span and see if the metrics improve).

Additionally, in my script I’ve also implemented a Kalman filter, to see if things would improve with an epoch-by-epoch estimator, since the measurement parameters are estimated at each time step. Nevertheless, I still haven’t been able to converge to a proper solution (errors seem to be around several hundreds of meters), so there must be something wrong here as well…

Ultimately, the idea is to use the algorithms to also incorporate carrier phase measurements and estimate the ambiguity.

By the way, regarding this topic, from the classes I’ve seen it seems like Orekit can only fix ambiguities to their physical values when using uncombined and undifferenced measurements, is that correct? In other words, if I am estimating the iono-free ambiguity, is there a way for me to recover the integer ambiguity of the uncombined original measurement using Orekit classes?

Jordi

I’m not sure it will work, because the clock drift driver is not used for the theoretical calculation of the range measurement in the `Range` class. This parameter has been added for range rate measurements.
Therefore, if you want to use it for a range measurement, I think you will have to implement your own `Range` class (e.g. by extending the existing class and overriding the `theoreticalEvaluation` method).
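To make the intent concrete, here is a toy pure-Python sketch of what the corrected theoretical range could compute if the clocks were modeled linearly (this is not the Orekit API; the function and its signature are hypothetical, and a real solution would override `theoreticalEvaluation` in a `Range` subclass):

```python
# Toy illustration (not Orekit): pseudorange with linear clock models
# offset + drift * dt evaluated dt seconds after the estimation epoch t0.
C = 299_792_458.0  # speed of light, m/s

def corrected_range(geometric_range: float,
                    station_offset: float, station_drift: float,
                    sat_offset: float, sat_drift: float,
                    dt: float) -> float:
    """Theoretical pseudorange [m] with time-propagated clock offsets."""
    d_station = station_offset + station_drift * dt  # station clock at t
    d_sat = sat_offset + sat_drift * dt              # satellite clock at t
    return geometric_range + C * (d_station - d_sat)

# Example: 1 us station offset drifting at 1e-10 s/s, 2 us satellite
# offset, evaluated one hour after the estimation date.
print(corrected_range(22_000_000.0, 1e-6, 1e-10, 2e-6, 0.0, 3600.0))
```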

Is the evolution of the clock parameters stochastic? If not, maybe you can try to increase the noise values for these parameters.

First, maybe you can look at this post on the forum. However, I don’t think that it is currently possible in Orekit to recover an ambiguity from a “combined” ambiguity. To be more precise, I was not aware that it is possible. Do you have a paper or a document presenting a method?

Bryan

I second Bryan’s answer. I also do not know of methods for restoring the ambiguities of the original measurements from the ambiguities found for iono-free measurements. By using different combinations of measurements we get rid of various noises, which is what makes it possible to resolve the ambiguity. In addition, as a rule, we combine measurements at different frequencies, whose ambiguities are not the same, and obtain some general ambiguity that is not simply the sum of the ambiguities on the individual frequencies. I apologize if my English is not quite correct.
• P. J. G. Teunissen, “Theory of carrier phase ambiguity resolution”, Wuhan University Journal of Natural Sciences, June 2003;
• Yang Yang, Xiaokui Yue and Jianping Yuan, “GPS Based Reduced-Dynamic Orbit Determination for Low Earth Orbiters with Ambiguity Fixing”, International Journal of Aerospace Engineering, Volume 2015.

Hello @bcazabonne and @Suslov46,

OK, I can try to extend the Range class by overriding the theoreticalEvaluation method. @petrus.hyvonen, since I am working in Python, do you have some examples of how I can perform such a task?

Yes, the clock parameters have a deterministic part (a linear model, if you want) and a stochastic component.

The idea I had in mind is based on the following dissertation:
“Integer Ambiguity Resolution for Multi-GNSS and Multi-Signal Raw Phase Observations” by Florian Reckweg.

Basically, it is true that if you use the iono-free observations and set the ambiguity parameter to be estimated, you will be unable to fix it to integer values later; moreover, the resulting combination is no longer an integer. The thing is that you can use the Melbourne-Wübbena combination and double differencing to estimate and fix to integer values the individual ambiguities on each frequency (there are some relationships that can be exploited for such an endeavour).
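For reference, here is a small pure-Python sketch of the Melbourne-Wübbena combination (standard textbook formula, not an Orekit class; the GPS L1/L2 frequencies and the simulated noise-free observables are assumptions for the demonstration):

```python
# The Melbourne-Wubbena combination removes geometry, clocks, troposphere
# and first-order ionosphere, leaving the widelane ambiguity N1 - N2
# scaled by the widelane wavelength.
C = 299_792_458.0
F1, F2 = 1575.42e6, 1227.60e6
L_WL = C / (F1 - F2)  # widelane wavelength, ~0.862 m

def melbourne_wubbena(phi1_m, phi2_m, p1_m, p2_m):
    """All inputs in meters (phases already scaled by their wavelength)."""
    wl_phase = (F1 * phi1_m - F2 * phi2_m) / (F1 - F2)  # widelane phase
    nl_code = (F1 * p1_m + F2 * p2_m) / (F1 + F2)       # narrowlane code
    return wl_phase - nl_code  # ~ L_WL * (N1 - N2)

# Noise-free check: simulate phases carrying integer ambiguities N1, N2.
rho, N1, N2 = 21_500_000.0, 12_345_678, 9_876_543
phi1 = rho + (C / F1) * N1
phi2 = rho + (C / F2) * N2
mw = melbourne_wubbena(phi1, phi2, rho, rho)
print(round(mw / L_WL))  # recovers N1 - N2 = 2469135
```

In the noise-free case the combination returns exactly the widelane ambiguity; with real data it is averaged over an arc before rounding.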

Regards
Jordi