Partial derivatives of a range measurement with respect to the initial PV coordinates

Hi @MaximeJ,

thank you very much for your helpful reply. I will now try to explain a bit more what my code is about.

My code is about pulsar navigation, a field which is hard to describe in a few lines: the aim is to exploit the periodic signals coming from pulsars to perform orbit determination. Pulsars are very far away, so their line-of-sight directions can safely be considered constant within the Solar System.
The phase (e.g. that of the signal maximum) of the received signal depends on the position of the satellite along the line-of-sight of the source and refers to a specific point on the satellite orbit. All of the successively received periods (each lasting about 10 ms, say) must be adjusted during the observation by shifting their phase to match the phase of the signal received at that specific point, at which the distance between the satellite and the SSB is estimated. Without such a "folding" (or summing) procedure, the SNR of the received signal is not sufficient. The SSB-SAT distance estimate is therefore made at the end of the signal integration and refers to the beginning of this orbit segment (i.e. the specific point). Thus, the theoretical value of this distance is a function of that specific point only and does not depend on the satellite velocity. Of course, the derivative of the current state y with respect to the initial state y0 does depend on the initial velocity! But as far as I understand this is already handled in AbstractBatchLSModel.fetchEvaluatedMeasurements, which I had the opportunity to study in depth while re-writing the whole OD process.
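In case it helps to make the geometry explicit, this is roughly the measurement model I am describing (just a minimal sketch, not my actual NewRange code; the vectors below are placeholders):

from org.hipparchus.geometry.euclidean.threed import Vector3D

# The pulsar direction is assumed constant in the Solar System, so the "range"
# is simply the projection of the SSB -> satellite position onto that direction.
n_pulsar = Vector3D(0.38, -0.15, 0.91).normalize()   # placeholder unit vector towards the pulsar
r_sat_ssb = Vector3D(1.4959e11, 2.0e7, 5.0e6)        # placeholder SSB -> satellite position (m)

range_along_los = Vector3D.dotProduct(n_pulsar, r_sat_ssb)  # distance along the line-of-sight (m)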

Indeed, a pulsar-based navigation system does not aim at a precision of 10 m. The best result achieved so far is a positioning accuracy of about 5 km, but someone like me is of course looking to improve on that :slight_smile:. Such low-accuracy measurements make setting my parameters a bit harder. I have set estimation_position_scale = 1000 m and estimator_convergence_threshold = 1.0, and when creating the measurements I use sigma = 2000.0 and a weight equal to 1/m, where m is the number of measurements. Do you think these values are reasonable?

Concerning the Gauss-Newton issue, I no longer get a singular problem if I choose a very small QRDecomposer singularity threshold (about 1e-20) and set the formNormalEquations parameter of the GaussNewtonOptimizer to True (thus leading to the use of the pseudo-inverse of the Jacobian). Are those too strong assumptions in your opinion?
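For reference, this is roughly how I build that optimizer (a sketch of the Hipparchus calls I mean, with the values mentioned above):

from org.hipparchus.linear import QRDecomposer
from org.hipparchus.optim.nonlinear.vector.leastsquares import GaussNewtonOptimizer

# very small singularity threshold for the QR decomposition, and
# formNormalEquations=True so the normal equations (J^T J) are formed
optimizer = GaussNewtonOptimizer(QRDecomposer(1e-20), True)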
With those values, I performed an OD covering half of the orbital period (180° anomaly separation), during which I collected 50 measurements, each with a sigma of about 2000 m. The estimated orbit is very similar to the true one, and if I plot the two orbits together they are almost aligned. This "almost", however, amounts to a large number of meters. These are the distances (3D norm) between the real initial position and the initial position of the estimated orbit over the successive least-squares iterations:

iteration #6 = 84 km
iteration #7 = 46 km
iteration #8 = 106768 km (the inclination drastically changed here…)
iteration #11 = 12879 km
iteration #12 = 12979 km
iteration #20 = 13244 km
iteration #26 = 12794 km
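
Each of these distances is just the 3D norm of the difference between the two initial positions, computed along these lines (the vectors below are placeholders, not my real values):

from org.hipparchus.geometry.euclidean.threed import Vector3D

pos_true = Vector3D(7000.0e3, 0.0, 0.0)    # placeholder: true initial position (m)
pos_est = Vector3D(7084.0e3, 1.0e3, 0.0)   # placeholder: estimated initial position (m)

error_m = Vector3D.distance(pos_true, pos_est)  # 3D norm of the difference, in meters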

It seems, in fact, that the estimator is not able to converge… My NewRange class looks correct to me, and it is very simple. Maybe there is something wrong with the estimator parameter settings? Any hints would be very much appreciated at this point :laughing:

Let me know if there is anything else I can report that would help!

Best,
Samuele

Hi @samuele,

Yes you are right.

Maybe you could increase the estimation_position_scale to 5 to 10 km, since that is probably the accuracy you will achieve in the end (according to your "positioning accuracy of about 5 km").
I would set the weights to 1. This parameter is a bit misleading (and not very useful) in Orekit and we actually always set it to 1. The problem is already weighted by the sigmas of the measurements.
That being said, since your measurements all have the same weight, it probably won't change your results.

I think it is ok: the normal equations should be equivalent to a straight Jacobian pseudo-inverse, and a too-high singularity threshold was probably the reason why you were stuck with a "singular problem".

Indeed it does not converge… :thinking: Could you share your code? It is hard to find the issue without being able to actually debug the code.

Best,
Maxime

Hi @MaximeJ,

I have tried using the LevenbergMarquardtOptimizer() instead of the Gauss-Newton one and the results are better (by about an order of magnitude). But the positioning accuracy still does not converge and the positioning error increases quite fast… This is the code I call every l measurements along the orbit: kepler is the propagator associated with my guess orbit, Oj is the array containing the observed measurements, and dates_array contains the AbsoluteDates associated with the measurements. NewRange is the class I reported in this topic some days ago, i.e. my extension of the PythonAbstractMeasurement class.

prop_min_step = 0.001 # s
prop_max_step = 0.03 # s
prop_position_error = 1.0 # m

# Orbit determinator parameters
estimator_position_scale = 10000.0 
range_weight = 1.0  

initialDate = dates_array[0]

# get the satellite state vector at the initial point (propagate once and reuse the PV coordinates)
pv_guess = kepler.propagate(initialDate).getPVCoordinates()

position = pv_guess.getPosition()   # initial position guess (at the initialDate)
velocity = pv_guess.getVelocity()   # initial velocity guess (at the initialDate)
pvCoordinates = PVCoordinates(position, velocity)

reference_orbit = KeplerianOrbit(pvCoordinates, inertialFrame, initialDate, Constants.WGS84_EARTH_MU) # orbit definition line

integrator = DormandPrince853IntegratorBuilder(prop_min_step, prop_max_step, prop_position_error) # args: min_step (s), max_step (s), position_error (m)

propagatorBuilder = NumericalPropagatorBuilder(reference_orbit, integrator, PositionAngle.MEAN, estimator_position_scale)

# Construct the estimator
optimizer = LevenbergMarquardtOptimizer()
estimator = BatchLSEstimator(optimizer, propagatorBuilder)

estimator.setParametersConvergenceThreshold(estimator_convergence_thres)
estimator.setMaxIterations(estimator_max_iterations)
estimator.setMaxEvaluations(estimator_max_evaluations)

observableSatellite = ObservableSatellite(0)

for i in range(0, l):

    sat = ArrayList()
    sat.add(observableSatellite)
    
    psr_range = NewRange(dates_array[i], float(Oj[i]), float(sigma), float(range_weight), sat)
    estimator.addMeasurement(psr_range)


# Estimate the orbit (could take some time depending on l and on the orbit determinator parameters)
print('\n\n-----------------ESTIMATING -------------------')
estimatedPropagatorArray = estimator.estimate()
print('\n\n---------------- FINISHED --------------------\n\n')

estimatedPropagator = estimatedPropagatorArray[0]
estimatedInitialState = estimatedPropagator.getInitialState()
actualOdDate = estimatedInitialState.getDate()

estimatedOrbit_init = estimatedInitialState.getOrbit()

I can see that the propagator returned by the estimate() method always corresponds to a minimization of the residuals between observed and computed measurements, so maybe this is simply the accuracy I can achieve with such noisy measurements… I will try to play a bit with the number of measurements etc. to see what happens. However, let me know if you notice any errors in my code, it would help me a lot.
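In case it is relevant, this is roughly how I inspect the estimated measurements after estimate() (a sketch; I believe BatchLSEstimator.getLastEstimations() is the right entry point, but the casting details of the Python wrapper may differ):

from org.orekit.estimation.measurements import EstimatedMeasurement

# map of observed measurements -> last estimated measurements
last_estimations = estimator.getLastEstimations()

for obj in last_estimations.values().toArray():
    est = EstimatedMeasurement.cast_(obj)
    observed = est.getObservedMeasurement().getObservedValue()[0]
    computed = est.getEstimatedValue()[0]
    print('residual [m]:', observed - computed)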

Thanks as always!
Samuele

Hi @samuele,

Glad to see that you had more luck with Levenberg-Marquardt instead of Gauss-Newton.
This is a bit weird, though, since LM mainly brings a bigger convergence radius with respect to GN.
Since you start off from the desired solution, you shouldn't need the bigger convergence radius that LM provides.

I donā€™t see any obvious error in the code you provided.
Still, if you start from the solution your OD should converge nicely.
You are using simulated measurements, right? Have you tried simulating them with smaller errors?
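For instance, if your simulation adds Gaussian noise to perfect ranges, you could try something along these lines (just an illustrative sketch with placeholder values, not your actual simulation code):

import numpy as np

# placeholder: noise-free ranges coming from the true orbit (m)
perfect_ranges = np.array([1.4959e11, 1.4960e11, 1.4961e11])

noise_sigma = 200.0  # m, much smaller than the 2000 m used so far
noisy_ranges = perfect_ranges + np.random.normal(0.0, noise_sigma, perfect_ranges.shape)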

Maxime

Hi @MaximeJ ,

I tried using simulated measurements with smaller errors, and the large errors were definitely the cause of this unwanted behaviour… so, thank you so much! This would have been very difficult without your help. I consider the topic's question to be answered.

I have one last question for you, but I don't know if it is better to ask it here or to create another topic. I am putting it here before selecting the "solution" in your replies because I don't know if it is still possible to reply on a closed topic. If you think it is better, I will create a new topic :slight_smile:

My question is: I would like to add the satellite clock's bias and drift to my problem, i.e. estimate these two parameters along with the 6 of the state. Mathematically speaking it is quite easy, since these parameters simply redefine my range measurement by adding a (bias + drift*t)*c term to it. But what is the easiest way to add them to the NewRange class I reported in this topic (21 days ago)? I imagine I will have to manage those parameters through the ParameterDriver class, but I have not studied this class much yet, so maybe there is a more convenient way to add the clock parameters to my class…

All the best,
Samuele

Hi @samuele,

You're very welcome. I should have started with this simple solution!

I think it is possible (never actually tried it though).

If you look into the Orekit Range class constructor you will see that the clock bias (called clock offset in Orekit) is added from the ObservableSatellite.
The clock drift works the same way.
So you will have to do the same in your NewRange class.
Then in the theoreticalEvaluation method you will have to set the derivatives of your clock bias and drift manually.
In Orekit Range.theoreticalEvaluation it is done within this loop.
There is no explicit computation of the derivatives since they are computed by automatic differentiation (here's an article about automatic differentiation in Orekit if you're interested).
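Very roughly, and without having tested it, the constructor part of your NewRange class could look something like this (assuming the Python wrapper exposes the protected addParameterDriver method through PythonAbstractMeasurement, and that getting the ObservableSatellite back from the list works directly; you may need a cast_ there):

from org.orekit.estimation.measurements import PythonAbstractMeasurement

class NewRange(PythonAbstractMeasurement):

    def __init__(self, date, observed, sigma, weight, satellites):
        super(NewRange, self).__init__(date, observed, sigma, weight, satellites)
        sat = satellites.get(0)
        # register the clock offset (bias) and clock drift drivers of the satellite,
        # mirroring what the Orekit Range constructor does
        self.addParameterDriver(sat.getClockOffsetDriver())
        self.addParameterDriver(sat.getClockDriftDriver())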

All the best,
Maxime

Hi @MaximeJ ,

OK, got it. I will follow what is done for the clock offset in the Range class constructor!

Best,
Samuele