SequentialBatchLSEstimator vs BatchLSEstimator

Dears,

I have a question regarding the two classes in the title. After collecting N measurements, if I build a BatchLSEstimator with a LevenbergMarquardtOptimizer as

optimizer = LevenbergMarquardtOptimizer(100.0, 1e-10, 1e-10, 1e-10, Precision.SAFE_MIN)
estimator = BatchLSEstimator(optimizer, propagatorBuilder)

when I call estimator.estimate() I am able to get the estimated SpacecraftState.
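
For completeness, I retrieve the state roughly like this (just a sketch; it assumes an Orekit version in which estimate() returns an array of propagators):

propagators = estimator.estimate()
estimatedState = propagators[0].getInitialState()  # estimated SpacecraftState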

Taking the same N measurements, but building a SequentialBatchLSEstimator with a SequentialGaussNewtonOptimizer as

matrixDecomposer = QRDecomposer(Precision.SAFE_MIN)
# the with... methods return a new optimizer instance, so keep chaining (or reassigning) their results
optimizer = SequentialGaussNewtonOptimizer().withFormNormalEquations(True).withDecomposer(matrixDecomposer)
estimator = SequentialBatchLSEstimator(optimizer, propagatorBuilder)

when I call estimator.estimate() I always get an org.orekit.errors.OrekitException: unable to solve: singular problem. It is surely because the rows of my Jacobian are almost identical, but how is it possible that I get a singular problem only in the sequential case? As far as I know, the matrix decomposer should be the same in both cases, i.e. a QRDecomposer with a singularity threshold equal to Precision.SAFE_MIN. Can you help me understand this point and find a way to run the sequential estimator without getting the reported error?

Warm regards,
Samuele

*update

I tried to build the BatchLSEstimator with a SequentialGaussNewtonOptimizer, as I did with the SequentialBatchLSEstimator, and I still get the same error. This means that the issue lies in the optimizer. What is the reason for this behaviour? Why does the LevenbergMarquardtOptimizer not run into a singular matrix problem?

Best regards,
Samuele

Hi @samuele

Did you set a covariance to the SequentialGaussNewtonOptimizer?

Regards,
Bryan

Hi @bcazabonne,

thank you for your reply!

I set it through optimizer.withAPrioriData(aPrioriState, covariance) and I ran the simulation with either a null covariance object or an identity matrix. Am I missing something?

Best regards,
Samuele

The purpose of the SequentialBatchLSEstimator is to perform an orbit determination with an initial guess and an uncertainty associated with this initial guess, represented by the covariance matrix. The Orekit implementation is based on Section 10.5 of David Vallado's book Fundamentals of Astrodynamics and Applications.
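
Schematically (my own notation here, not copied from the book nor from the Orekit code), the a priori information enters the normal equations as

                      dx_hat = (H^T W H + P0^-1)^-1 (H^T W b + P0^-1 dx0)

where H is the measurement Jacobian, W the measurement weight matrix, b the residuals, P0 the a priori covariance and dx0 the deviation of the a priori state, so the covariance directly balances the initial guess against the new measurements.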

I don’t think that an identity matrix is a good idea for the covariance matrix. Let’s take the example of an initial guess expressed in Cartesian elements: a covariance equal to the identity matrix means 1 m of uncertainty on each position component and 1 m/s on each velocity component, which I don’t think is physically acceptable.

I think it is important to use the SequentialBatchLSEstimator only if you know the covariance matrix associated with the initial guess, or at least a close approximation of it (e.g. from an external source that computed the covariance, or from a previous least squares run). If not, I recommend using a classical batch least squares.
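
As an illustration, the a priori data could be provided like this (only a sketch: the sigma values are placeholders, aPrioriState is assumed to be the RealVector holding your initial guess, and depending on the Python wrapper version the list may need to be wrapped with JArray('double')):

from org.hipparchus.linear import MatrixUtils, QRDecomposer
from org.hipparchus.optim.nonlinear.vector.leastsquares import SequentialGaussNewtonOptimizer
from org.hipparchus.util import Precision

# placeholder 1-sigma uncertainties of the initial guess (Cartesian state):
# 300 m on each position component, 0.1 m/s on each velocity component
sigmas = [300.0, 300.0, 300.0, 0.1, 0.1, 0.1]
covariance = MatrixUtils.createRealDiagonalMatrix([s * s for s in sigmas])

optimizer = SequentialGaussNewtonOptimizer() \
    .withFormNormalEquations(True) \
    .withDecomposer(QRDecomposer(Precision.SAFE_MIN)) \
    .withAPrioriData(aPrioriState, covariance)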

Regards,
Bryan

Dear @bcazabonne,

thank you for your reply. I chose some feasible values for the covariance matrix and I no longer get that error. Of course I knew that an identity matrix wasn’t physically “acceptable”, but I really didn’t think it would cause a singular problem…

However, today I noticed another thing that I am still not able to understand. Starting from the result of the SequentialBatchLSEstimator.estimate() method, I checked the Jacobian with SequentialBatchLSEstimator.getOptimum().getJacobian() and found something I did not expect. The derivatives of my measurements with respect to the satellite state, satellite clock drift and satellite clock offset are of the form

                      dM/dy = [n_1, n_2, n_3, 0, 0, 0, c*t, c ]

where n_1, n_2 and n_3 are constant for all of the measurements, c is the speed of light and t is the time elapsed from the initial date to the measurement date. More precisely, in my case

                      dM/dy = [0.33, -0.84 , 0.43, 0, 0, 0, c*t, c ]

and this is what I give to EstimatedMeasurement.setEstimatedValue() and EstimatedMeasurement.setParameterDerivatives() in my measurement class, which is a simple extension of Orekit’s AbstractMeasurement (it is actually very similar to Range). Considering 10 measurements collected every 4044 s, the method SequentialBatchLSEstimator.getOptimum().getJacobian() returns the following

{0.4755015197,-1.2009672502,0.6102395393,0.0,-0.0,0.0,0.0,418.2372460938}
{0.6885141067,-1.2741452065,1.4968398907,0.0,-0.0,0.0,1691351.423203125,418.2372460938}
{1.4482117258,-1.3651391222,3.0698064736,0.0,-0.0,0.0,3382702.84640625,418.2372460938}
{2.2355194583,-1.4036030628,4.5605973138,0.0,-0.0,0.0,5074054.269609375,418.2372460938}
{2.9323087851,-1.3929640368,5.8700864932,0.0,-0.0,0.0,6765405.6928125,418.2372460938}
{3.5424232447,-1.3384796292,7.0031671399,0.0,-0.0,0.0,8456757.116015624,418.2372460938}
{4.0823846724,-1.2438025861,7.9767102184,0.0,-0.0,0.0,10148108.53921875,418.2372460938}
{4.5663647276,-1.111608235,8.8066231628,0.0,-0.0,0.0,11839459.962421875,418.2372460938}
{5.0050786337,-0.9439763353,9.5061888554,0.0,-0.0,0.0,13530811.385625,418.2372460938}
{5.4065255275,-0.7425892402,10.0862897392,0.0,-0.0,0.0,15222162.808828125,418.2372460938}

and it is evident that the first three components increase as the number of measurements increases. I read in the documentation that this is a weighted Jacobian, but should I really expect something like this? Notice that the “sigma” I associated with the measurements is always the same.

Am I perhaps missing something obvious? Or could it be some sort of normalization performed internally, since the elements of my derivatives array differ by many orders of magnitude? They range from numbers smaller than 1 up to c…

Thank you so much!!!
Samuele

Dear @bcazabonne,

I don’t know if it helps, but I have also noticed that the covariance matrix I get after convergence is not symmetric and has negative values on the diagonal. This does not happen if I estimate the satellite state (PV) only (i.e. neglecting the clock components); in that case the results seem fine.

Thank you so much.

Samuele