Thank you for your reply. I chose some feasible values for the covariance matrix and I no longer get that error. Of course I knew that an identity matrix wasn't physically "acceptable", but I really didn't expect it to cause a singularity…
However, today I noticed another thing that I am still not able to understand. Starting from the result of the SequentialBatchLSEstimator.estimate() method, I tried to check the Jacobian with SequentialBatchLSEstimator.getOptimum().getJacobian() and I found something I didn't expect. The derivatives of my measurements with respect to the satellite state, satellite clock drift and satellite clock offset have the form
dM/dy = [n_1, n_2, n_3, 0, 0, 0, c*t, c ]
where n_1, n_2 and n_3 are constant for all of the measurements, c is the speed of light and t is the time elapsed from the initial epoch to the current measurement date. More precisely, in my case
dM/dy = [0.33, -0.84, 0.43, 0, 0, 0, c*t, c]
and this is what I pass to EstimatedMeasurement.setEstimatedValue() and EstimatedMeasurement.setParameterDerivatives() in my measurement class, which is a simple extension of Orekit's AbstractMeasurement (it is actually very similar to Range). Considering 10 measurements collected every 4044 s, the method SequentialBatchLSEstimator.getOptimum().getJacobian() returns the following
and it is evident that the first three components grow as the number of measurements increases. I read in the documentation that this is a weighted Jacobian, but should I really expect something like this? Note that the "sigma" I associated with the measurements is always the same.
Am I maybe missing something obvious? Or could it be some kind of normalization performed internally, since the elements of my derivatives array have very different magnitudes? I have both c and numbers lower than 1…
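To make my reasoning concrete, here is a small self-contained sketch (my own code, not Orekit internals) that builds the raw Jacobian rows from the formula above and applies the plain 1/sigma row weighting I would naively expect from a weighted least-squares problem. With a constant sigma, this kind of weighting cannot make the first three columns change from row to row, which is why I suspect an additional per-parameter normalization:

```java
// Sketch of the weighting I expected: each raw row
// [n1, n2, n3, 0, 0, 0, c*t, c] divided by a constant sigma.
public class JacobianSketch {

    static final double C = 299_792_458.0; // speed of light, m/s

    // Build one weighted Jacobian row for a measurement at elapsed time t.
    static double[] weightedRow(double t, double sigma) {
        double[] raw = {0.33, -0.84, 0.43, 0.0, 0.0, 0.0, C * t, C};
        double[] row = new double[raw.length];
        for (int j = 0; j < raw.length; j++) {
            row[j] = raw[j] / sigma; // pure row weighting, no column scaling
        }
        return row;
    }

    public static void main(String[] args) {
        double sigma = 1.0; // same sigma for every measurement
        double dt = 4044.0; // seconds between measurements
        for (int i = 0; i < 10; i++) {
            double[] row = weightedRow(i * dt, sigma);
            // the first three entries stay identical for every row
            System.out.printf("row %d: [%.2f, %.2f, %.2f, ..., %.6e, %.6e]%n",
                              i, row[0], row[1], row[2], row[6], row[7]);
        }
    }
}
```

Running this prints rows whose first three entries never change, unlike what getJacobian() gives me, and it also shows the huge magnitude gap between the c-scaled columns and the ones below 1.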
Thank you so much!!!