I am currently doing orbit determination for GNSS satellites. I use a BatchLSEstimator with an arc of observables containing past measurements. To reduce my error I gave more weight to the most recent measurements.
I did this by giving the measurement at the current date a weight W and then applying a deweighting factor depending on the age of the measurement: exp(-duration/Constant), i.e. giving the measurement a weight of W*exp(-duration/Constant).
It actually worked and reduced my errors. However, sometimes the algorithm crashes and gives a singular matrix error. I think this is because the differences between the weights can be too high… Changing the values of W or Constant sometimes makes the code run. Do you know if it comes from there? Is there a way to prevent that? Or at least to detect beforehand whether the algorithm will crash, and change the weights in that case?
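To see why long arcs can break this scheme, here is a minimal sketch of the weighting rule described above (plain arithmetic, not Orekit API; W, tau and the one-day arc length are assumed values): the ratio between the newest and oldest weight grows as exp(arc/tau), so it explodes quickly once the arc is much longer than the time constant.

```java
public class DeweightDemo {

    /** Exponential deweighting: weight = W * exp(-ageSeconds / tau). */
    public static double weight(double W, double ageSeconds, double tau) {
        return W * Math.exp(-ageSeconds / tau);
    }

    public static void main(String[] args) {
        double W   = 1.0;     // weight of the newest measurement (assumed)
        double tau = 3600.0;  // time constant in seconds (assumed)
        double arc = 86400.0; // one-day arc (assumed)

        double newest = weight(W, 0.0, tau);
        double oldest = weight(W, arc, tau);

        // The dynamic range of the weights grows as exp(arc / tau):
        // here exp(24) ~ 2.6e10, which is enough to make the normal
        // equations numerically rank-deficient in double precision.
        System.out.printf("oldest weight = %.3e, ratio = %.3e%n",
                          oldest, newest / oldest);
    }
}
```

Printing this ratio before running the estimator is a cheap way to anticipate whether the problem will be badly conditioned.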
It probably does come from there; my guess is that your matrix gets ill-conditioned when your deweighting factor gets too small.
We generally set all the weights to 1, since Orekit already naturally weights the measurements by 1/σ.
Your method for reducing the impact of old measurements looks elegant though.
If you are using a GaussNewtonOptimizer, have you tried lowering the singularityThreshold of your MatrixDecomposer?
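Another option, if you want to keep the exponential deweighting, is to put a floor on the deweighting factor so the ratio between the largest and smallest weight stays bounded. This is just a sketch of that idea (plain arithmetic, not Orekit API; the minFactor value is an assumption you would tune):

```java
public class ClampedWeightDemo {

    /**
     * Exponential deweighting with a floor: the factor exp(-age / tau)
     * is clamped to minFactor, so the newest-to-oldest weight ratio
     * can never exceed 1 / minFactor, regardless of arc length.
     */
    public static double clampedWeight(double W, double ageSeconds,
                                       double tau, double minFactor) {
        return W * Math.max(Math.exp(-ageSeconds / tau), minFactor);
    }

    public static void main(String[] args) {
        double W   = 1.0;     // weight of the newest measurement (assumed)
        double tau = 3600.0;  // time constant in seconds (assumed)
        double floor = 1e-4;  // bounds the weight ratio to 1e4 (assumed)

        // A day-old measurement would get exp(-24) ~ 3.8e-11 without the
        // clamp; with it, the weight is held at the 1e-4 floor instead.
        System.out.println(clampedWeight(W, 86400.0, tau, floor));
    }
}
```

With a floor like this, old measurements still count much less than fresh ones, but the dynamic range of the weights stays within what double-precision decompositions handle comfortably.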
If you want some of us to try and help you further, I think we will need a runnable example with a failing test case.