I am looking for the computation logic of the parameter Jacobians during a numerical propagation scenario with an SRP force model. My goal is to understand how the SRP reflection coefficient (e.g. Orekit's REFLECTION_COEFFICIENT) selected as a parameter driver affects the computation of the parameter Jacobians throughout the numerical propagation.
The overall process leads a bit down a rabbit hole into the Hipparchus integrator internals.
Is there by any chance a reference paper that the computational model for the parameter Jacobians is based on?
The two basic engines involved in this computation are:
variational equations (see for example “Solving Ordinary Differential Equations I: Nonstiff Problems” by Hairer, Nørsett and Wanner)
algorithmic differentiation (see for example “Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation” by Griewank and Walther)
I can try to elaborate a bit. Hopefully I won’t say anything that is not true.
Basically, when you select parameter drivers or simply the STM option, the propagator adds so-called secondary variables to the integration, alongside the primary ones (mass and PV-like coordinates). The secondary variables play no role in the adaptive step-size control, if applicable. They satisfy differential equations known as the variational equations. The corresponding derivatives are computed using automatic differentiation in Cartesian coordinates (with some tricks to optimise performance), and converted to other coordinate types if necessary. That’s the acceleration method in ForceModel (the one with Field). There is then a whole system in place to rearrange these variables into matrices when the user asks for them. And because Hipparchus integrators all have dense output, there is also a layer so that everything can be interpolated if required. So yeah, overall the integration system split between the two libraries is not trivial, but it has many features. I’ll actually be talking a bit about that in the Orekit talk next week.
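For what it’s worth, here is a minimal sketch of the user-facing side of that machinery, assuming an already-configured NumericalPropagator with a SolarRadiationPressure force model added to it (method names and the null defaults are as I recall them from recent Orekit versions, so double-check against your version’s javadoc):

```java
import org.hipparchus.linear.RealMatrix;
import org.orekit.forces.radiation.RadiationSensitive;
import org.orekit.forces.radiation.SolarRadiationPressure;
import org.orekit.propagation.MatricesHarvester;
import org.orekit.propagation.SpacecraftState;
import org.orekit.propagation.numerical.NumericalPropagator;
import org.orekit.time.AbsoluteDate;

public class SrpJacobianSketch {

    /** Select Cr as a parameter driver and harvest the Jacobians after propagation. */
    public static void run(NumericalPropagator propagator, SolarRadiationPressure srp,
                           AbsoluteDate target) {
        // mark the reflection coefficient as estimated, so a Jacobian column is integrated
        srp.getParameterDriver(RadiationSensitive.REFLECTION_COEFFICIENT).setSelected(true);
        // register the secondary (variational) equations; nulls mean default initial
        // values (identity STM, zero parameter columns)
        MatricesHarvester harvester = propagator.setupMatricesComputation("stm", null, null);
        SpacecraftState state = propagator.propagate(target);
        RealMatrix stm  = harvester.getStateTransitionMatrix(state); // 6x6 dX/dX0
        RealMatrix dXdP = harvester.getParametersJacobian(state);    // 6x1 column for Cr
        System.out.println("STM:\n" + stm + "\ndX/dCr:\n" + dXdP);
    }
}
```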
Note that if you want, you can actually perform automatic differentiation on the propagation itself, using FieldNumericalPropagator. This is usually less computationally efficient because literally all calculations are done on non-primitive types.
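To illustrate what “non-primitive types” means here, below is a toy example (not Orekit-specific, illustrative values only) of the Hipparchus Gradient type that Field propagation builds on; every arithmetic operation transparently carries the partial derivatives along:

```java
import org.hipparchus.analysis.differentiation.Gradient;

public class GradientDemo {
    public static void main(String[] args) {
        // one free parameter: a reflection coefficient Cr = 1.8 (illustrative value)
        Gradient cr = Gradient.variable(1, 0, 1.8);
        // toy SRP-like acceleration magnitude a = k * Cr, with k an arbitrary constant
        double k = 4.56e-6;
        Gradient a = cr.multiply(k);
        System.out.println("a      = " + a.getValue());       // k * 1.8
        System.out.println("da/dCr = " + a.getGradient()[0]); // k
    }
}
```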
Thank you for the detailed feedback and sorry for this late response. @Serrof, is there a video recording available of the mentioned Orekit talk?
If I understand correctly, the solutions of the variational equations are products of the secondary variables and the Cartesian coordinates. Also, the acceleration method of the SolarRadiationPressure class would output the same derivatives as those expected in the extracted parameter Jacobians.
Currently, Orekit offers the possibility to compute the STM on the fly by propagating the variational equations.
However, when there are dynamics discontinuities whose trigger does not depend only on time, one cannot just do that. For example, a thrust arc starting (with a non-zero value) at apogee.
If you were doing finite differences, all the event dates would be different, which is a problem numerically speaking because with variable steps you end up with completely different integration histories, creating noise.
The paper I’ve seen treating this the most rigorously is the following:
Martinon, Pierre and Gergaud, Joseph. Using switching detection and variational equations for the shooting method. Optimal Control Applications and Methods, 2007, vol. 28, no. 2, pp. 95-116.
It’s in the context of indirect shooting (the switches are linked to the adjoint variables and the cost function). And I’ve tried to implement what I believe is equivalent with 13.0 (with the most recent correction in this issue).
To go further, I’d like to add the proper treatment of the STM when internal detectors (e.g. from forces) return Action.RESET_DERIVATIVES and do not depend only on time. I’ve just started looking at NumericalPropagator and it might require API-breaking changes (so that would wait for a major release), I cannot tell yet.
It’s the dependence of the switch w.r.t. the parameters that enters the STM. The case RESET_STATE is harder to handle and I’m not sure yet how to deal with it generically.
Here is the reasoning in the case I was talking about.
Let us consider that a one-dimensional (without loss of generality) state vector x is propagated through f_a:(t,x)\mapsto f_a(t,x) first, until a switch occurs at t^* and the dynamics is then given by f_b:(t,x)\mapsto f_b(t,x). Let us assume that the switch is defined by the root of g:(t,x)\mapsto g(t,x).
Let us forget about the STM for a moment.
Before the switch, one has x(t)=x_0+\int_{0}^{t} f_a(\tau, x(\tau))d\tau
Let us now introduce an infinitely small variation of the switch time \delta t. It follows that immediately before the (shifted) switch, the state equals: x(t^*) + \delta t\, f_a(t^*, x(t^*)) + o(\delta t)
To go through the switch, one must also consider that deviation with the new dynamics, so: x(t^*) + \delta t\, f_a(t^*, x(t^*)) - \delta t\, f_b(t^*, x(t^*) + f_a \delta t) + o(\delta t), which at first order simplifies to x(t^*) + \delta t\,(f_a - f_b)(t^*, x(t^*)) + o(\delta t).
Now, let us go back to the problem of interest, where we assume we have the STM, i.e. \partial x / \partial x_0 at t^{*-} (immediately before the switch, obtained for instance via integration of the variational equations of f_a). We can evaluate x(t^*+\delta t) (with the previous equation) and then g(t^*+\delta t, x(t^*+\delta t)) as a Taylor expansion in \delta t and \delta x_0 with automatic differentiation. Next, we swap time as an independent variable with the event function (it’s called map inversion by Berz for arbitrary order, but in our case it’s just inverting the Jacobian matrix). And we plug this new expansion into the one of x(t^*+\delta t) to obtain one in \delta g and \delta x_0, which boils down to multiplying matrices. Finally, by enforcing the variation in the event function to vanish, because we stay on the manifold g=0 (and that is just discarding a column in the previous product), we get the result \partial x / \partial x_0 at t^{*+}.
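If I did the algebra right, at first order all of the above collapses to the classical jump condition (this is my summary, so worth double-checking against the Martinon and Gergaud paper):

\partial x / \partial x_0 \big|_{t^{*+}} = \left(I - \frac{(f_a - f_b)\,\partial g/\partial x}{\partial g/\partial t + (\partial g/\partial x)\, f_a}\right) \partial x / \partial x_0 \big|_{t^{*-}}

with all quantities evaluated at (t^*, x(t^*)).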
By the way, if f_a=f_b (no switch in dynamics) and/or if g is independent of x, all these algebraic manipulations cancel out, which makes sense.
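For concreteness, here is a small standalone sketch of that first-order update using the Hipparchus linear algebra classes. It is a hypothetical helper (StmSwitchUpdate is not Orekit API); in the n-dimensional case \partial g/\partial x becomes a row vector and the correction an outer product:

```java
import org.hipparchus.linear.MatrixUtils;
import org.hipparchus.linear.RealMatrix;
import org.hipparchus.linear.RealVector;

public class StmSwitchUpdate {

    /**
     * First-order STM jump at a state-dependent switch (hypothetical helper).
     *
     * @param stmMinus STM dx/dx0 just before the switch, at t*-
     * @param fa       dynamics f_a(t*, x*) just before the switch
     * @param fb       dynamics f_b(t*, x*) just after the switch
     * @param dgDx     gradient of the event function w.r.t. the state at (t*, x*)
     * @param dgDt     partial derivative of the event function w.r.t. time at (t*, x*)
     * @return STM dx/dx0 just after the switch, at t*+
     */
    public static RealMatrix updateStm(RealMatrix stmMinus, RealVector fa, RealVector fb,
                                       RealVector dgDx, double dgDt) {
        // total time derivative of g along the incoming flow: dg/dt + (dg/dx) . f_a
        double gDot = dgDt + dgDx.dotProduct(fa);
        // outer-product correction (f_a - f_b) (dg/dx)^T / gDot; it vanishes when
        // f_a = f_b or when g does not depend on x, matching the sanity check above
        RealMatrix correction = fa.subtract(fb).outerProduct(dgDx)
                                  .scalarMultiply(1.0 / gDot);
        // Phi(t*+) = (I - correction) Phi(t*-)
        int n = stmMinus.getRowDimension();
        return MatrixUtils.createRealIdentityMatrix(n).subtract(correction).multiply(stmMinus);
    }
}
```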