I am trying to understand the difference between the three propagation modes available. I believe I understand slave mode, as it is the simplest of the three, but I am not sure I understand the other two, especially regarding their impact on the results obtained. Here are some of my questions:
If, for example, I ran the same propagator three times with the same conditions, once in each of the modes, would I obtain exactly the same results?
Does it make any sense to use the master mode for an analytical propagator?
When I run the SGP4 propagator in slave mode, performance is much slower than when I run it in ephemeris mode, and I do not understand why; in theory, it should be the second that takes longer, right?
How does the ephemeris generation mode work? It says that it calculates and stores all intermediate results, but what is "all"? Between any two points in time there is an infinite number of points in between, so which step size is it using?
Hi Iliass, here are some elements to help understand the behaviour, which is often not intuitive.
For analytical propagators, yes, you should get the same results. For numerical propagators, no, you should see differences, because you force steps to end at times that may differ from the steps the underlying integrator would choose on its own.
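To make that difference concrete, here is a toy fixed-step RK4 integrator in plain Java (unrelated to any Orekit class) applied to dy/dt = y over [0, 1]: forcing a step to end at a different intermediate point changes the truncation errors slightly, so both runs are accurate but not bit-for-bit identical.

```java
public class ForcedStepsDemo {

    // one classical RK4 step for the ODE dy/dt = y
    static double rk4Step(double y, double h) {
        double k1 = y;
        double k2 = y + 0.5 * h * k1;
        double k3 = y + 0.5 * h * k2;
        double k4 = y + h * k3;
        return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4);
    }

    // integrate y(0) = 1 to t = 1 with the given sequence of step sizes
    static double integrate(double[] steps) {
        double y = 1.0;
        for (double h : steps) {
            y = rk4Step(y, h);
        }
        return y;
    }

    public static void main(String[] args) {
        // steps the integrator would take on its own
        double free   = integrate(new double[] {0.3, 0.3, 0.3, 0.1});
        // same interval, but a step is forced to end at t = 0.5
        double forced = integrate(new double[] {0.3, 0.2, 0.3, 0.2});
        System.out.println("free   = " + free);
        System.out.println("forced = " + forced);
        // both are close to e = 2.71828..., but not exactly equal
    }
}
```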
Yes, mainly for the sake of maintainability. Letting the propagator handle time evolution, once the user is accustomed to it, leads to simpler code, as the code in the step handler concentrates on what to do at one point in time. This is the same reason why, say, event handling is separated from the main loop into event handlers: you have several independent small pieces of code instead of one big loop that manages everything at once. It also allows switching from analytical to numerical propagators more easily if needed later.
No, because ephemeris generation is handled specially with all analytical propagators. In fact, as these propagators don't have a notion of "current" time, they only know the time of the last reset (i.e. only the initial time if you never reset the propagator, which is indeed the case for SGP4, which doesn't support reset) and can propagate to any time using only this initial time. Ephemeris generation uses this property. It does not really store any intermediate state, it just keeps a reference to the underlying analytical propagator. Basically, ephemeris generation in an analytical propagator is a no-op, but it is useful to have it for consistency with the other (semi-analytical and numerical) propagators. The computation time that is spent in slave mode for SGP4, and that seems to be saved in ephemeris mode, is in fact only spent later: when you actually use the ephemeris.
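To illustrate the no-op nature, here is a minimal plain-Java sketch (hypothetical names, not the actual Orekit classes) of what an ephemeris wrapping an analytical propagator amounts to: generation just stores a reference, so all the work happens when the ephemeris is queried.

```java
public class AnalyticalEphemerisSketch {

    // stand-in for an analytical propagator: a closed-form map from time to state
    interface AnalyticalPropagator {
        double propagate(double t);
    }

    // the "ephemeris" just keeps a reference; generating it costs nothing
    static class Ephemeris {
        private final AnalyticalPropagator underlying;
        Ephemeris(AnalyticalPropagator propagator) {
            this.underlying = propagator;
        }
        double getState(double t) {
            // all the computation happens here, when the ephemeris is used
            return underlying.propagate(t);
        }
    }

    // build an ephemeris around a toy closed-form model and query it
    static double demo(double t) {
        AnalyticalPropagator sgp4Like = time -> Math.cos(time); // toy model
        Ephemeris ephemeris = new Ephemeris(sgp4Like);          // instantaneous "generation"
        return ephemeris.getState(t);                           // work only happens now
    }

    public static void main(String[] args) {
        System.out.println(demo(1.0));
    }
}
```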
For analytical propagators, as I wrote before, the ephemeris just stores a reference to the analytical propagator. For semi-analytical and numerical propagators, ephemeris generation registers a specific step handler with the propagator. When the propagator calls this step handler during the run, the handler gets a step interpolator valid throughout the last step, but instead of using it to perform computations on the current step, it stores it in a list for later use. At the end of propagation, you then have a collection of interpolators available. When you ask for the state at a specific time, the ephemeris looks in its list to see which interpolator is valid for this time, then uses this interpolator to interpolate the state at the specific time you asked for. This means that if you use a propagator in master mode with a step handler, or if you use it in ephemeris mode and then use the ephemeris with the same step handler, you should get the same results, because in fact your step handler will see the exact same interpolator in both cases.
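This mechanism can be sketched in plain Java (assumed names, not the Orekit API): each interpolator is valid over one step (a simple linear one here), the generation phase stores them as steps are produced, and lookups select the interpolator covering the requested time.

```java
import java.util.ArrayList;
import java.util.List;

public class StoredInterpolatorsSketch {

    // an interpolator valid over [tStart, tEnd]; here simply linear in a scalar state
    static class StepInterpolator {
        final double tStart, tEnd, yStart, yEnd;
        StepInterpolator(double tStart, double tEnd, double yStart, double yEnd) {
            this.tStart = tStart; this.tEnd = tEnd; this.yStart = yStart; this.yEnd = yEnd;
        }
        double interpolate(double t) {
            double theta = (t - tStart) / (tEnd - tStart);
            return yStart + theta * (yEnd - yStart);
        }
    }

    // the ephemeris-generation "step handler" just stores each step's interpolator
    static class GeneratedEphemeris {
        private final List<StepInterpolator> steps = new ArrayList<>();
        void handleStep(StepInterpolator interpolator) {
            steps.add(interpolator);
        }
        double getState(double t) {
            for (StepInterpolator s : steps) {
                if (t >= s.tStart && t <= s.tEnd) {
                    return s.interpolate(t);  // reuse the stored interpolator
                }
            }
            throw new IllegalArgumentException("time outside ephemeris range: " + t);
        }
    }

    // simulate a run producing two steps, [0, 1] and [1, 2], for the state y = 2 t
    static GeneratedEphemeris run() {
        GeneratedEphemeris ephemeris = new GeneratedEphemeris();
        ephemeris.handleStep(new StepInterpolator(0.0, 1.0, 0.0, 2.0));
        ephemeris.handleStep(new StepInterpolator(1.0, 2.0, 2.0, 4.0));
        return ephemeris;
    }

    public static void main(String[] args) {
        System.out.println(run().getState(1.5)); // interpolated inside the second step
    }
}
```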
Hello Luc, and thank you very much for your answer, I understand the differences much better now. However, I am now wondering how event detection works for analytical propagators: is a loop where an "artificial" time step is introduced necessary, or is propagating directly from the initial state to a final date with no intermediate states enough? If that is the case, how does the event detector calculate anything?
Please note that since the beginning of this thread, two important changes occurred in the current development version, which will be released soon as Orekit 11.0.
The first change is that we have removed the notion of propagation modes. See this discussion for details. The change mainly means that you can have any number of step handlers, including 0, set up in the propagator, and that you can mix at will fixed-step handlers with different step sizes and variable step size handlers. No step handlers at all is equivalent to what was known as slave mode.
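A rough plain-Java sketch (hypothetical names, not the Orekit 11 classes) of what "any number of step handlers" means: the propagator keeps a list of handlers and feeds every raw step to all of them, while a fixed-step handler is adapted so it samples the state at its own rate inside each step.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleUnaryOperator;

public class MultiplexerSketch {

    // each handler sees a step [tStart, tEnd] plus a way to evaluate the state in it
    interface StepHandler {
        void handleStep(double tStart, double tEnd, DoubleUnaryOperator state);
    }

    // adapts a fixed-rate consumer: samples the state every h seconds
    static StepHandler fixedStep(double h, List<Double> out) {
        return new StepHandler() {
            private double next = 0.0;
            public void handleStep(double tStart, double tEnd, DoubleUnaryOperator state) {
                while (next <= tEnd) {
                    out.add(state.applyAsDouble(next));
                    next += h;
                }
            }
        };
    }

    // toy "propagation" with variable steps and state y(t) = t^2,
    // feeding the same raw steps to every registered handler
    static List<Double> run() {
        List<StepHandler> handlers = new ArrayList<>();
        List<Double> samples = new ArrayList<>();
        handlers.add(fixedStep(0.5, samples));  // fixed 0.5 s sampling rate
        handlers.add((t0, t1, s) -> { /* a variable-step handler sees the raw steps */ });

        double t = 0.0;
        for (double end : new double[] {0.7, 1.6, 3.0}) {
            for (StepHandler h : handlers) {
                h.handleStep(t, end, x -> x * x);
            }
            t = end;
        }
        return samples;
    }

    public static void main(String[] args) {
        System.out.println(run()); // samples taken at t = 0.0, 0.5, ..., 3.0
    }
}
```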
The second change is that the scheduling of step and event handlers has been fixed (it had bothered us for years), so now if an event occurs at the end of a step, the step handler is called first and the event handler is called afterwards.
Yes, this is the maxCheck setting in the EventDetector. As event detectors are generally implemented by extending the AbstractDetector class, the classical way to set up a custom max check is as follows:
SomeDetector detector = new SomeDetector(...).
                            withMaxCheck(60.0).
                            withThreshold(1.0e-6);
Note that the withXxx methods rely on the fluent API, i.e. you must use the object returned by the withXxx() method and chain them as shown.
The principle of the maxCheck setting is that regardless of how the propagator runs, the g function that defines the event will be called at least once every maxCheck seconds. If the propagator is a numerical propagator that already has an internal step size, the g function may be called more often if this step size is small. If on the other hand the propagator is an analytical propagator without any notion of step size or if the propagator is a numerical propagator with a large step size, then this setting will trigger intermediate calls to the g function. If the function changes sign between two calls, then it means an event occurs in between and a root finding algorithm is run to locate it accurately, down to the precision set in the threshold setting.
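This sampling-plus-root-finding scheme can be sketched in plain Java (toy code, not the Orekit implementation, which uses a more sophisticated root finder): the g function is called at least every maxCheck seconds, and when a sign change is bracketed, a bisection refines the event time down to the threshold.

```java
import java.util.function.DoubleUnaryOperator;

public class EventBracketingSketch {

    // locate the first sign change of g on [t0, t1], or NaN if none is seen
    static double findEvent(DoubleUnaryOperator g, double t0, double t1,
                            double maxCheck, double threshold) {
        double ta = t0;
        double ga = g.applyAsDouble(ta);
        while (ta < t1) {
            double tb = Math.min(ta + maxCheck, t1); // call g at least every maxCheck
            double gb = g.applyAsDouble(tb);
            if (ga == 0.0) {
                return ta;
            }
            if (ga * gb < 0.0) {
                // sign change bracketed: bisection down to the threshold
                while (tb - ta > threshold) {
                    double tm = 0.5 * (ta + tb);
                    double gm = g.applyAsDouble(tm);
                    if (ga * gm <= 0.0) {
                        tb = tm;
                    } else {
                        ta = tm;
                        ga = gm;
                    }
                }
                return 0.5 * (ta + tb);
            }
            ta = tb;
            ga = gb;
        }
        return Double.NaN; // no event seen (or a pair of events closer than maxCheck)
    }

    public static void main(String[] args) {
        // g(t) = sin(t) changes sign at t = pi, inside the sampled interval
        double event = findEvent(Math::sin, 1.0, 6.0, 1.0, 1.0e-9);
        System.out.println(event);
    }
}
```

Note the limitation the sketch makes visible: if g crosses zero and comes back within one maxCheck interval, no sign change is sampled and the pair of events is missed, which is exactly the trade-off discussed below.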
The maxCheck value is chosen according to the expected separation between events in a pair. It is not perfect, of course, and no practical implementation can be perfect: if some specific event detector triggers a pair of events with a femtosecond separation, you will probably miss it (but this is probably not a serious problem, since events that close together generally cannot be used for anything practical). As an example, when you set up an ElevationDetector for computing the ground control schedule for telemetry and telecommand, you don't care about cases where the satellite rises above the horizon and sets back below it less than 60 seconds later: you would not have time to do anything and the signal would probably be poor. So you just use 60 s for maxCheck and don't care if you miss such close pairs of events.
There are default settings for maxCheck for most events, but they are only arbitrary defaults; you should use your own settings depending on your use of events. Using a very small maxCheck (say one second or below) is often a bad idea, as it just wastes time.
Note also that maxCheck is unrelated to the threshold. The former is used just to bracket roots and trigger the root search, whereas the latter is used for the convergence check once the root search has been triggered. You can have maxCheck set to 60 s or even one hour (if you are only interested in widely separated events) and still have a threshold set to 1 µs if you want.