FieldOfViewEventDetector Event time accuracy

#1

Hello,

I have been using the FieldOfViewEventDetector along with a NumericalPropagator for some time now, but I am wondering if there is a way to reduce the accuracy of the access start/stop times.
The performance of the simulation seems to take a hit there with a lot of targets, so I am trying to find ways to speed things up by reducing that precision. Currently it looks like I get timestamps down to the millisecond, but could I potentially get a precision down to the second instead, or even coarser?

Thanks a lot for your guidance!
Regards.

#2

Yes, you can change the convergence threshold as follows:

  FieldOfViewDetector fovDetector = new FieldOfViewDetector(...).
                                    withThreshold(1.0);

Note that all the withXxx() methods return a new object, they do not modify the instance,
so you should really use the last instance returned. The API follows the fluent design, so
if you want to change several settings (threshold, max check, max iter, …) you use the following
pattern:

  SomeDetector detector = new SomeDetector(...).
                          withThreshold(...).
                          withMaxCheck(...).
                          withMaxIter(...).
                          withHandler(...);

Beware not to confuse the max check interval with the tolerance. The max check interval does not affect accuracy, but may degrade performance a lot if set too small. It is used to ensure you do not miss close pairs of events (entry followed by exit of the field of view), by forcing the propagator to check at least once every maxCheck seconds. Typical settings are about 60s for maxCheck (you are generally not interested in very short visibility ranges and accept missing two events separated by less than one minute) and 1ms for event location accuracy. In any case, maxCheck should be larger than the tolerance, otherwise it does not really make sense.
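
As an illustration, here is a sketch of that typical configuration, assuming the detector is built from some target PVCoordinatesProvider and FieldOfView and then registered on the propagator (targetProvider, fov and propagator are just placeholders for whatever your setup uses):

  // sketch only: typical 60s max check interval and 1ms event location threshold,
  // keeping the last instance returned by the fluent withXxx() calls
  FieldOfViewDetector fovDetector = new FieldOfViewDetector(targetProvider, fov).
                                    withMaxCheck(60.0).
                                    withThreshold(1.0e-3);
  propagator.addEventDetector(fovDetector);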

#3

Thank you for your answer.

Yes, I was already using maxCheck set to 5s (my access durations are roughly around 9-10s, with targets separated by roughly the same duration).

I understand from the second part of your comment that basically there is no point for the tolerance to be higher than the max check, which in my case would mean that the least accurate I could be is 5s?

Also, do you think that exploiting the GPU for things like event detection would be an option for the future? I am not sure how easy/hard it is to call the GPU from Java today.

#4

I understand from the second part of your comment that basically there is no point for the tolerance to be higher than the max check, which in my case would mean that the least accurate I could be is 5s?

No, what I failed to explain is that since the event switching function is called at least once every maxCheck, it would not really make sense to ask for a tolerance of maxCheck or more. This is because we first use maxCheck mainly to separate events occurring in pairs, and when we identify that the sign of the switching function has changed, we start a root-solving algorithm to locate the event more precisely. The root-solving algorithm is therefore started with a search interval delimited by the two bracketing points already computed, which by construction are at most maxCheck apart from each other. If the tolerance is already larger than the length of the search interval, it is a waste of time.
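
Just to illustrate the point with a toy example (this is not the actual solver used internally, which relies on a more sophisticated bracketing algorithm; g(t) stands for the detector switching function and t0, maxCheck, tolerance for the corresponding settings):

  // toy bisection between two samples that bracket a sign change;
  // by construction the interval is at most maxCheck seconds wide
  double lo = t0;             // e.g. g(lo) < 0, target outside the field of view
  double hi = t0 + maxCheck;  // e.g. g(hi) > 0, target inside the field of view
  while (hi - lo > tolerance) {
      final double mid = 0.5 * (lo + hi);
      if (g(lo) * g(mid) <= 0.0) {
          hi = mid; // sign change is in the lower half
      } else {
          lo = mid; // sign change is in the upper half
      }
  }
  double eventTime = 0.5 * (lo + hi);
  // if tolerance >= maxCheck, the loop body never runs at all and eventTime
  // is no better than the raw maxCheck sampling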

In your case, setting maxCheck at 5s and the tolerance at 1s seems fine. The root-solving algorithm will do something useful and locate events in just a few function evaluations.
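
For instance (again just a sketch, assuming fovDetector is the FieldOfViewDetector from your setup), that would be something like:

  // re-create the detector with a 5s max check and a 1s threshold,
  // remembering to keep the new instance returned by the fluent calls
  fovDetector = fovDetector.withMaxCheck(5.0).withThreshold(1.0);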

However, I am not sure this will improve your run time: even with these settings, you will still compute the function once every 5s, and this alone may already be computationally intensive.

#5

Yes I think you said what I meant.

With a max check of 5s, I am actually perfectly happy to have a threshold of 5s (it would be pointless to set it to more than 5s). Where I could probably gain computation time is by choosing another value for maxCheck.