Strange behavior testing code based on Orekit

I’d like to report a strange behavior I noticed using Orekit.
I’m using a TLEPropagator and an ElevationDetector. I have some JUnit tests, and when I run them from IntelliJ I obtain slightly different results depending on whether I run the whole test class together or each test one by one.
The differences are very small (insignificant), but I prefer to report them in order to understand whether I’m doing something wrong or whether there is some other problem.

For example, I have this old TLE (from when I wrote these tests):

1 25544U 98067A 17081.54501480 .00016717 00000-0 10270-3 0 9023
2 25544 51.6402 110.1919 0007360 324.9821 35.0847 15.54250968 8276

Observer place is:
longitude: 71.9329
latitude: -8.1161

If I run this test alone, I obtain the first increasing event, with constant elevation value “10”, at 2017-03-22T14:33:05.13485441092979Z.
If I run it together with the others (so it is not the first test executed), I obtain: 2017-03-22T14:33:05.13485441093178Z.

It seems to depend on whether another test ran first, because the first test executed always gives the expected result.
I understand the differences are very, very small and insignificant, but I don’t understand where they come from; I would expect the result to be deterministic and not to depend on the other tests.

I’m using Java 8 and JUnit 4 on Linux.

If needed I can post a code snippet.

Thanks for your attention.

This is most likely due to the caching + interpolation feature used in some frame conversions.

Conversion between inertial and terrestrial frames involves heavy computation of nutation effects, with several thousand Poisson series terms. These effects have periods ranging from about 5 days for the shortest one to 18 years for the longest one, if I remember correctly. This means the heavy computations can be dramatically reduced by evaluating the full terms only on a sampling grid and interpolating between grid points. As an example, the transform between GCRF and CIRF evaluates the full Poisson series only once per hour, even if you request the transform every millisecond.

The number of points used to establish the underlying polynomial model, as well as the time step between grid points, have been carefully studied and adjusted for each frame transform that uses this feature, so the interpolation error is several orders of magnitude lower than the accuracy of the physical models. There are unit tests that check this near the error peak (which is due to lunar effects); the test tolerances are that without EOP the maximum angular error is below 4.6e-12 radians, and with EOP it is below 1.3e-13 radians, so these interpolation errors are acceptable.

The way the caching works is that the first time you request a transform at date t0, as the grid is empty, several points around t0 are computed to build the first grid (basically between t0 - 3 hours and t0 + 3 hours) and cached. Then, as you ask for more dates, the same grid is reused as long as your dates remain near the center of the grid, and new points are added to the grid when your propagation requires them, globally computing just one new grid point for each hour of propagation. Older points are preserved in case you need them again (the grid for CIRF keeps up to one month’s worth of sampling points).
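To make the idea concrete, here is a toy sketch of the caching scheme. This is not Orekit’s actual code: the sine function is just a stand-in for the expensive Poisson series evaluation, and I use plain linear interpolation instead of the real polynomial model.

```java
import java.util.TreeMap;

/** Toy model of a transform cache: the "full" computation runs only at
 *  hourly grid points; queries in between are linearly interpolated. */
class GridCache {

    static final double STEP = 3600.0; // one grid point per hour, in seconds
    final TreeMap<Double, Double> grid = new TreeMap<>();
    int fullComputations = 0;

    /** Placeholder for the expensive Poisson-series evaluation. */
    double fullModel(double t) {
        fullComputations++;
        return Math.sin(2 * Math.PI * t / (5 * 86400.0)); // ~5-day period term
    }

    double value(double t) {
        double t0 = Math.floor(t / STEP) * STEP;  // grid point at or before t
        double t1 = t0 + STEP;                    // grid point after t
        double y0 = grid.computeIfAbsent(t0, this::fullModel);
        double y1 = grid.computeIfAbsent(t1, this::fullModel);
        return y0 + (y1 - y0) * (t - t0) / STEP;  // linear interpolation
    }

    public static void main(String[] args) {
        GridCache cache = new GridCache();
        // query every second over 3 hours: 10800 queries...
        for (double t = 0; t < 3 * 3600; t += 1.0) {
            cache.value(t);
        }
        // ...but only 4 full evaluations (grid points at 0h, 1h, 2h, 3h)
        System.out.println(cache.fullComputations);
    }
}
```

The point is the ratio: thousands of requests, a handful of full evaluations.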

One consequence of this implementation is that if your first call is at t0 you will get one grid, but if your first call is at t0 + k hours + Δt, you will get a different grid, shifted by Δt, and therefore very slightly different results, theoretically up to the interpolation error (which, as explained above, is of the order of 4.6e-12 radians). This is probably what you observe: depending on which test runs first, you get one grid or the other, and the results are off by a tiny bit.
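Here is a toy illustration of that grid-shift effect (again, not Orekit’s code; fullModel is a placeholder for the real series): two grids whose origins differ by Δt give slightly different interpolated values at the very same date.

```java
/** Toy illustration: the grid origin is set by the first query, so two runs
 *  whose first query differs by some offset produce slightly different
 *  interpolations at the same date. */
class ShiftedGrid {

    static final double STEP = 3600.0;
    final double origin;                      // first query time anchors the grid

    ShiftedGrid(double firstQuery) { this.origin = firstQuery; }

    /** Stand-in for the expensive full model (~5-day period term). */
    static double fullModel(double t) {
        return Math.sin(2 * Math.PI * t / (5 * 86400.0));
    }

    /** Linear interpolation on this instance's grid. */
    double value(double t) {
        double k  = Math.floor((t - origin) / STEP);
        double t0 = origin + k * STEP;
        double t1 = t0 + STEP;
        double y0 = fullModel(t0);
        double y1 = fullModel(t1);
        return y0 + (y1 - y0) * (t - t0) / STEP;
    }

    public static void main(String[] args) {
        double t = 10 * 3600.0;                 // the date both "tests" evaluate
        double alone   = new ShiftedGrid(t).value(t);           // grid starts at t
        double shifted = new ShiftedGrid(t - 1234.5).value(t);  // grid shifted
        System.out.println(alone - shifted);    // tiny but nonzero difference
    }
}
```

With this crude linear model the difference is far larger than Orekit’s real interpolation error, but the mechanism is the same: same date, different grid origin, slightly different answer.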

In the Orekit tests, we ensure test results are independent of test run order by clearing the cache at the start of each test. This is done using an internal hack based on Java reflection: see the Utils.clearFactories() method in the tests source directory (in the Orekit tests, we actually call Utils.setDataRoot(), which itself calls Utils.clearFactories()).
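The reflection trick itself looks roughly like this. This is a self-contained sketch with a hypothetical SomeFactory class, not the actual Utils code, but the mechanism (open a private static field and reset it) is the same.

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

/** Hypothetical factory with a private static cache, standing in for the
 *  Orekit factories that Utils.clearFactories() resets. */
class SomeFactory {
    private static final Map<String, Object> CACHE = new HashMap<>();
    static Object get(String key) {
        return CACHE.computeIfAbsent(key, k -> new Object());
    }
    static int size() { return CACHE.size(); }
}

class CacheReset {

    /** Clear the private static cache via reflection, as a @Before method could. */
    static void clearFactory() throws ReflectiveOperationException {
        Field cache = SomeFactory.class.getDeclaredField("CACHE");
        cache.setAccessible(true);                // bypass private access
        ((Map<?, ?>) cache.get(null)).clear();    // static field: null receiver
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        SomeFactory.get("GCRF");
        SomeFactory.get("CIRF");
        System.out.println(SomeFactory.size());   // 2
        clearFactory();
        System.out.println(SomeFactory.size());   // 0
    }
}
```

Calling something like this from a @Before method is what makes each test start from an empty grid.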

We have thought about another way to avoid this: enforcing grid points to fall exactly on round hours (for grids sampled hourly), in the TAI time scale for example, instead of using the first t0 as the first grid point. We never really implemented it, but it may be worth adding, as this problem has already hit several people. You can add a feature request on our issue tracker so we implement it.

Anyway, as you noticed, the error is very small; it is an interpolation error and is, by design, several orders of magnitude below the accuracy of the precession/nutation models. So it is physically acceptable, but it induces non-regression problems if tests do not ensure they start with an empty cache.

Great explanation Luc!

As of Orekit 10.1 there is another, officially supported way to make your tests and production code deterministic: don’t use the default data context; instead, create a new, separate DataContext for the code that must be deterministic.

As Luc pointed out, any time you write code like FramesFactory.getTEME() you’re accessing a mutable static variable. Using a mutable static means your program can produce different results depending on the order in which different parts of it execute, which is exactly what you’ve noticed. When you create and manage your own instances of DataContext, you get rid of the mutable static and stay in control of when data is shared and when it isn’t.
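A toy contrast may help, purely illustrative (MyContext stands in for the role DataContext plays; neither class here is Orekit code): state set by the first caller of a static method makes later results order-dependent, while state owned by an instance you create does not.

```java
/** State owned by an instance the caller creates: order-independent. */
class MyContext {
    private final long origin;
    MyContext(long origin) { this.origin = origin; }
    long transform(long t) { return t - origin; }
}

/** Mutable static state, initialized by whoever calls first. */
class StaticStyle {
    private static Long origin = null;
    static long transform(long t) {
        if (origin == null) {
            origin = t;       // result now depends on who called first
        }
        return t - origin;
    }
}

class ContextDemo {
    public static void main(String[] args) {
        // static style: an earlier "test" poisons the result of a later one
        StaticStyle.transform(100);                            // another test runs first
        System.out.println(StaticStyle.transform(500));        // 400, not 500

        // instance style: each test builds its own context, order is irrelevant
        System.out.println(new MyContext(0).transform(500));   // 500
    }
}
```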

If you don’t want to use the default data context, Orekit has a couple of utilities to help you do that. First, in your unit tests, before anything else happens, call DataContext.setDefault(new ExceptionalDataContext());. Then an exception will be thrown any time the default data context is used, so you can go and fix the offending code. Second, Orekit provides a compiler plugin that performs a static analysis of your code. You can activate it by passing the option -Xplugin:dataContextPlugin to the compiler. If you’re using Maven, something like:

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <showWarnings>true</showWarnings>
          <compilerArgs>
            <arg>-Xlint:unchecked</arg>
            <arg>-Xlint:deprecation</arg>
            <arg>-Xplugin:dataContextPlugin</arg>
          </compilerArgs>
        </configuration>
      </plugin>

That will emit a compiler warning any time code uses the default data context while the calling code is not annotated with @DefaultDataContext, very similar to how @Deprecated works.

Thanks for your great and complete answers!