Created SphericalPolygonsSet gives NullPointerException with FootprintOverlapDetector

Hello,
my objective is to detect when the field of view enters and exits the main continents.

So I have a list of polygons which I create from a GeoJSON file; I try to union them into a single zone and then use a FootprintOverlapDetector on it, but I cannot make it work.

I’ve seen on another topic that this doesn’t work well with a lot of data points, so there are only 348 here.

I’m getting a NullPointerException on the creation of the FootprintOverlapDetector. I’ve tried to create the polygons in different ways but it always results in the same issue. I can say for sure that with a single SphericalPolygonsSet (4 points) I have no issue.


Frame   earthFrame = FramesFactory.getITRF(IERSConventions.IERS_2010, true);
OneAxisEllipsoid  earth = new OneAxisEllipsoid(
              WGS84_EARTH_EQUATORIAL_RADIUS,
              WGS84_EARTH_FLATTENING,
              earthFrame);
Frame satelliteFrame = FramesFactory.getGCRF();
AttitudeProvider pointing = new NadirPointing(satelliteFrame, earth);
SimpleDateFormat sdf =
        new SimpleDateFormat("dd-MM-yyyy HH:mm:ss.SSS Z");
Date dateSatelliteInitialDate = sdf.parse("01-01-2021 00:00:00.000 UTC");
AbsoluteDate satelliteInitialDate =
        new AbsoluteDate(dateSatelliteInitialDate, TimeScalesFactory.getUTC());

Orbit initialOrbit = new KeplerianOrbit(
        EIGEN5C_EARTH_EQUATORIAL_RADIUS + 0.5 * (800e3), // 800e3 = 2*perigee
        (400.1e3 - 400e3) / (400.1e3 + 400e3), // (apogee - perigee) / (apogee + perigee)
        FastMath.toRadians(90),
        FastMath.toRadians(0),
        FastMath.toRadians(0),
        FastMath.toRadians(0),
        PositionAngle.TRUE,
        satelliteFrame,
        satelliteInitialDate,
        EIGEN5C_EARTH_MU
    );

Propagator propagator = new EcksteinHechlerPropagator(
        initialOrbit,
        pointing,
        25.0,
        EIGEN5C_EARTH_EQUATORIAL_RADIUS,
        EIGEN5C_EARTH_MU,
        EIGEN5C_EARTH_C20,
        EIGEN5C_EARTH_C30,
        EIGEN5C_EARTH_C40,
        EIGEN5C_EARTH_C50,
        EIGEN5C_EARTH_C60
    );

// these values are computed beforehand
DoubleDihedraFieldOfView imagerFieldOfView = new DoubleDihedraFieldOfView(Vector3D.PLUS_K, Vector3D.PLUS_I, 0.02742857132361516, Vector3D.PLUS_J, 0.03428571415451895, 0);
double sampling_step = 100000.0;
double hyperplaneThickness = 1.0e-10;

FeatureCollection featureCollection = new ObjectMapper().readValue(getFileFromResourceAsStream("small_continents.geojson"), FeatureCollection.class);
SphericalPolygonsSet sphericalPolygonsSet = null;
List<GeodeticPoint> geodeticPoints = new ArrayList<>();
for (Feature feature : featureCollection.getFeatures()) {
    if (feature.getGeometry() instanceof Polygon) {
        for (List<LngLatAlt> coordinates : ((Polygon) feature.getGeometry()).getCoordinates()) {
            for (LngLatAlt lngLatAlt : coordinates) {
                geodeticPoints.add(new GeodeticPoint(FastMath.toRadians(lngLatAlt.getLatitude()), FastMath.toRadians(lngLatAlt.getLongitude()), 0));
            }
        }
        GeodeticPoint[] geodeticPointArray = geodeticPoints.toArray(new GeodeticPoint[]{});
        if (sphericalPolygonsSet == null) {
            sphericalPolygonsSet = EllipsoidTessellator.buildSimpleZone(hyperplaneThickness, geodeticPointArray);
        } else {
            sphericalPolygonsSet = (SphericalPolygonsSet) new RegionFactory<Sphere2D>().union(sphericalPolygonsSet, EllipsoidTessellator.buildSimpleZone(hyperplaneThickness, geodeticPointArray));
        }
    }
}

FootprintOverlapDetector targetLandDetector = new FootprintOverlapDetector(imagerFieldOfView, earth, sphericalPolygonsSet, sampling_step).withHandler((s, detector, increasing) -> {
    System.out.println("hello");
    return Action.CONTINUE;
});
propagator.addEventDetector(targetLandDetector);

small_continents.geojson (61.1 KB)

I am not sure, but I think your data file has some self-intersecting boundary problems. If I upload it to an online GeoJSON viewer and look around Alaska, there is a self-intersection near the south coast of Alaska. You could try to fix this and see if it works better.

Another point is that in your specific use case, I would not recommend using a single detector with all land masses represented as one polygon set; I would rather set up a few detectors, each one covering land masses in the same area. You can still have several islands in one zone (like for example all islands in the Arctic Ocean north of Canada), but having a single polygon that spreads all over the Earth will be more computationally intensive. The reason for this is that FootprintOverlapDetector was designed with small areas in mind (say one country, or perhaps one continent like Australia).

As checking the crossing of the boundary is costly, the detection is done in two steps. When the zone of interest is built, the smallest spherical cap that completely surrounds it is computed, and its center and radius are saved. We know by construction that all points in the zone of interest are inside the spherical cap, and hence we can quickly discard many points without checking the accurate geometry. We only check the accurate geometry when the test point (i.e. the sub-satellite point) is inside the spherical cap: there we need to do the full computation. If we have a zone of interest that extends in all directions, the smallest enclosing cap is in fact very large, and we don’t save any time: most of the points lie within this big cap and we have to perform the full computation all the time.
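
For illustration, here is a minimal sketch of that two-step test, using the enclosing cap Hipparchus can compute for a SphericalPolygonsSet (the names zone and gp are just placeholders for the zone of interest and the sub-satellite geodetic point, this is not the actual detector code):

EnclosingBall<Sphere2D, S2Point> cap = zone.getEnclosingCap();
S2Point subSatellite = new S2Point(gp.getLongitude(), 0.5 * FastMath.PI - gp.getLatitude());
if (subSatellite.distance(cap.getCenter()) > cap.getRadius()) {
    // cheap rejection: the point cannot possibly be inside the zone, skip the accurate check
} else {
    // full (expensive) check against the accurate geometry of the zone
    Region.Location location = zone.checkPoint(subSatellite);
}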

So if you split your zone into several detectors, each one will have a small enclosing cap, and when your satellite is for example above Africa, the detector for Greenland will find very quickly (one dot product and one angle check) that the satellite cannot possibly be above Greenland, and will not waste time computing the accurate geometry.

You may even split contiguous land masses (say Africa and Eurasia, even though they share a boundary at the Suez Canal) into separate zones and separate detectors, but in this case you may get some small glitches, with the satellite leaving one zone and entering the second one and therefore producing a spurious pair of exit/entry events, even if you combine all your detectors with BooleanDetector.orCombine.
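
As a rough sketch of what this could look like (assuming you already have one SphericalPolygonsSet per area in a hypothetical zones list, and reusing the field of view, Earth model, sampling step and handler logic from your snippet):

List<FootprintOverlapDetector> detectors = new ArrayList<>();
for (SphericalPolygonsSet zone : zones) {
    detectors.add(new FootprintOverlapDetector(imagerFieldOfView, earth, zone, sampling_step));
}
// OR-combine them so a single detector triggers on entry/exit of the union of all zones
BooleanDetector anyLandDetector = BooleanDetector.orCombine(detectors)
        .withHandler((s, detector, increasing) -> {
            System.out.println((increasing ? "start" : "stop") + " seeing land at " + s.getDate());
            return Action.CONTINUE;
        });
propagator.addEventDetector(anyLandDetector);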

Right, I was using another viewing tool which didn’t show that. I’ve updated the file to remove this intersection but I still get a NullPointerException.
small_continents.geojson (64.0 KB)

I’ve also split the GeoJSON into multiple files (one per continent): I don’t get this NullPointerException when I use the data from only one continent, but I do get a NullPointerException after combining some of them.

So I guess a working solution would be, like you said, to have detectors for each continent and then combine them with BooleanDetector.orCombine, right?

Yes, and it would be more efficient too.

So I’m moving on from the NullPointerException to a new issue: I get really different results when I change the start date of the propagation.

When starting at 2020-12-31 22:30:00 UTC, I get "start seeing" at 2020-12-31T22:39:99.983 and it stops at 2021-01-01T15:57:23.809, which doesn’t make sense, but let’s go with it.
When starting at 2020-12-31 23:30:00 UTC, I get "start seeing" at 2021-01-01T01:19:01.389 and it doesn’t stop until the end of the propagation.
How is that possible?

I run the same code, see below for more information.

DoubleDihedraFieldOfView imagerFieldOfViewArray = new DoubleDihedraFieldOfView(Vector3D.PLUS_K, Vector3D.PLUS_I, 0.012928572552831369, Vector3D.PLUS_J, 0.01616020925570368, 0);
List<FootprintOverlapDetector> listOfDetectors = new ArrayList<>();
SphericalPolygonsSet sphericalPolygonsSet = null;
FeatureCollection featureCollection = new ObjectMapper().readValue(getFileFromResourceAsStream("americas.geojson"), FeatureCollection.class);
// loop over all continents in the file
for (Feature feature : featureCollection.getFeatures()) {
    if (feature.getGeometry() instanceof Polygon) {
        // collect all vertices of the current continent
        List<GeodeticPoint> geodeticPoints = new ArrayList<>();
        for (List<LngLatAlt> coordinates : ((Polygon) feature.getGeometry()).getCoordinates()) {
            for (LngLatAlt lngLatAlt : coordinates) {
                geodeticPoints.add(new GeodeticPoint(FastMath.toRadians(lngLatAlt.getLatitude()), FastMath.toRadians(lngLatAlt.getLongitude()), 0));
            }
        }
        GeodeticPoint[] geodeticPointArray = geodeticPoints.toArray(new GeodeticPoint[]{});
        sphericalPolygonsSet = EllipsoidTessellator.buildSimpleZone(hyperplaneThickness, geodeticPointArray);
    }
}
FootprintOverlapDetector targetLandDetector = new FootprintOverlapDetector(imagerFieldOfViewArray, earth, sphericalPolygonsSet, sampling_step).withHandler((s, detector, increasing) -> {
    targetLandDetectorInfo.put(s.getDate(), (increasing ? "Start seeing land at " : "Stops seeing land at "));
    return Action.CONTINUE;
});
propagator.addEventDetector(targetLandDetector);

americas.geojson (18.4 KB)

Could you do the following:

  1. not add the last point of each GeoJSON polygon (I think GeoJSON mandates that the first and last points are identical so the loop is closed, whereas buildSimpleZone closes the area by itself)
  2. check that the polygons are all in counterclockwise direction; this can be done by printing sphericalPolygonsSet.getSize(): if it is greater than 2π steradians, it is more than one half of the Earth surface, so you may be dealing with the complement of the polygon you want (see the sketch after this list)
  3. plot the sub-satellite points, highlighting the ones at event occurrences, to see if this makes sense
  4. provide a self-contained complete test case (with orbit and everything) so we can run it on our side
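
For point 2, a small sketch of what I mean (reusing your hyperplaneThickness and geodeticPointArray variables; reversing the array is just one simple way to flip the orientation):

SphericalPolygonsSet zone = EllipsoidTessellator.buildSimpleZone(hyperplaneThickness, geodeticPointArray);
System.out.println("zone size: " + zone.getSize() + " sr");
if (zone.getSize() > 2 * FastMath.PI) {
    // more than half of the sphere: the vertices were probably in clockwise order,
    // so rebuild the zone with the vertices reversed
    Collections.reverse(Arrays.asList(geodeticPointArray));
    zone = EllipsoidTessellator.buildSimpleZone(hyperplaneThickness, geodeticPointArray);
}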

  1. I’ve removed the last point, without any change.
  2. The size is ~0.9 steradians, so it’s what we’re expecting, right?
  3. I checked and it doesn’t make sense.
  4. See the attached file:
     testCase.zip (16.9 KB)

Thanks for the additional information.

OK, at least the geometry code did not choke with either configuration.

Yes, the region is correctly modeled.

I had a look at it and think I found the culprit.

First, two unrelated remarks on your code.

The first remark is that the expression for the eccentricity is wrong. The denominator should be perigee_radius + apogee_radius, not perigee_altitude + apogee_altitude: you are missing twice the Earth radius, and your eccentricity is several orders of magnitude too large.
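
In other words (a quick sketch using the figures from your snippet, with EIGEN5C_EARTH_EQUATORIAL_RADIUS standing in for the Earth radius):

double perigeeRadius = EIGEN5C_EARTH_EQUATORIAL_RADIUS + 400e3;   // radius = Earth radius + altitude
double apogeeRadius  = EIGEN5C_EARTH_EQUATORIAL_RADIUS + 400.1e3;
double a = 0.5 * (perigeeRadius + apogeeRadius);
double e = (apogeeRadius - perigeeRadius) / (apogeeRadius + perigeeRadius); // ≈ 7.4e-6, not ≈ 1.25e-4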

The second remark is that you should avoid using an explicit loop for propagation. We strongly recommend using the master propagation mode (see https://www.orekit.org/site-orekit-development/architecture/propagation.html#Propagation_modes). This means the content of your current loop should go into a handleStep method of a class implementing OrekitFixedStepHandler, registered as a step handler to the propagator. Then the propagator should be launched just once (no loop) with a target date set to the end of your time range; the propagator will take care of the loop by itself. This is much more efficient than looping by yourself, as the propagation start-up overhead occurs only once and as the propagator may choose larger steps than the fixed output step used by the handler (this last point is only meaningful for numerical propagation, so in your case it does not happen). The stability of the event detectors is also much better with one smooth sweep rather than restarting everything.
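
As a minimal sketch (assuming the Orekit 10.x API used in your snippets; more recent versions register step handlers through the propagator's step handler multiplexer instead):

// register a fixed step handler instead of looping yourself
propagator.setMasterMode(60.0, (currentState, isLast) -> {
    // put the body of your former loop here, e.g. store the sub-satellite point
});
// a single propagate call; event detectors and the step handler are called along the way
propagator.propagate(satelliteInitialDate.shiftedBy(7 * Constants.JULIAN_DAY));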

Now let’s analyze the core of the problem.

You have a very tiny field of view (about 10 kilometers on ground) and a very large area of interest (all Earth land masses). FootprintOverlapDetector was designed for a small area of interest (country size) and a medium-size field of view. The detector implies computing how two shapes defined by irregular generatrices (the boundaries) intersect on the surface of the ellipsoid. One is the zone of interest: if the Earth were spherical, its generatrix would come out of the Earth center. The other one is the field of view, whose generatrix comes from the instrument focal point, so the two centers are different. This computation is difficult in the general case. On the other hand, one thing we can do efficiently with Binary Space Partitioning trees (which is how regions are modeled in Hipparchus) is to check whether one single point is inside, outside, or on the boundary of an arbitrary region.

So given our design use case (small area of interest on ground), we implemented FootprintOverlapDetector so that it samples the zone of interest using a user-provided step size. This means the area is rather managed as a big collection of 3D points. This collection is obtained by sampling the boundary itself, and sampling the interior using EllipsoidTessellator to create a grid. Then the detector just tests each point in the collection with respect to the instrument field of view.

One drawback of this design choice is that the sampling step of the area of interest should be smaller than the instrument field of view, so that when you are pointing at a place in the area of interest, at least one of its sampling points lies within the field of view. This is clearly not explained in the javadoc, so it is normal that you did not identify this drawback.

In your case, as your area of interest is so large, you use a sampling step of 100 km, which would be fine for a field of view covering an area of about 200 km × 200 km, but is clearly not suited for a field of view about 10 km wide. In fact, the footprint may well pass between sampling points, since they are so widely separated with respect to the size of the field of view.
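
As a back-of-the-envelope check (my own figures, assuming the roughly 400 km altitude of your orbit): the ground width of a dihedral field of view with half-aperture α is about 2 h tan(α), i.e. 2 × 400 km × tan(0.013) ≈ 10 km, so the 100 km sampling grid is about ten times too coarse for this footprint.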

You could of course reduce the sampling step to below the size of your field of view, but given the sheer size of your area of interest and the tiny field of view, this would amount to millions of sampling points. With a 100 km grid, you already have about 4600 sampling points on the Americas alone. Using a sampling step of 2 km in each direction would multiply this by the square of 100/2, i.e. 11.5 million points! So this theoretical solution is not practical.

Another solution would be to design a different detector, with the reversed design choice: as your field of view is small, it is the field of view that should be sampled, not the zone of interest. In fact, this is quite similar to what is done in the Rugged geolocation library, which is based on Orekit and used operationally in the image processing facility of the Sentinel-2 mission. It processes billions of pixels each day. The computation performed in Rugged is, for each pixel (about 30000 pixels in the push-broom, sampled about every millisecond), to compute the intersection with a digital elevation model, so this is already far more advanced and computationally intensive than just checking whether the field of view sees a land mass or not.

So I would suggest creating a new detector, based on field of view sampling. It would even be possible not to sample all pixels in the instrument but something like one pixel every 5 rows and 5 columns, which would correspond on ground to a grid with a step size of 0.1 km (much finer than the previous one) covering only a 10 km × 12 km area, so only 102 × 128 = 13056 points would be checked. We could also use the trick of the smallest enclosing cap to avoid checking the points when we are over an ocean, far from land.
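
Just to illustrate the idea, here is a rough sketch of the reversed approach (this is not an existing Orekit detector; fov, earth, zone and state stand for the field of view, the Earth model, the zone of interest and the current SpacecraftState, and only the footprint boundary is sampled here, via FieldOfView.getFootprint):

// transform from spacecraft frame to Earth frame at the current date
Transform inertToBody = state.getFrame().getTransformTo(earth.getBodyFrame(), state.getDate());
Transform fovToBody   = new Transform(state.getDate(),
                                      state.toTransform().getInverse(),
                                      inertToBody);
boolean seesLand = false;
for (List<GeodeticPoint> loop : fov.getFootprint(fovToBody, earth, FastMath.toRadians(0.1))) {
    for (GeodeticPoint gp : loop) {
        // check the ground sample against the land zone
        S2Point sample = new S2Point(gp.getLongitude(), 0.5 * FastMath.PI - gp.getLatitude());
        if (zone.checkPoint(sample) != Region.Location.OUTSIDE) {
            seesLand = true;
        }
    }
}

A real detector would of course need a continuous g function rather than a boolean (for example based on the angular distance from the samples to the zone boundary), but the sketch shows the direction of the computation, and the enclosing cap trick could be applied before this loop.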

This would be both much more accurate than the current method and also much faster.
The more I think of it, the more I think we should even ditch the current implementation, as its design choice does not really make sense anymore. It is a pity that I did it this way, and even embarrassing, since I designed and developed Rugged a couple of years before I designed and developed FootprintOverlapDetector :flushed:

So I suggest that you open a bug report asking to revert the design choice for the existing FootprintOverlapDetector.

Do you think you could do the reimplementation by yourself and contribute it? Beware, it probably requires some experience with the way Orekit/Hipparchus handle spherical geometry.


First of all, thank you for your answer.

  • good catch on the two unrelated issues, I’ve fixed them on my side
  • I will open a bug report asking to revert the FootprintOverlapDetector design choice
  • I really don’t have much experience with it. I’m spending some time looking at how Rugged and FootprintOverlapDetector work, but I think I’m reaching the limit of my knowledge here. So maybe at some point you can expect a contribution from me, but probably not right now

Following the discussion we had, I have some remaining questions:

  • the sampling step given to the FootprintOverlapDetector, does it correspond to the minimum distance between 2 points?
  • I’ve tried with new CircularFieldOfView(Vector3D.PLUS_K, Math.toRadians(30), 0) and a sampling step of 10_000, but I still get changing results depending on which start time I select. Is it still the same problem?

How much do the results change?
Are these results at least realistic or completely random?

I forgot to answer this, sorry.

The sampling step is the size of the side of the square cells forming a grid. The cells are built starting from a seed point guaranteed to be strictly within the region of interest, but it could be almost anywhere in the region. From this seed point, the grid is built by successive expansion along neighbors, until the full region of interest is covered. There is some complex logic to handle zones that are not path-connected and zones that contain holes, but basically it is an expansion starting from a single point. As the sphere and the ellipsoid are not developable surfaces, some deformation occurs when building the grid: the farther grid points are from the seed point, the larger the deformation. This is another reason why using a big zone of interest with tessellation (which is what the current FootprintOverlapDetector does) does not work well.

Sorry for the delay in answering.
So the event is jumping from approximately 10pm to 11pm from one run to another when I change the date range, so it feels random. It also detects land where it didn’t with the other start time.
Thanks for the explanation of the second point.

I’ve tried to focus on a small area and a big field of view and I still get incoherent results.
Is there a reason why this doesn’t work as planned?
testCase.zip (22.3 KB)