Hey Folks,

I have a field of view for a ground telescope defined by four vertices forming a quadrilateral whose edges may or may not be parallel. I think PolygonalFieldOfView is the right class to define the field of view, but I’m struggling to get it to work correctly. My guess is that I’m configuring something incorrectly, so I’ll show what I’m doing and ask for help in understanding where I’ve gone wrong.

First I’m creating an S2Point array using the az/el location of each vertex, where the final vertex is the same as the initial vertex:

S2Point[] pointArr = new S2Point[5];
pointArr[0] = SensorUtils.azElToS2Point(lowerLeftAzLimit, lowerLeftElLimit);
pointArr[1] = SensorUtils.azElToS2Point(upperLeftAzLimit, upperLeftElLimit);
pointArr[2] = SensorUtils.azElToS2Point(upperRightAzLimit, upperRightElLimit);
pointArr[3] = SensorUtils.azElToS2Point(lowerRightAzLimit, lowerRightElLimit);
pointArr[4] = SensorUtils.azElToS2Point(lowerLeftAzLimit, lowerLeftElLimit);
SphericalPolygonsSet sps = new SphericalPolygonsSet(1e-9, pointArr);
PolygonalFieldOfView polyFov = new PolygonalFieldOfView(sps, 1e-9);

where the azElToS2Point method is defined here:

public static S2Point azElToS2Point(double azDegs, double elDegs){
    double azRads = Math.toRadians(azDegs);
    // S2Point expects the polar angle (colatitude), so convert elevation to 90 - el
    double elRads = Math.toRadians(90.0 - elDegs);
    return new S2Point(azRads, elRads);
}

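For reference on what this helper relies on: Hipparchus’s S2Point takes (theta, phi), where theta is the azimuthal angle and phi is the polar angle measured down from +z, so elevation has to be converted to colatitude as above. A minimal, dependency-free sketch of the same mapping (the class and method names here are my own, just for illustration):

```java
public class AzElConversion {

    /** Elevation in degrees to polar angle (colatitude) in radians:
     *  zenith (el = 90) maps to 0, horizon (el = 0) maps to pi/2. */
    public static double elevationToPolarAngle(double elDegs) {
        return Math.toRadians(90.0 - elDegs);
    }

    /** Spherical (theta, phi) to a Cartesian unit vector, using the
     *  convention that phi is measured from the +z axis. */
    public static double[] toUnitVector(double theta, double phi) {
        return new double[] {
            Math.cos(theta) * Math.sin(phi),
            Math.sin(theta) * Math.sin(phi),
            Math.cos(phi)
        };
    }
}
```

With this convention, elevation 90 degrees gives phi = 0, which maps to the (0, 0, 1) unit vector, i.e. straight up.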
Then I add the FOV into a GroundFieldOfViewDetector like this, where the location is the TopocentricFrame based on the sensor GeodeticPoint:

GroundFieldOfViewDetector fovDet = new GroundFieldOfViewDetector(location, polyFov)
                    .withHandler((s, d, increasing) -> {
                        return Action.CONTINUE;
                    });

From here, I attach the detector to my propagator and monitor for detections. It produces detections, but they are not correct: it identifies windows where the actual pointing angles from the sensor to the satellite are well outside the defined polygon vertices/edges (by 10-20 degrees or more). A pseudocode example with vertices defined as v = (azimuth, elevation) would be:

v1 = (174.4, 43.3)   // lower left
v2 = (172.9, 52.2)   // upper left
v3 = (192.3, 43.3)   // upper right
v4 = (190.7, 43.2)   // lower right
v5 = (174.4, 43.3)   // lower left (closing vertex)

actual measurement = (221.5, 40.3)

The results of this run show the actual measurement to be in the field of view when it shouldn’t be according to the definition. So my question is, am I misunderstanding the polygonal field of view or not defining it correctly?

Any help you can provide would be appreciated,


Quick update: I set up a test to check whether the polygon set was being configured correctly, and by using the checkPoint method I was able to confirm that the polygon set is being assembled correctly. That must mean something else is causing the issue… still digging on this.
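For anyone who wants to reproduce that kind of test without standing up the full stack: for a convex quadrilateral like this one, an inside/outside decision similar to checkPoint’s can be mimicked with plain vector math, by checking that the candidate point lies on the left of every boundary edge. A rough, dependency-free sketch (helper names are my own, assuming an East-North-Up frame with azimuth measured clockwise from north, and vertices listed counterclockwise as seen from outside the sphere):

```java
public class ConvexSphericalPolygon {

    /** Az/el in degrees to an East-North-Up unit vector (az clockwise from north). */
    static double[] azElToUnit(double azDeg, double elDeg) {
        double az = Math.toRadians(azDeg);
        double el = Math.toRadians(elDeg);
        return new double[] {
            Math.cos(el) * Math.sin(az),  // east
            Math.cos(el) * Math.cos(az),  // north
            Math.sin(el)                  // up
        };
    }

    /** True if p lies inside the convex spherical polygon whose vertices
     *  are listed counterclockwise: the interior is on the left of each
     *  edge a -> b, i.e. p . (a x b) >= 0 for every edge. */
    static boolean contains(double[][] vertices, double[] p) {
        for (int i = 0; i < vertices.length; i++) {
            double[] a = vertices[i];
            double[] b = vertices[(i + 1) % vertices.length];
            double[] cross = {
                a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]
            };
            if (p[0] * cross[0] + p[1] * cross[1] + p[2] * cross[2] < 0) {
                return false;  // p is on the right of this edge: outside
            }
        }
        return true;
    }
}
```

This only handles the convex case; the Hipparchus BSP-tree implementation is the real thing for general regions, but a check like this is handy for cross-validating a handful of test points.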

Are you sure your boundary is oriented correctly?
It seems to me you are defining a field of view that is the full sphere minus the part that is interesting to you.

Boundaries on regions are defined such that the interior is on the left side and exterior is on the right side when you travel around the boundary.
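For a small polygon like this one, the winding can be sanity-checked numerically before building the SphericalPolygonsSet: sum, over the edges, the triple product of the polygon’s normalized centroid with each edge’s cross product. A positive sum means the centroid sits on the left of the edges, i.e. the vertex order encloses it as the interior; a negative sum means the order is reversed. A rough, dependency-free sketch (helper names are mine, assuming an East-North-Up frame with azimuth measured clockwise from north):

```java
public class WindingCheck {

    /** Az/el in degrees to an East-North-Up unit vector (az clockwise from north). */
    static double[] azElToUnit(double azDeg, double elDeg) {
        double az = Math.toRadians(azDeg);
        double el = Math.toRadians(elDeg);
        return new double[] {
            Math.cos(el) * Math.sin(az),  // east
            Math.cos(el) * Math.cos(az),  // north
            Math.sin(el)                  // up
        };
    }

    /** Sum over edges of centroid . (a x b). Positive: the vertex order
     *  runs counterclockwise around the centroid as seen from outside
     *  the sphere, so the centroid region is treated as the interior. */
    static double windingSum(double[][] vertices) {
        double[] c = new double[3];
        for (double[] v : vertices) {
            c[0] += v[0]; c[1] += v[1]; c[2] += v[2];
        }
        double norm = Math.sqrt(c[0] * c[0] + c[1] * c[1] + c[2] * c[2]);
        c[0] /= norm; c[1] /= norm; c[2] /= norm;
        double sum = 0;
        for (int i = 0; i < vertices.length; i++) {
            double[] a = vertices[i];
            double[] b = vertices[(i + 1) % vertices.length];
            sum += c[0] * (a[1] * b[2] - a[2] * b[1])
                 + c[1] * (a[2] * b[0] - a[0] * b[2])
                 + c[2] * (a[0] * b[1] - a[1] * b[0]);
        }
        return sum;
    }
}
```

If the sum comes out negative, reversing the vertex array before passing it to the SphericalPolygonsSet constructor fixes the orientation.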

Great comment! I didn’t realize the order was important; once I fixed that, I was able to get the SphericalPolygonsSet working as expected. However, I’m still not getting the correct result when plugging it into a GroundFieldOfViewDetector as a PolygonalFieldOfView: now I’m getting no visibility windows at all, so I’m working on debugging that part. Thank you so much for the suggestion though, it got me through to the next issue!