Hey Folks,
I have a field of view for a ground telescope defined by four vertices forming a quadrilateral whose edges may or may not be parallel. I think PolygonalFieldOfView is the right class for this, but I’m struggling to get it to work correctly. My guess is that I’m configuring something incorrectly, so I’ll show what I’m doing and ask for help spotting the mistake.
First I create an S2Point array from the az/el location of each vertex, with the final vertex repeating the initial vertex to close the polygon:
S2Point[] pointArr = new S2Point[5];
pointArr[0] = SensorUtils.azElToS2Point(lowerLeftAzLimit, lowerLeftElLimit);
pointArr[1] = SensorUtils.azElToS2Point(upperLeftAzLimit, upperLeftElLimit);
pointArr[2] = SensorUtils.azElToS2Point(upperRightAzLimit, upperRightElLimit);
pointArr[3] = SensorUtils.azElToS2Point(lowerRightAzLimit, lowerRightElLimit);
pointArr[4] = SensorUtils.azElToS2Point(lowerLeftAzLimit, lowerLeftElLimit);
SphericalPolygonsSet sps = new SphericalPolygonsSet(1e-9, pointArr);
PolygonalFieldOfView polyFov = new PolygonalFieldOfView(sps, 1e-9);
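One thing I wasn’t sure about here is vertex ordering — my understanding is that SphericalPolygonsSet keeps the region to the left of the boundary arcs as the interior, so the winding direction matters. As a rough sanity check (plain Java, no Orekit, and treating az/el as planar coordinates, which is only an approximation of the spherical geometry), I computed the shoelace signed area of my vertex loop:

```java
public class WindingCheck {
    // Shoelace signed area in the (az, el) plane: positive means the
    // vertices run counterclockwise when az is plotted on x and el on y.
    // This planar check is only an approximation for a spherical polygon.
    static double signedArea(double[][] v) {
        double sum = 0.0;
        for (int i = 0; i < v.length; i++) {
            double[] a = v[i];
            double[] b = v[(i + 1) % v.length];
            sum += a[0] * b[1] - b[0] * a[1];
        }
        return 0.5 * sum;
    }

    public static void main(String[] args) {
        // my four vertices, without the repeated closing point
        double[][] verts = {
            {174.4, 43.3},  // lower left
            {172.9, 52.2},  // upper left
            {192.3, 43.3},  // upper right
            {190.7, 43.2},  // lower right
        };
        // negative => clockwise in the az/el plane
        System.out.println(signedArea(verts));
    }
}
```

For my vertices the signed area comes out negative, i.e. clockwise in the az/el plane, which may or may not be what SphericalPolygonsSet expects.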
where the azElToS2Point method is defined as follows:
public static S2Point azElToS2Point(double azDegs, double elDegs) {
    double azRads = Math.toRadians(azDegs);
    // S2Point takes the polar angle, so convert elevation to colatitude
    double colatRads = Math.toRadians(90.0 - elDegs);
    return new S2Point(azRads, colatRads);
}
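For reference, my understanding of the S2Point convention (and the reason for the 90.0 - elDegs colatitude conversion) is that S2Point(theta, phi) corresponds to the unit vector (cos θ sin φ, sin θ sin φ, cos φ). Here is a minimal plain-Java sketch of that mapping with no Orekit/Hipparchus dependency — note that which topocentric axes x, y, z actually correspond to (east/north/zenith) is exactly the part I’m unsure about:

```java
public class AzElVector {
    // Convert az/el (degrees) to a unit vector using the convention I
    // assume S2Point uses: theta = az, phi = 90 - el (colatitude),
    // vector = (cos theta * sin phi, sin theta * sin phi, cos phi).
    static double[] toVector(double azDegs, double elDegs) {
        double theta = Math.toRadians(azDegs);
        double phi   = Math.toRadians(90.0 - elDegs);
        return new double[] {
            Math.cos(theta) * Math.sin(phi),
            Math.sin(theta) * Math.sin(phi),
            Math.cos(phi)
        };
    }

    public static void main(String[] args) {
        // el = 90 should be the zenith regardless of az
        double[] zenith = toVector(0.0, 90.0);  // ~ (0, 0, 1)
        // az = 90, el = 0 lands on the +y axis under this convention
        double[] az90   = toVector(90.0, 0.0);  // ~ (0, 1, 0)
        System.out.printf("%f %f %f%n", zenith[0], zenith[1], zenith[2]);
        System.out.printf("%f %f %f%n", az90[0], az90[1], az90[2]);
    }
}
```

The fact that az = 90 maps to +y under this convention is what makes me wonder whether my azimuths line up with the axes of the topocentric frame the detector uses.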
Then I wrap the FOV in a GroundFieldOfViewDetector, where location is the TopocentricFrame built from the sensor’s GeodeticPoint:
GroundFieldOfViewDetector fovDet = new GroundFieldOfViewDetector(location, polyFov)
        .withHandler((s, d, increasing) -> Action.CONTINUE);
From here, I attach the detector to my propagator and monitor for detections. It does produce detections, but they are not correct: it reports windows where the actual sensor-to-satellite pointing angles are well outside the defined polygon (by 10-20 degrees or more). A pseudocode example, with vertices written as v = (azimuth, elevation) in degrees:
v1 = (174.4, 43.3) // lower left
v2 = (172.9, 52.2) // upper left
v3 = (192.3, 43.3) // upper right
v4 = (190.7, 43.2) // lower right
v5 = (174.4, 43.3) // lower left (repeated to close the polygon)
actual measurement = (221.5, 40.3)
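To quantify how far outside the reported measurement is, I computed the great-circle separation between the measurement direction and each vertex (plain Java using the spherical law of cosines; this is my own sanity check, not Orekit output):

```java
public class SeparationCheck {
    // Great-circle angular separation (degrees) between two az/el
    // directions, via the spherical law of cosines.
    static double separationDeg(double az1, double el1, double az2, double el2) {
        double a1 = Math.toRadians(az1), e1 = Math.toRadians(el1);
        double a2 = Math.toRadians(az2), e2 = Math.toRadians(el2);
        double cosSep = Math.sin(e1) * Math.sin(e2)
                      + Math.cos(e1) * Math.cos(e2) * Math.cos(a1 - a2);
        // clamp to [-1, 1] to guard against rounding before acos
        return Math.toDegrees(Math.acos(Math.min(1.0, Math.max(-1.0, cosSep))));
    }

    public static void main(String[] args) {
        double[][] verts = {
            {174.4, 43.3}, {172.9, 52.2}, {192.3, 43.3}, {190.7, 43.2}
        };
        double minSep = Double.POSITIVE_INFINITY;
        for (double[] v : verts) {
            minSep = Math.min(minSep, separationDeg(221.5, 40.3, v[0], v[1]));
        }
        // minimum separation from the nearest vertex, > 20 degrees
        System.out.println(minSep);
    }
}
```

So the measurement is more than 20 degrees from even the nearest vertex, which is why I’m confident the detection itself is wrong rather than a small margin effect.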
This run reports the measurement as inside the field of view, when by the vertex definition it clearly should not be. So my question is: am I misunderstanding the polygonal field of view, or am I defining it incorrectly?
Any help you can provide would be appreciated,
Nick