Updates for High Fidelity Heliocentric Propagation

Ah ah, yes, if you forget to set up checkstyle from the beginning that usually happens :wink:

Not sure. The first thing to do would be to check out a new branch from your develop branch and call it “issue-1681”. Then push this branch on your fork and see if SonarQube can give you a delta of the new code with respect to the run on the “develop” branch that you already have.
That is what we are interested in when looking at your merge request.
Then, I don’t know if you can change the source branch of your merge request. Maybe you will have to close the current one and open a new one from the new branch.
We’ll try to walk you through the process


I have now checked out a new branch called issue-1681!

However, even after running the pipeline on this new branch, it is not added to the list of analyzed branches on my project on Orekit’s SonarQube. The master branch is listed there, though.

I guess the “Project’s Main Branch is not analyzed yet.” message is due to my fork not having a “main” branch, but I have seen that other projects have managed to get other branches added to the list of analyzed branches without having a main branch, such as this one.

I have tried to run the pipeline on other branches of my fork, including branches not created by me (e.g., release 12.0), but that also didn’t work.

I have noticed, however, that for all the Orekit forks currently analyzed in Orekit’s SonarQube, all of the branches are in status “Passed”. I guess this means they passed the pipeline run?
I am wondering if my issue-1681 branch is not being added to the list of analyzed branches because the verify part of the pipeline fails?

On the other hand, it seems that it is indeed not possible to change the source branch of the merge request, so I was thinking of opening a new one after managing to get the new branch analyzed on SonarQube.

Thanks a lot again for your help in setting this up!

Hi @Rafa,

Thanks!

Yes, if you have errors in the tests of the “verify” job, then the Sonar analysis is not launched…
You have to fix those tests if you want the branch to appear in Sonar.

Yes, sorry about that.

You’re very welcome @Rafa !


Thanks a lot for the clarifications!

Regarding the errors in the “verify” job for the pipeline on my issue-1681 branch, it seems that some (all?) of the remaining errors come from test suites not related to my newly added code, e.g., OrekitExceptionTest.testBadMessage.

I guess I should not be worried about this? But in that case, it seems like I can’t deliver on the goal of getting a delta of the new code on SonarQube.

Sorry if I’m missing some obvious solution here!

Hi @Rafa,

I checked out your branch and ran the tests locally.
Actually, the problem is that you added new exception messages, but didn’t update the translation files accordingly.
Indeed, when adding a new exception message, you have to add it to all localization files.
You don’t have to translate the message yourself, but it still has to appear.
The master file is the English one. You have to give the key of the message and repeat the English text there.
For the other languages, simply copy/paste the English messages you just added and replace the translated text with <MISSING TRANSLATION>.
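
To make this concrete, a new entry could look like the following (the key and wording are made up for this example, and the file names are given from memory, so check an existing entry for the exact layout):

In the English master file (e.g. OrekitMessages_en.utf8):

SOME_NEW_MESSAGE_KEY = the English text of the new message, with parameter {0}

In a language file you do not translate (e.g. OrekitMessages_de.utf8):

SOME_NEW_MESSAGE_KEY = <MISSING TRANSLATION>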
If you have time, you can translate the messages in your mother tongue :wink:
See for example:

Last thing, you will have to increase the number of messages in test OrekitMessagesTest.testMessageNumber().

Sorry, it’s a bit of a hassle to do, but it’s necessary for now when adding new exceptions in Orekit.

Cheers,
Maxime


Thank you so much for the detailed explanation! That was really helpful, and I finally got the pipeline to run properly! I included the translations for the new exception messages.

I’ve now opened a new merge request with the correct branch as the source, closed the older one, and my issue-1681 branch is in the list of analyzed branches for my fork in SonarQube.

I also took the chance to make another merge request to add the missing Spanish translations, which hopefully helps tackle this goal :slight_smile:

Best wishes,

Rafa


Hi @Rafa,
I am having a look at the code you posted, but since I am not an expert programmer I think it’s best to ask you directly. I have opened issue #1571 regarding the implementation of the .bpc file format from JPL to compute the Moon pole with precision. Is what you have implemented so far capable of reading such a file and extracting the polynomial coefficients that need to be used for the Moon pole definition?

Thanks in advance and great work !
Davide


I am precisely working on reproducing the trajectory of GRAIL-A with high fidelity! So this is something of great interest.

The file that you have linked is of type PCK. This is a file type that I plan to add support for in the short term, and it should not be too hard, but it is not something that can be handled yet by the code in the merge request I made.

PCK specifications are defined within the framework of DAF files, for which I have already written supporting code, since that is also required for SPK files. So it will just be a matter of correctly formatting the values read from the DAF file. Luckily, there are only 3 PCK types (versus nearly 20 SPK types), which makes things easier.

I plan to work on these and other DAF/SPK-related additions in the coming weeks!


Thanks, very clear. I will try to do some tests myself with the DAF parameters and I’ll keep you posted if I get something.
In addition, my benchmark will be done against a parser that I have written which takes the quaternions directly from the CSPICE output (not elegant, but for low rotation rates such as the Moon pole it is quite accurate).

Keep it up!
Davide


To be specific, I guess you could hack your way into it by doing something like this:

  • Read the file as a DAF
  • But this will be a rather meaningless set of numbers, except for a very generic header
  • You will need to locate the coefficients in this mess. I think you will likely be handling a Type 3 PCK, which provides Chebyshev coefficients for an angle and its derivatives; this is likely because text-based DE files provide lunar mantle libration angles and angular velocities, so I guess these will be equivalent to having the angles and derivatives.
  • You could then pass them to a Chebyshev interpolator (a rough sketch of this evaluation is given right after this list)
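
To make that last step concrete, here is a minimal sketch of evaluating one such Chebyshev record once you have pulled its midpoint, radius and coefficients out of the DAF arrays (the class and method names are made up for this post, they are not part of the merge request):

public final class ChebyshevRecordSketch {

    private ChebyshevRecordSketch() {
        // utility class, no instances
    }

    /**
     * Evaluate sum_k c_k * T_k(x) with the Clenshaw recurrence.
     *
     * @param epoch        evaluation epoch, in the time scale of the kernel (TDB seconds past J2000)
     * @param midpoint     record midpoint, in the same units
     * @param radius       record half-interval, in seconds
     * @param coefficients Chebyshev coefficients of one component (e.g. one angle)
     * @return value of the Chebyshev series at the requested epoch
     */
    public static double evaluate(final double epoch, final double midpoint,
                                  final double radius, final double[] coefficients) {
        // map the epoch to the canonical interval [-1, 1]
        final double x = (epoch - midpoint) / radius;

        // Clenshaw recurrence, avoids building the T_k polynomials explicitly
        double b1 = 0.0;
        double b2 = 0.0;
        for (int k = coefficients.length - 1; k >= 1; k--) {
            final double tmp = 2.0 * x * b1 - b2 + coefficients[k];
            b2 = b1;
            b1 = tmp;
        }
        return x * b1 - b2 + coefficients[0];
    }
}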

I can’t promise anything, but I might have a working prototype of a proper implementation formatting DAF contents into a PCK kernel next week.


@DDega I have a first draft for handling PCK kernels on my issue-1571 branch. It is very early: I still need to add a test suite and refactor quite a few things from the DAF and SPK modules, but from my quick tests it seems to parse and evaluate the .bpc file you shared properly, in case you want to give it a shot!

Thanks, I will have a look at it.

Hey @Rafa, hope you’re doing great!

Quick update on the PCK parser you kindly put together (sorry it took a while — my non-existent C skills slowed me down a bit when trying to build something to validate the PCK file :sweat_smile:).

A couple of things:

  • First off, the file looks like it’s actually a type 2 PCK — which was unexpected, but hey, it seems to work.
  • I wrote a test for the type 2 format (pasted below), and everything mostly lines up — so great job there :clap:

The only thing that’s a bit off is the start and end addresses in the segment. I expected them to be 641 and 1,280,644 (converted to byte addresses in the test), but they’re different in the parsed result. I’m not sure if that’s a real issue or just me misunderstanding something. Any idea what might be going on?

Next, I’m planning to finish up the bit that extracts the frame orientation from the polynomials. I’ll keep you posted once that’s done.

Cheers,
Davide

package org.orekit.files.daf.pck;

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.orekit.data.DataSource;
import org.orekit.files.daf.pck.PCKSegmentDataTypes.PCKSegmentDataType2;
import org.orekit.files.daf.spk.SPKChebyshev;
import org.orekit.files.daf.spk.SPKSegmentDataTypes.SPKSegmentDataChebyshevPositionSingleRecord;
import org.orekit.time.AbsoluteDate;
import org.orekit.time.TimeScale;
import org.orekit.time.TimeScalesFactory;

public class PCKEvaluationTest {

    private final TimeScale tdb = TimeScalesFactory.getTDB();
    private final AbsoluteDate j2000InTDB = new AbsoluteDate(2000, 1, 1, 12, 0, 0, tdb);
    private final double tolerance = 1e-2;

    @Test
    public void testParsePCKType2() {
        // Load and parse the binary PCK file
        final String ex = "/daf/moon_pa_de440_200625.bpc";
        final DataSource source = new DataSource(ex, () -> getClass().getResourceAsStream(ex));
        final PCK parsedFile = new PCKParser().parse(source);

        // Check that the comment section ends with a specific timestamp
        System.out.println(parsedFile.getComments());
        Assertions.assertTrue(parsedFile.getComments().endsWith("2650 JAN 25 00:00:00.000\n"));

        final List<PCKSegment> segments = parsedFile.getSegments();
        // Expect exactly two segments in the file
        Assertions.assertEquals(2, segments.size());

        final PCKSegment firstSegment = segments.get(0);

        // Validate metadata from the first segment
        Assertions.assertEquals("1549-12-20T23:59:27.815889091506362198 TAI",
                firstSegment.getSegmentSummary().getInitialEpoch().toString());
        Assertions.assertEquals("de440.nio", firstSegment.getSegmentName());

        final PCKSegmentSummary firstSegmentSummary = firstSegment.getSegmentSummary();
        Assertions.assertEquals(2, firstSegmentSummary.getPCKTypeCode());
        Assertions.assertEquals("Chebyshev (angles only)", firstSegmentSummary.getPCKType());
        Assertions.assertEquals(31008, firstSegmentSummary.getTargetNAIFCode());
        Assertions.assertEquals(1, firstSegmentSummary.getFrameNAIFCode());
        Assertions.assertEquals(641*8-7, firstSegmentSummary.getInitialArrayAddress());
        Assertions.assertEquals(1280644*8+1, firstSegmentSummary.getFinalArrayAddress());

        // Verify time range of the segment
        final AbsoluteDate initialEpoch = firstSegmentSummary.getInitialEpoch();
        final double initialEpochET = initialEpoch.durationFrom(j2000InTDB);
        Assertions.assertEquals(-14200747200.0, initialEpochET, tolerance);

        final AbsoluteDate finalEpoch = firstSegmentSummary.getFinalEpoch();
        final double finalEpochET = finalEpoch.durationFrom(j2000InTDB);
        Assertions.assertEquals(13447252800.0, finalEpochET, tolerance);

        // Check number of data records
        final ArrayList<SPKSegmentDataChebyshevPositionSingleRecord> dataRecords =
                ((PCKSegmentDataType2) firstSegment.getSegmentData()).getDataRecords();
        Assertions.assertEquals(40000, dataRecords.size());

        // Verify the final data record
        final SPKSegmentDataChebyshevPositionSingleRecord lastRecord =
                dataRecords.get(dataRecords.size() - 1);
        final AbsoluteDate lastRecordFinalDate = lastRecord.getFinalDate();
        final double lastRecordFinalDateET = lastRecordFinalDate.durationFrom(j2000InTDB);
        Assertions.assertEquals(13447252800.0, lastRecordFinalDateET, tolerance);

        // Validate Chebyshev polynomial metadata
        final SPKChebyshev lastRecordChebyshev = lastRecord.getPositionChebyshev();
        assertEquals(13446561600.0, lastRecordChebyshev.getInitialDate().durationFrom(j2000InTDB),
                tolerance, "Initial date mismatch");
        assertEquals(13446907200.0, lastRecordChebyshev.getMidpoint().durationFrom(j2000InTDB),
                tolerance, "Midpoint mismatch");
        assertEquals(10, lastRecordChebyshev.getInterpolationPoints(), "Interpolation points mismatch");
        assertEquals(345600.0, lastRecordChebyshev.getRadius(), tolerance, "Radius mismatch");
        assertEquals(1.0, lastRecordChebyshev.getScaleFactor(), tolerance, "Scale factor mismatch");

        // Check Chebyshev coefficients for RA
        final double[] chebCoeffRA = lastRecordChebyshev.getXCoeffs();
        final double[] expectedChebyshevCoeffRA = {
                -2.44226466836715977e-02, 3.34128441574723452e-04,
                9.63191168732138627e-05, -3.45913469999950336e-06,
                -3.64485999790243449e-06, -6.01443784629434927e-07,
                4.77195547433532325e-08, 1.12182197744218538e-08,
                -1.38409453070187985e-10, -4.70879573201374059e-12
        };

        for (int i = 0; i < expectedChebyshevCoeffRA.length; i++) {
            Assertions.assertEquals(expectedChebyshevCoeffRA[i], chebCoeffRA[i], 1e-20);
        }

        // Check Chebyshev coefficients for DEC
        final double[] chebCoeffDEC = lastRecordChebyshev.getYCoeffs();
        final double[] expectedChebyshevCoeffDEC = {
                4.33353953157573968e-01, 9.84902562548158972e-05,
                -3.85336096648213384e-05, -9.18606199566445448e-06,
                -8.81890561078146453e-07, 2.00790889517800090e-07,
                4.17664889171795721e-08, -1.28238703118691811e-09,
                -3.32830630006246497e-10, 1.05359126361415703e-11
        };

        for (int i = 0; i < expectedChebyshevCoeffDEC.length; i++) {
            Assertions.assertEquals(expectedChebyshevCoeffDEC[i], chebCoeffDEC[i], 1e-20);
        }

        // Check Chebyshev coefficients for W
        final double[] chebCoeffW = lastRecordChebyshev.getZCoeffs();
        final double[] expectedChebyshevCoeffW = {
                3.83558564678053226e+04, 9.19663049848119196e-01,
                -9.90929406934630184e-05, 1.40898711404538427e-06,
                3.40895871148744643e-06, 5.58736091026859832e-07,
                -4.29705937151806628e-08, -1.00535157712960103e-08,
                1.47689031933346115e-10, 3.15332388392443341e-12
        };

        for (int i = 0; i < expectedChebyshevCoeffW.length; i++) {
            Assertions.assertEquals(expectedChebyshevCoeffW[i], chebCoeffW[i], 1e-20);
        }
    }
}


Thank you so much for sharing this! I’ve also been working on this and just pushed an update to the branch with the PCK reader, including tests and code for types 2, 3 and 20. I noticed the type 3 situation as well, and will post a couple of interesting notes about the PCK types on the issue!

Sorry, I just saw that you also asked about the unexpected start and end addresses.

Why would you expect the initial address to be 641? In DAF files (including PCK), we have the following structure of records (each record is 1024 bytes):

  • We always start with a single File Record
  • Then, a variable number of optional comment records (there might be none present)
  • Then one summaries record
  • This is immediately followed by one names record
  • Then a variable number of data records, as many as required to store all the segments whose summaries were given in the preceding summaries/names record pair
  • The set of (1 summary record - 1 names record - X data records) is repeated until we cover all the included segments

So this means that, in the minimal case of a file with no comment records, we have 3 full records (file record + 1 summary record + 1 name record) before the beginning of the first data record, so the earliest address available for array data is the 3073rd byte (note that SPICE describes addresses as double precision words, but I’m handling them as byte addresses currently).
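
To put numbers on that layout, here is a tiny illustration of the record-to-byte arithmetic (a hypothetical helper written just for this post, not part of the parser):

final class DafRecordLayoutSketch {

    // DAF records are 1024 bytes long and numbered from 1 (record 1 is the file record)
    static int recordStartByte(final int recordNumber) {
        return (recordNumber - 1) * 1024 + 1;
    }

    public static void main(final String[] args) {
        // with no comment records: record 1 = file record, record 2 = summaries, record 3 = names,
        // so the first data record is record 4 and starts at byte 3073
        System.out.println(recordStartByte(4)); // prints 3073
    }
}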

Why did you expect the start address to be 641?

Thanks!

I used CSPICE with the dafus_c function to get the starting memory address for the record, and it came out as 641. It’s possible I didn’t use it quite right, but honestly, the main thing is that the extracted data matches up. Also, for the file that I was parsing (the MoonPA orientation), the PCK reader in Java was giving 5121 as the first address. (By the way, thanks for the clarification on the addresses, very informative and clear :slight_smile: )

I’m planning to share some results on the propagation accuracy by the end of the week.

Cheers,
Davide


I see!

The issue is that SPICE’s memory addresses come in units of “double precision words”, each of these being basically 8 bytes.

So the numbers CSPICE reports as memory addresses are actually indices of 64-bit words. We have to multiply by 8 to get bytes, so for 641 we get 5128; since the entire set of 8 bytes is included in that “double precision word”, this gives a first byte address of 5121 for the first array element.
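
In other words, the conversion boils down to something like this (the helper names are made up for this post, not the actual parser API):

final class DafAddressSketch {

    // first byte covered by a DAF address expressed in 8-byte double precision words (1-based)
    static long firstByteOfWord(final long wordAddress) {
        return (wordAddress - 1) * 8 + 1;
    }

    // last byte covered by the same word
    static long lastByteOfWord(final long wordAddress) {
        return wordAddress * 8;
    }

    public static void main(final String[] args) {
        System.out.println(firstByteOfWord(641)); // prints 5121
        System.out.println(lastByteOfWord(641));  // prints 5128
    }
}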

I posted some of my own notes about PCK files in the 1571 issue, in case you find them useful for handling the values obtained when evaluating the PCKs :slight_smile: Happy to troubleshoot if you get any errors when evaluating the kernels!

Thanks @Rafa, I had a look at it and it is now clear.
I will try to provide a reference test to compare the rotation between the ICRF (I think) and MoonPA reference frame with the one that I got using the spice.exe files.

Cheers :slight_smile:

@DDega I have now updated my implementation to provide a TimeStampedAngularCoordinates with rotation and rotation rates verified to match those returned by CSPICE, in case you want to have a look
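
For anyone following along, here is a rough sketch of how such an object can be assembled from a rotation evaluated from the PCK angles, using a finite-difference rate estimate (this is not the code on the branch, and the mapping from the Euler angles to the Rotation is left to the caller):

import java.util.function.Function;

import org.hipparchus.geometry.euclidean.threed.Rotation;
import org.hipparchus.geometry.euclidean.threed.Vector3D;
import org.orekit.time.AbsoluteDate;
import org.orekit.utils.AngularCoordinates;
import org.orekit.utils.TimeStampedAngularCoordinates;

public final class PckAttitudeSketch {

    private PckAttitudeSketch() {
        // utility class, no instances
    }

    /**
     * Build a TimeStampedAngularCoordinates at a date, given some evaluation of the
     * PCK-derived rotation (however the Euler angles are converted to a Rotation).
     */
    public static TimeStampedAngularCoordinates at(final AbsoluteDate date,
                                                   final Function<AbsoluteDate, Rotation> pckRotation) {
        final double h = 10.0; // finite-difference step, in seconds
        final Rotation r0 = pckRotation.apply(date);
        final Rotation r1 = pckRotation.apply(date.shiftedBy(h));
        // estimate the rotation rate from the two nearby rotations
        final Vector3D rate = AngularCoordinates.estimateRate(r0, r1, h);
        return new TimeStampedAngularCoordinates(date, r0, rate, Vector3D.ZERO);
    }
}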

Thanks, I will have a look at it shortly.