Top 300 Asteroid Ephemerides Library

As I work more on long-term asteroid propagations, I need third-body perturbations for the larger asteroids. Baer and Chesley produced a set of tabulated states they call BC-405/BC-430. Along with defining a Sun-centered and a solar-system-barycenter-locked frame, I’m going to code up classes for reading a SQLite version of the asteroid data based on the BC data. The total data set is on the order of 100 MB. This seems like a fairly niche use case for the library, so I’m thinking it might be better as a companion library, the way Rugged is. I like the idea of using the DataContext methodology for registration and mirroring the JPL DE loader implementation, but against this data set. The factoring I’m thinking of is:

  • Adding the JPL Sun-centered and solar-system-barycenter-centered frames to Orekit proper
  • Creating an asteroid third-body library hosted on the Orekit GitLab and published to Maven repositories, as is done with Rugged. This could be another Orekit sub-project, but not in the main source tree.
  • Creating a lightweight landing page for the library to help people find the data files, etc.

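To make the factoring concrete, here is a minimal sketch of the kind of ephemeris lookup such a library would expose. All the names here (StateRecord, AsteroidEphemeris, positionAt) are hypothetical, not Orekit or BC-430 API, and the linear interpolation is just a stand-in for whatever interpolation scheme the real loader would use on the tabulated states:

```java
import java.util.Arrays;

// Hypothetical sketch only: none of these names come from Orekit or the BC data set.
public class AsteroidEphemerisSketch {

    /** One tabulated state: epoch (seconds past some reference) and position (km). */
    static final class StateRecord {
        final double epoch;
        final double[] position;
        StateRecord(double epoch, double[] position) {
            this.epoch = epoch;
            this.position = position;
        }
    }

    /** Minimal in-memory ephemeris: linear interpolation between tabulated states. */
    static final class AsteroidEphemeris {
        private final StateRecord[] table; // assumed sorted by epoch

        AsteroidEphemeris(StateRecord[] table) {
            this.table = table;
        }

        double[] positionAt(double epoch) {
            // binary search for the bracketing pair of records
            int lo = 0, hi = table.length - 1;
            while (hi - lo > 1) {
                int mid = (lo + hi) >>> 1;
                if (table[mid].epoch <= epoch) lo = mid; else hi = mid;
            }
            // linear interpolation between the two bracketing states
            double t = (epoch - table[lo].epoch) / (table[hi].epoch - table[lo].epoch);
            double[] p = new double[3];
            for (int i = 0; i < 3; i++) {
                p[i] = (1 - t) * table[lo].position[i] + t * table[hi].position[i];
            }
            return p;
        }
    }

    public static void main(String[] args) {
        AsteroidEphemeris eph = new AsteroidEphemeris(new StateRecord[] {
            new StateRecord(0.0,  new double[] {1.0, 0.0, 0.0}),
            new StateRecord(40.0, new double[] {1.0, 4.0, 0.0})
        });
        System.out.println(Arrays.toString(eph.positionAt(20.0))); // prints [1.0, 2.0, 0.0]
    }
}
```

In the DataContext-style factoring above, an object like this would be produced by a loader registered with the data context, mirroring how the JPL DE ephemerides are wired in.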

Hi Hank,

It sounds like a great addition to the library!

Here are my thoughts:

If the only issue is the size of the data, then it shouldn’t be a problem to add it to Orekit. The data is external and has its own forge. Maybe we should just note somewhere in the Javadoc that the data is too large to include in the downloadable archive, with a link to where to download it manually.

But you’ll probably need a Maven dependency to read the BC data with SQLite. That alone might justify developing this feature in a separate library.
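For reference, the SQLite dependency in question would look something like the following POM fragment. The org.xerial sqlite-jdbc driver is the commonly used pure-JDBC option; the version shown is illustrative, not a recommendation:

```xml
<dependency>
  <groupId>org.xerial</groupId>
  <artifactId>sqlite-jdbc</artifactId>
  <!-- illustrative version; use the current release -->
  <version>3.45.1.0</version>
</dependency>
```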


Yeah, if I wrote it with SQLite as the format, I’d need a library for that; it’s a pretty small dependency, though. If I end up putting it in an external library, I’d probably write it in Kotlin instead of Java as well. The “standard” format isn’t really searchable, and it is several files rather than one large data file. Because it’s 300 objects with 300 years of data at 40 second step sizes, I wanted something searchable and self-descriptive. Maybe I could use a JSON text format instead of SQLite and then do the searching with streams or something… I’ll have to think about it…
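If the data did live in JSON, the stream-based search might look like the sketch below. This is a hypothetical example: the record shape and the numbers are made up, and a stream scan like this is O(n) per lookup over the whole in-memory table, which is exactly the searchability concern that pushes toward SQLite (or at least a sorted index per object):

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the "JSON + streams" alternative: records parsed from
// JSON would land in a list like this, and searching is a stream over epochs.
public class StreamSearchSketch {

    /** One deserialized row: asteroid number, epoch, and position components. */
    record AsteroidState(int number, double epoch, double x, double y, double z) { }

    /** Find the tabulated state for one asteroid closest to the requested epoch. */
    static AsteroidState closestState(List<AsteroidState> table, int number, double epoch) {
        return table.stream()
                .filter(s -> s.number() == number)
                .min(Comparator.comparingDouble(s -> Math.abs(s.epoch() - epoch)))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<AsteroidState> table = List.of(
            new AsteroidState(1, 0.0,  1.0, 0.0, 0.0),   // illustrative numbers only
            new AsteroidState(1, 40.0, 1.0, 4.0, 0.0),
            new AsteroidState(4, 0.0,  2.0, 0.0, 0.0));
        System.out.println(closestState(table, 1, 25.0).epoch()); // prints 40.0
    }
}
```

The trade-off is memory and scan time versus the indexed, single-file, self-describing queries SQLite gives for free.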