IMAGE SIMULATION FOR SPACE APPLICATIONS WITH THE SURRENDER SOFTWARE

Jérémy Lebreton, Roland Brochard, Matthieu Baudry, Grégory Jonniaux, Adrien Hadj Salah, Keyvan Kanani, Matthieu Le Goff, Aurore Masson, Nicolas Ollagnier, Paolo Panicucci, Amsha Proag, Cyril Robin

Airbus Defence & Space, 31 rue des Cosmonautes, 31402 Toulouse

surrender.software@airbus.com

https://www.airbus.com/space/space-exploration/SurRenderSoftware.html

ABSTRACT

Image Processing algorithms for vision-based navigation require reliable image simulation capabilities. In this paper we explain why traditional rendering engines may present limitations that are potentially critical for space applications. We introduce Airbus SurRender software v7 and provide details on the features that make it a very powerful space image simulator. We show how SurRender is at the heart of the development processes of our computer vision solutions, and we provide a series of illustrations of rendered images for various use cases, ranging from Moon and Solar System exploration to in-orbit rendezvous and planetary robotics.

This paper was presented at the 11th International ESA Conference on Guidance, Navigation & Control Systems, 22-25 June 2021.

1 INTRODUCTION

The simulation of space scenes presents specific challenges which are typically not handled by general-purpose image simulators. Vision-based navigation solutions require training and validation datasets that are as close as possible to real images. Our team and partners develop computer vision algorithms for space exploration (Mars, Jupiter, asteroids, the Moon) and for in-orbit operations (rendezvous, robotic arms, space debris removal), and there is a new wave of missions targeting cislunar orbit or the lunar surface. Of course, "real images" are rarely available before the mission. Ground-based test facilities such as robotic test benches carrying mock-ups, or experiments with scaled mission analogues (Mars terrain analogues, drone flights, etc.), are useful, yet they are limited. For example, it is very difficult to capture the scale of space scenes in a room-sized facility (such as a small object illuminated by an extended light source). Moreover, only limited numbers of images are available from previous missions or from lab experiments, whereas thousands are needed to represent the variety of configurations that an algorithm will encounter. Another decisive asset of computer simulation is that the ground truth is perfectly known, whereas real-life experiments are prone to errors and biases which are hard to estimate or lack accuracy.

Some of the effects visible in space images are of little importance to traditional image simulators. For example, for far-range rendezvous, very low SNR targets (SNR ~ 1) must be simulated with high radiometric fidelity. Space-qualified cameras often have unusual optical distortions and chromatic aberration, which also vary with camera aging and temperature, and the geometrical performance relies on properly modelling them. The Point Spread Function (PSF) and associated effects (resolution, blooming) are fundamental parameters for image quality and they need to be simulated physically. Defocus is often encountered and shall be well simulated.

In this paper, we provide a quick survey of available rendering engines and discuss their limitations for space computer vision applications. We show how the Airbus SurRender software attempts to go beyond these limitations. It is at the heart of the development process for many image processing solutions, mostly in vision-based navigation (VBN). We use it from early prototyping to extensive performance test campaigns and hardware-in-the-loop experiments. Various APIs (Python, Matlab, Simulink, C++, etc.) are available to interface SurRender with different simulation environments (GNC environment simulators, optical stimulators, etc.). In the context of a growing need for autonomy and artificial intelligence in space, our team is pursuing a constant effort in the development of the SurRender software and its diffusion. In this paper we provide a general presentation of the software and its performances, which complements the details introduced previously [1]. We show a series of recent use cases simulated for our projects, ranging from Lunar and Solar System exploration to in-orbit rendezvous with artificial objects and planetary robotics.

2 SPECIFICITIES OF IMAGE SIMULATION FOR SPACE APPLICATIONS

2.1 Requirements

In sectors developing computer vision solutions such as big tech and automotive industries, algorithms are often trained on real data because it is relatively easy to assemble massive datasets. Doing so is not an option for space applications because acquiring representative images before an actual mission is difficult if not impossible. Simulated images are needed to prototype, implement and validate Image Processing (IP) algorithms in preparation for space exploration missions. Several simulation engines have been developed and used worldwide, such as PANGU [2], VEROSIM, Blender, OSGEarth, Unreal Engine or SISPO [3]. At Airbus we developed the SurRender software with a list of desirable features in mind. The objective was to cover mission development cycles from preliminary analysis and sizing to advanced design, development and validation phases. Key requirements include:

  • Interfaces with standard data formats such as NASA PDS: Digital Elevation Models (DEM) as .img files, albedo maps in many standard image formats (.png, .tiff, .jpeg2000, etc.), and 3D meshes as .obj (remark: computer graphics formats are needed, not CAD models).
  • Raytracing shall be available for high-fidelity simulations.
  • Real-time rendering shall be achievable for integration in closed-loop simulation environments.
  • Computation must be performed in double precision (float64) and image sampling must be optimized to manage the needed dynamic range (from millions of km to contact).
  • The simulator shall offer high flexibility to modify models for sensors, materials, etc.
  • The simulator shall optimize memory management to allow large datasets.
  • Images shall be validated geometrically and radiometrically.


2.2 Why is SurRender's performance unique?

SurRender uses standard computer graphics concepts (scene graphs, bounding boxes, shaders, etc.) but it is based on a proprietary implementation of many functions. There are two concurrent pipelines: a (tuned) OpenGL pipeline for real-time applications and an original raytracing pipeline for IP development and performance assessment. Most rendering engines (Unreal Engine, OSGEarth, etc.) use OpenGL, a 3D graphics standard widely used in the video game and animation industries that benefits from hardware-accelerated rendering (GPU). It has some drawbacks. For instance, reference [4] showed the limits of rasterization techniques in simulating an instrument PSF (Point Spread Function). Noise and sensor models can only be implemented as post-processing and have limited representativeness. OpenGL implements the principle of a far plane and a near plane (background/foreground), which is unphysical and may yield numerical precision problems; in contrast, SurRender splits the scene into an optimal number of layers, even in OpenGL. As we will see, double precision is essential. It is only locally implemented in OpenGL (e.g. for quaternions), so OpenGL simulations are not always numerically reliable. Intrinsically, OpenGL does not allow going beyond simple projection models (pinhole): this is a limitation of rasterization, which requires the projection of a triangle to be a triangle.
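
To illustrate this last point, the following minimal Python sketch (our example, not SurRender code) projects the endpoints and midpoint of a 3D segment with a pinhole model and with an equidistant fisheye model. Under the pinhole model the three image points remain collinear, which is exactly the property rasterization relies on; under the fisheye model they do not, so triangles no longer project to triangles.

```python
import numpy as np

def project_pinhole(p, f=1000.0):
    """Pinhole projection: perspective divide; collinearity is preserved."""
    return f * p[:2] / p[2]

def project_fisheye_equidistant(p, f=1000.0):
    """Equidistant fisheye (r = f * theta): straight 3D lines map to curves."""
    theta = np.arccos(p[2] / np.linalg.norm(p))  # angle from the optical axis
    phi = np.arctan2(p[1], p[0])                 # azimuth around the axis
    return f * theta * np.array([np.cos(phi), np.sin(phi)])

# Endpoints and midpoint of a 3D segment in front of the camera.
a = np.array([1.0, 0.2, 5.0])
b = np.array([-1.0, 0.8, 8.0])
m = 0.5 * (a + b)

for proj in (project_pinhole, project_fisheye_equidistant):
    pa, pb, pm = proj(a), proj(b), proj(m)
    u, v = pb - pa, pm - pa
    # 2D cross product: distance of the projected midpoint from the image line.
    dev = (u[0] * v[1] - u[1] * v[0]) / np.linalg.norm(u)
    print(f"{proj.__name__}: deviation from the image line = {dev:.4f} px")
```

The pinhole deviation is exactly zero; the fisheye deviation is not, which is why such models are out of reach of a pure rasterization pipeline.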

With the increase of computing performance, general-purpose rendering engines are starting to implement raytracing techniques on GPU. However, using raytracing does not necessarily mean achieving physical representativeness. Tricks are used to make the image look visually appealing; for instance, images may be subsampled for rendering before being oversampled with a neural network. Blender offers interesting raytracing capabilities; in particular the Cycles engine is able to simulate simple camera effects. SISPO is based on Blender Cycles; it offers specialized features targeting scientific space applications and has demonstrated good performances. Blender's Eevee engine literally "renders what the eye can see"; it is designed to trick the human eye. A strong limitation is RAM management. In Blender, for instance, elevation models are converted to 3D meshes which quickly saturate RAM. In contrast, SurRender uses an in-house format for DEMs, which are stored as conemaps (representing local tangents) and heightmaps (relative elevation). The data are initialized in mass memory using memory mapping and only the needed details are loaded into RAM. Furthermore, this alleviates limits on the size of datasets: for instance, the entire Moon can be covered in a single dataset without precision loss. In general, only part of a meshed object is loaded into RAM (in video games this is the classical situation where objects appear successively with some lag), and this is not compatible with real-time system validation. SurRender uses its internal representation of the scene graph, which guarantees correct rendering even in OpenGL.
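
As a rough illustration of the memory-mapping idea (a sketch only: the file name, grid size and layout below are hypothetical, and SurRender's actual conemap/heightmap format is proprietary), a large DEM can be mapped from disk so that only the tiles actually touched are paged into RAM:

```python
import numpy as np

# Hypothetical global DEM stored row-major as float32 heights on disk.
# Grid size is illustrative only; SurRender's conemap/heightmap format
# is proprietary and not reproduced here.
ROWS, COLS = 92160, 184320
dem = np.memmap("moon_dem.f32", dtype=np.float32, mode="r",
                shape=(ROWS, COLS))

def load_tile(row_px, col_px, size=512):
    """Pull only the tile around the point of interest into RAM.
    The OS pages in just the touched regions of the file, so a
    multi-GB dataset never needs to be fully resident in memory."""
    r0 = max(row_px - size // 2, 0)
    c0 = max(col_px - size // 2, 0)
    return np.array(dem[r0:r0 + size, c0:c0 + size])  # the copy forces page-in

tile = load_tile(46080, 92160)  # a 512x512 patch near the grid centre
print(tile.shape, tile.nbytes / 1e6, "MB in RAM")
```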

A (backward) raytracer does not only sample the detector: it samples 3D space. The scene is sampled with rays, and a large enough number of rays must be cast from the pixel plane to obtain a sufficient (numerical) signal-to-noise ratio (SNR). The sparsity of scenes and the large range of scales call for specialized methods. Without special optimization, a raytracer would be highly inefficient, as it would dedicate most of its resources to sampling empty space. In raytracing, if an object is not targeted explicitly it is unlikely to be sampled; in a worst-case scenario, the renderer may never intersect a distant object because it subtends a very small solid angle. SurRender implements its own raytracer on CPU in full double precision. It is designed to sample space where it matters. First, rays target the bounding spheres of objects (preferential sampling). Second, rays are used efficiently: the renderer targets in priority the subparts of the scene responsible for the highest variance (importance sampling); more weight is given to regions with more signal. In particular, rather than sampling uniformly, the raytracer uses the density function of the PSF with optimal statistical estimators. This way the physics of the light rays (optical diffraction) is intrinsically simulated rather than relying on post-processing. A dichotomy is performed on the PSF such that these principles are applied at the subpixel level. They guarantee that a very high image quality is achieved with fewer rays. SurRender's intrinsic use of PSF models for ray sampling (as opposed to post-processing) guarantees radiometric and geometric accuracy at the subpixel level.
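
A minimal 1D illustration of this principle (a toy example of ours, not SurRender's estimator): when estimating a pixel value as the integral of the scene radiance weighted by a Gaussian PSF, drawing ray positions from the PSF density itself gives a visibly lower-variance estimate than uniform sampling at the same ray count.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.6        # Gaussian PSF width, in pixels (assumed)
support = 4.0      # integrate over +/- 4 px around the pixel centre

def scene(u):
    """Toy radiance: a bright edge crossing the pixel footprint."""
    return np.where(u > 0.3, 10.0, 1.0)

def pixel_uniform(n_rays):
    u = rng.uniform(-support, support, n_rays)
    psf = np.exp(-0.5 * (u / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return np.mean(psf * scene(u)) * 2.0 * support  # area-weighted estimator

def pixel_importance(n_rays):
    u = rng.normal(0.0, sigma, n_rays)  # ray offsets drawn from the PSF itself
    return np.mean(scene(u))            # the PSF weight cancels out

n = 64
std_u = np.std([pixel_uniform(n) for _ in range(2000)])
std_i = np.std([pixel_importance(n) for _ in range(2000)])
print(f"uniform sampling:    std = {std_u:.3f}")
print(f"importance sampling: std = {std_i:.3f}")  # markedly lower noise
```

Both estimators are unbiased, but the importance-sampled one concentrates rays where the PSF carries signal, which is the essence of the approach described above.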

Thanks to these optimizations, we can render complex scenes in raytracing within a reasonable timeframe. Examples include secondary illumination from a planet onto a spacecraft, or the continuous simulation of faraway objects with a constant level of noise from "infinity" to contact: a distant object may be unresolved, but its radiometric budget is still a function of its apparent size. SurRender also has specialized routines for the rendering of stars, which is systematically overlooked by other engines. Something that is very important, for instance for Lunar missions, is that SurRender renders shadows physically, even in OpenGL, for which it does not rely on the standard shadow-mapping technique when rendering elevation models (or other kinds of analytical models). Soft shadows account for direct illumination from the Sun (modelled as an extended source), secondary illumination from other bodies (Earth, etc.) and from the local terrain, and self-reflections from the spacecraft itself.
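
The scaling behind this radiometric budget is simple to state: for a roughly spherical, unresolved target of geometric albedo p and radius R seen at distance d near zero phase, the collected irradiance goes as E ~ p (R/d)^2 E_sun. A hedged sketch with illustrative values (phase angle and BRDF details ignored):

```python
def unresolved_irradiance(e_sun, geometric_albedo, radius_m, dist_m):
    """Irradiance at the camera from an unresolved spherical target near
    zero phase: E ~ p * (R/d)^2 * E_sun (phase and BRDF effects ignored)."""
    return geometric_albedo * (radius_m / dist_m) ** 2 * e_sun

# Illustrative values: a 2 m diameter target, solar constant at 1 AU.
for d in (1.0e6, 1.0e4):  # 1000 km, then 10 km
    e = unresolved_irradiance(1361.0, 0.3, 1.0, d)
    print(f"d = {d:9.0f} m  ->  E = {e:.3e} W/m^2")
```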

Numerical precision is a key requirement for space applications because the dynamic range between celestial distances (>> 10^7 m) and details of the target objects (satellite, local height, ~mm precision) is not compatible with the dynamic range of 32-bit floats (~10^9). Furthermore, in some cases it is necessary to render each individual pixel along a pixel row sequentially, in which case the simulation needs to be done in the time domain: for rolling shutter, push-broom or LiDAR sensor implementations, one must take into account the photon optical path and the camera's relative motion during target acquisition. Other optimizations exist at lower levels. For instance, SurRender has its own implementation of all mathematical functions, which guarantees a good tradeoff between speed and precision as well as consistent behaviour across compilers and platforms.
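
The float32 limitation is easy to reproduce (a two-line demonstration, not SurRender code): millimetre-scale detail added to a Moon-scale coordinate silently disappears in single precision.

```python
import numpy as np

# A Moon-centred coordinate (~1.7e6 m) with a millimetre of detail on top.
base, detail = 1.7e6, 1.0e-3

print(np.float32(base) + np.float32(detail) - np.float32(base))  # 0.0: lost
print(np.float64(base) + np.float64(detail) - np.float64(base))  # ~1e-3: kept
```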

Finally, it is essential for a simulator to be flexible, i.e. to have a modular architecture that accommodates numerous new models and inputs. In SurRender, all models can be tuned using SuMoL (SurRender Modelling Language) and a dedicated interface. Some examples of original models particularly useful for space applications include relevant material surface properties (BRDFs from Hapke, Oren-Nayar and more), analytical shapes (spheroids, etc.), pointing error models, variable PSF models, various projection models, etc. SuMoL models are compiled on the fly by the engine. This makes it possible to perform sensitivity analyses by varying all parameters of the simulation, a precious asset for IP performance assessment in advanced mission phases.
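
For example, the Oren-Nayar model mentioned above boils down to a few lines of math. The sketch below writes the standard qualitative approximation in plain Python rather than SuMoL (whose syntax we do not reproduce here), simply to show the kind of material model a flexible simulator must expose:

```python
import numpy as np

def oren_nayar(theta_i, theta_r, dphi, albedo, sigma):
    """Standard qualitative Oren-Nayar approximation. Angles in radians;
    sigma is the roughness (std dev of facet slopes). sigma = 0 reduces
    to a Lambertian BRDF of value albedo / pi."""
    s2 = sigma ** 2
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha, beta = max(theta_i, theta_r), min(theta_i, theta_r)
    return (albedo / np.pi) * (A + B * max(0.0, np.cos(dphi))
                               * np.sin(alpha) * np.tan(beta))

print(oren_nayar(0.5, 0.4, 0.2, 0.12, 0.5))  # rough, regolith-like surface
print(oren_nayar(0.5, 0.4, 0.2, 0.12, 0.0))  # Lambertian limit: 0.12 / pi
```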


2.3 SurRender 7: a major release for the software's 10th anniversary

For its 10th anniversary, the SurRender team has upgraded the software to an industrial level. The software has undergone a complete refactoring and now offers faster OpenGL and raytracing engines. Improvements include optimizations for new use cases such as rover applications, new functions for loading and positioning lists of 3D meshes, LiDAR support, direct rendering to HDMI ports, a lightweight C interface, better statistical estimators and cloud computing features. An industrialized CI/CD process has been implemented based on Airbus best practices for satellite ground segments, and more tests have complemented the SurRender validation reports. Following up on feedback from SurRender's growing user base, the user experience has been improved: the API and the server log have become more user-friendly and clear. Additional tools, such as those used to preprocess PDS datasets, have been upgraded. A graphical demonstration interface will be released with SurRender 7 to simplify the discovery of the tool by new users. SurRender is professional software, and licences can be granted on a commercial or academic basis.

3 RENDERING OF PLANETARY SURFACES

3.1 The entire Moon in a single dataset

Brochard et al. [1] first presented a simulation of the Moon used for the development and validation of IP algorithms for planetary descent and landing. It is based on publicly available data: a global Digital Elevation Model from Lunar Reconnaissance Orbiter / Lunar Orbiter Laser Altimeter (LRO/LOLA) at 118 m resolution (GSD: Ground Sampled Distance), and an albedo map from the JAXA SELENE/Kaguya Multiband Imager at 237 m [5,6]. Datasets of better resolution exist, although not at a global scale. For instance, LRO/LOLA DEMs are available with a GSD of 59 m, but only at latitudes ranging from -60° to +60°, so that dataset is not adequate for South Pole missions, for instance. LRO offers imaging down to 1 m resolution locally. Our simulator has the benefit of covering the entire Moon with a single set of data (38 GB in compact format). RAM is managed very efficiently: the resolution is adapted to perform continuous simulations from tens of thousands of km down to touchdown, anywhere on the surface. SurRender can use datasets up to 256 TB. Raytracing and pathtracing render realistic shadows accounting for occlusions, secondary illumination and the Sun's solid angle. The terrain optical properties are as good as reference models allow. For example, the community standard for regolith surfaces is the Hapke BRDF (Bidirectional Reflectance Distribution Function), which captures the zero-phase situation when a pixel line of sight is aligned with the Sun direction (opposition surge), drastically reducing contrast. Initial simulations were made with a sampling of 128 rays/pixel for high-quality rendering. We can trade off image quality for computing performance: with optimized numerical parameters, the simulation runs at 5 Hz in raytracing on a modern workstation, with a quality level adequate for real-time campaigns (SurRender's efficient raytracer yields good SNR with few rays/pixel).
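
The quality/throughput trade-off follows the usual Monte Carlo law: the numerical noise of a pixel estimate decreases as 1/sqrt(n_rays). A toy demonstration (our example, with a uniform radiance standing in for the scene):

```python
import numpy as np

rng = np.random.default_rng(1)

def render_pixel(n_rays):
    """Toy pixel estimator: mean of n_rays noisy radiance samples
    (uniform radiance in [0, 2], so the true pixel value is 1.0)."""
    return rng.uniform(0.0, 2.0, n_rays).mean()

for n in (4, 16, 64, 128):
    std = np.std([render_pixel(n) for _ in range(5000)])
    print(f"{n:4d} rays/pixel -> pixel std = {std:.4f}")
# The noise falls as 1/sqrt(n_rays): quadrupling the ray count halves it,
# which is the quality/throughput trade-off described above.
```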

In Figure 1, we show a validation test that was carried out independently from our team to verify the geospatial correctness of the Moon images. Our dataset was reproduced and images were compared with the Lunar Crater Database, which provides a census of 1.3 million craters [7] (the accuracy and completeness of this reference database is on a best-effort basis). Visual comparison shows that the image is correctly georeferenced and projected, here with a pinhole model and 70°


This is an excerpt of the original content.
