NeRF learns the geometry, objects and viewing angles of a particular scene, and then renders photorealistic 3D views from novel viewpoints, automatically generating synthetic data to fill in any gaps. Combining this with simulated assets is not so easy to solve, however. One idea is a neural render API that creates a combined 3D environment and spawns variations, along with data such as the distance to objects, giving developers the opportunity to modify the situation.

Another algorithm, a General Gaussian Splatting Renderer, combines speed, flexibility and improved visual fidelity. Rather than using polygons to draw an image, it uses ellipses whose size and opacity can be varied to improve the quality of the virtual environment. However, the original algorithm's approach to the Gaussian Splatting projection introduces several limitations that prevent accurate sensor simulation. These stem from its approximation error, which can be large when simulating wide-angle cameras.

There are other challenges with these neural-rendering approaches, such as inserting out-of-distribution (that is, previously unseen) objects into the 3D environment, while artifacts or blurring may affect the appearance of dynamic objects. Geometric inconsistencies can also arise, mostly in depth prediction.

Implementing neural rendering early in the sensor-simulation pipeline provides a versatile extension that keeps as many of the original features as possible. This improves the integration of camera sensors and of other raytrace-based simulations, such as Lidar and radar. An inability to support other sensor modalities is one of the biggest issues with most neural-rendering solutions applied to autonomous driving simulation.

Rebuilding the algorithm from scratch to work with existing rendering pipelines allows the tools to assemble images from various virtual cameras, including distorted ones. This enables the digital twin to simulate high-end sensor setups with multiple cameras, even with hardware-in-the-loop (HiL) testing. Because of the generality of the algorithm, sensor models that use raytracing, such as Lidar and radar, give consistent results. Runtime performance benefits too, as the renderer remains fast enough to run at a real-time frame rate of 30 frames/s, so it can also be used in HiL systems.

Developers can move the camera around freely and use different positions or sensor setups in the simulated scenario without unpredictable artifacts or glitches, and they can get up close to intricate details on all kinds of objects and surfaces. The range of applications can be extended even further, as the algorithm can be used in physics simulations or even for surface reconstruction, including potholes in the road.

While 3D environments can be built manually by 3D artists, manual builds have limitations in scalability and in addressing the Sim2Real domain gap. Gaussian rendering can also be significantly faster. For example, creating a virtual area of San Francisco could take 3D artists a month and a half, while it takes just a few days with neural reconstruction. That's the big advantage: a 20-times faster production rate.

The sensor models used in a virtual environment need to be flexible,

[Image: Neural rendering boosts the performance of simulation tools such as AIsim (Image courtesy of AIMotive)]
[Image: A simulation environment of Los Angeles for testing (Image courtesy of AB Dynamics)]
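The neural render API mentioned above is described only at a high level, so the following is a purely hypothetical sketch of what such an interface could look like: a reconstructed scene into which simulated assets are spawned as variations, with per-object distances available so the situation can be modified. Every name here (NeuralScene, spawn_variant, distance_to) is invented for illustration and does not come from the article or any real product.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "neural render API": a reconstructed scene that can
# spawn scenario variations and report per-object distances so a test engineer
# can modify the situation. All names are invented for illustration.

@dataclass
class SceneObject:
    name: str
    position: tuple   # (x, y, z) in metres, scene coordinates
    asset_id: str     # link to the simulated (non-neural) asset library

@dataclass
class NeuralScene:
    objects: list = field(default_factory=list)

    def spawn_variant(self, name, asset_id, position):
        """Insert a simulated asset into the reconstructed environment."""
        obj = SceneObject(name=name, position=position, asset_id=asset_id)
        self.objects.append(obj)
        return obj

    def distance_to(self, ego_position, obj):
        """Euclidean distance from the ego vehicle to an inserted object."""
        return sum((a - b) ** 2 for a, b in zip(ego_position, obj.position)) ** 0.5

# Usage: place a pedestrian 20 m ahead of the ego vehicle and query its distance.
scene = NeuralScene()
ped = scene.spawn_variant("pedestrian_01", asset_id="ped_walking", position=(20.0, 0.0, 0.0))
print(scene.distance_to((0.0, 0.0, 0.0), ped))   # -> 20.0
```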
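To make the idea of drawing with ellipses of varying size and opacity concrete, here is a minimal, self-contained toy version of the Gaussian-splatting compositing step: 2D splats defined by a mean, a covariance (the ellipse), a colour, an opacity and a depth are sorted and alpha-blended front to back into an image. It illustrates the general published technique only; it is not the renderer discussed in the article.

```python
import numpy as np

# Toy 2D Gaussian-splat compositor: each splat is an ellipse (mean, covariance)
# with a colour, an opacity and a depth. Splats are sorted by depth and
# alpha-blended front to back, instead of rasterising polygons.

H, W = 64, 64
ys, xs = np.mgrid[0:H, 0:W]
pix = np.stack([xs, ys], axis=-1).astype(float)            # (H, W, 2) pixel centres

splats = [
    # mean (px), 2x2 covariance (px^2), RGB colour, opacity, depth
    {"mu": np.array([20.0, 30.0]), "cov": np.array([[40.0, 10.0], [10.0, 20.0]]),
     "rgb": np.array([1.0, 0.2, 0.2]), "alpha": 0.8, "depth": 1.0},
    {"mu": np.array([35.0, 32.0]), "cov": np.array([[15.0, 0.0], [0.0, 60.0]]),
     "rgb": np.array([0.2, 0.4, 1.0]), "alpha": 0.6, "depth": 2.0},
]

image = np.zeros((H, W, 3))
transmittance = np.ones((H, W))                            # light still passing each pixel

for s in sorted(splats, key=lambda s: s["depth"]):         # front-to-back order
    d = pix - s["mu"]
    inv_cov = np.linalg.inv(s["cov"])
    # Gaussian footprint exp(-0.5 * d^T Sigma^-1 d), evaluated per pixel
    expo = np.einsum("hwi,ij,hwj->hw", d, inv_cov, d)
    a = np.clip(s["alpha"] * np.exp(-0.5 * expo), 0.0, 0.999)
    image += (transmittance * a)[..., None] * s["rgb"]
    transmittance *= (1.0 - a)                             # occlude splats behind this one

print(image.shape, image.max())
```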
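The approximation error behind the wide-angle limitation can be illustrated with the projection scheme used in the widely published Gaussian-splatting formulation, which linearises the camera projection around each Gaussian's mean via its Jacobian (following EWA splatting). The toy check below, using an assumed pinhole model and illustrative numbers, compares that first-order approximation with the exact projection of a point about 0.5 m from the mean; the discrepancy grows sharply towards the edge of a wide field of view.

```python
import numpy as np

# Compare the exact projection of a point near a Gaussian's mean with the
# first-order (Jacobian) approximation used by common splatting formulations.

F = 300.0  # focal length in pixels (illustrative value)

def project(p):
    """Pinhole projection of a 3D point (camera looks along +z)."""
    x, y, z = p
    return np.array([F * x / z, F * y / z])

def jacobian(p, eps=1e-5):
    """Numerical 2x3 Jacobian of the projection at p (central differences)."""
    J = np.zeros((2, 3))
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = eps
        J[:, i] = (project(p + dp) - project(p - dp)) / (2.0 * eps)
    return J

offset = np.array([0.35, 0.0, -0.35])          # a point ~0.5 m from the Gaussian mean
for angle_deg in (5, 45, 75):                  # from near the optical axis to far off-axis
    t = np.radians(angle_deg)
    mean = 10.0 * np.array([np.sin(t), 0.0, np.cos(t)])   # Gaussian mean 10 m from camera
    exact = project(mean + offset)
    affine = project(mean) + jacobian(mean) @ offset       # first-order approximation
    print(f"{angle_deg:2d} deg off-axis -> approximation error "
          f"{np.linalg.norm(exact - affine):.1f} px")
```

Running the loop shows the error rising from a fraction of a pixel near the axis to tens of pixels at 75 degrees off-axis, which is why wide-angle and fisheye cameras expose the weakness of the linearised projection.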
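The consistency between cameras, Lidar and radar that comes from a raytrace-based design can be pictured as a single ray-casting interface: a distorted camera (here an assumed equidistant fisheye model) and a Lidar both reduce to generating ray directions and handing them to the same scene query. The trace_ray placeholder and all parameter choices below are illustrative assumptions, not details from the article.

```python
import numpy as np

# One ray-casting interface for all sensors: a fisheye camera and a spinning
# Lidar both reduce to "give me a direction, trace a ray". The scene
# representation behind trace_ray() (splats, meshes, ...) is abstracted away.

def fisheye_ray(u, v, width, height, fov_deg=180.0):
    """Ray direction for pixel (u, v) of an equidistant fisheye camera."""
    nx = (2.0 * u / width) - 1.0                # normalised image coords in [-1, 1]
    ny = (2.0 * v / height) - 1.0
    r = np.hypot(nx, ny)
    theta = r * np.radians(fov_deg) / 2.0       # equidistant mapping: radius ~ angle
    phi = np.arctan2(ny, nx)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def lidar_ray(azimuth_deg, elevation_deg):
    """Ray direction for one Lidar beam."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.cos(el) * np.sin(az), np.sin(el), np.cos(el) * np.cos(az)])

def trace_ray(direction):
    """Placeholder for the shared scene query (splat/mesh intersection)."""
    # A real implementation would intersect the ray with the reconstructed scene
    # and return radiance for cameras or range/intensity for Lidar and radar.
    return None

# Both sensors feed the same tracer, so their views of the scene stay consistent.
camera_hit = trace_ray(fisheye_ray(100, 240, 640, 480))
lidar_hit = trace_ray(lidar_ray(azimuth_deg=15.0, elevation_deg=-2.0))
```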
RkJQdWJsaXNoZXIy MjI2Mzk4