Autonomous driving simulation systems play a crucial role in augmenting
self-driving data and simulating complex and rare traffic scenarios, thereby
ensuring navigation safety. However, traditional simulation systems, which
often rely heavily on manual modeling and 2D image editing, struggle to scale
to extensive scenes and to generate realistic simulation data. In this study, we
present S-NeRF++, an innovative autonomous driving simulation system based on
neural reconstruction. Trained on widely-used self-driving datasets such as
nuScenes and Waymo, S-NeRF++ can generate a large number of realistic street
scenes and foreground objects with high rendering quality, while offering
considerable flexibility in manipulation and simulation. Specifically, S-NeRF++
is an enhanced neural radiance field for synthesizing large-scale scenes and
moving vehicles, with improved scene parameterization and camera pose learning.
The system effectively utilizes noisy and sparse LiDAR data to refine training
and address depth outliers, ensuring high-quality reconstruction and novel-view
rendering. It also provides a diverse foreground asset bank by reconstructing
and generating different foreground vehicles, which supports comprehensive
scenario creation. Moreover, we have developed an advanced
foreground-background fusion pipeline that skillfully integrates illumination
and shadow effects, further enhancing the realism of our simulations. With the
high-quality simulated data provided by S-NeRF++, we find that perception
methods enjoy a performance boost on several autonomous driving downstream
tasks, which further demonstrates the effectiveness of our proposed simulator.