Digital Twins · Sensor Simulation · Simulation

NeuSim: Reconstructing Objects in-the-wild for Realistic Sensor Simulation

June 1, 2023 (updated July 25, 2023)

Reconstructing Objects in-the-wild for Realistic Sensor Simulation

Ze Yang,  Siva Manivasagam,  Yun Chen,  Jingkang Wang,  Rui Hu,  Raquel Urtasun

Abstract

Reconstructing objects from real-world data and rendering them at novel views is critical to bringing realism, diversity and scale to simulation for robotics training and testing. In this work, we present NeuSim, a novel approach that estimates accurate geometry and realistic appearance from sparse in-the-wild data captured at a distance and from limited viewpoints. Towards this goal, we represent the object surface as a neural signed distance function and leverage both LiDAR and camera sensor data to reconstruct smooth and accurate geometry and normals. We model the object appearance with a robust physics-inspired reflectance representation effective for in-the-wild data. Our experiments show that NeuSim has strong view synthesis performance on challenging scenarios with sparse training views. Furthermore, we showcase composing NeuSim assets into a virtual world and generating realistic multi-sensor data for evaluating self-driving perception models.

Overview

Given a camera video and LiDAR sweeps as input, our model reconstructs accurate geometry and surface properties, which can be used to synthesize realistic appearance under novel viewpoints using our physics-based radiance module, enabling realistic sensor simulation for self-driving.

Video

Method

NeuSim is composed of a structured neural surface representation and a physics-based reflectance model. This decomposed representation enables generalization to new views from sparse in-the-wild viewpoints. Given a continuous 3D location, NeuSim outputs the signed distance from that point to the object surface, the diffuse albedo, and the specular reflection. The gradient of the signed distance field gives the surface normal, which is then used to shade the diffuse and specular components and obtain the final RGB color. We also render the LiDAR depth and intensity, as well as the object mask, from the learned representation.
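To make the decomposition concrete, here is a minimal PyTorch-style sketch of such a representation. The module names, network sizes, and the toy Lambertian-plus-Blinn-Phong shading are illustrative assumptions standing in for the paper's physics-inspired reflectance model, not the authors' implementation.

# Sketch of a decomposed neural representation: an SDF branch for geometry
# and a reflectance branch for appearance; normals come from the SDF gradient.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuSimSketch(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        # Geometry branch: 3D point -> signed distance to the object surface.
        self.sdf = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )
        # Appearance branch: 3D point -> diffuse albedo (3) + specular weight (1).
        self.reflectance = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, x, view_dir, light_dir):
        # Detach so we can differentiate the SDF w.r.t. the query points.
        x = x.detach().requires_grad_(True)
        sdf = self.sdf(x)
        # Surface normal = normalized gradient of the signed distance field.
        (grad,) = torch.autograd.grad(sdf.sum(), x, create_graph=True)
        normal = F.normalize(grad, dim=-1)

        out = self.reflectance(x)
        albedo = torch.sigmoid(out[..., :3])
        spec_weight = torch.sigmoid(out[..., 3:])

        # Toy shading: Lambertian diffuse plus a Blinn-Phong-style specular lobe
        # (a stand-in for the paper's physics-inspired reflectance model).
        diffuse = albedo * (normal * light_dir).sum(-1, keepdim=True).clamp(min=0.0)
        half = F.normalize(view_dir + light_dir, dim=-1)
        specular = spec_weight * (normal * half).sum(-1, keepdim=True).clamp(min=0.0) ** 32
        rgb = (diffuse + specular).clamp(0.0, 1.0)
        return sdf, normal, rgb

In a setup like this, the rendered rgb would be supervised against camera pixels, while the signed distance along LiDAR rays would be supervised against the measured returns, so both sensors constrain the same geometry.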

Geometry Reconstruction

We can reconstruct the full 360° shape from partial observations (left video). For each example, the vehicle of interest is annotated with a red bounding box on the left, and the reconstructed vehicle mesh is shown on the right.

360° Free View Rendering

We can reconstruct the full 360° shape by applying structural priors such as symmetry, allowing photorealistic rendering from arbitrary viewpoints.
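One simple way such a symmetry prior could be instantiated (an illustrative assumption, not necessarily the paper's formulation) is to query the signed distance field at a point and at its mirror image across the object's lateral symmetry plane, then fuse the two predictions:

# Hypothetical left-right symmetry prior on a signed distance field:
# evaluate at the point and at its reflection across the x = 0 plane
# (object frame), then average, so unobserved regions inherit geometry
# from their observed mirror counterparts.
import torch

def symmetric_sdf(sdf_fn, points: torch.Tensor) -> torch.Tensor:
    """sdf_fn maps (N, 3) points to (N, 1) signed distances."""
    mirrored = points * points.new_tensor([-1.0, 1.0, 1.0])  # reflect across x = 0
    return 0.5 * (sdf_fn(points) + sdf_fn(mirrored))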

Novel View Synthesis

When tested on novel viewpoints, our approach generalizes better to large viewpoint changes than other methods, demonstrating the value of our physics-based reflectance model. Our method also captures more fine-grained details and more accurate colors.

Results on Non-vehicle Objects

Our method also works on non-vehicle objects, such as a moped with tiny handlebars, or a thin wooden scaffold structure that blends into the background.

Realistic Sensor Simulation

The reconstructed assets can be inserted into existing scenes to generate new scenarios for self-driving simulation. Because our assets are consistent across sensors, we can realistically generate both the LiDAR point clouds (top) and the camera images (bottom) for the modified scene. The left video demonstrates manipulation of the inserted actor, while the right video shows the actor aggressively merging into our lane.
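A hedged sketch of how inserted assets could be queried consistently by both sensors: take the minimum signed distance over all assets (each query transformed into the asset's own frame) and sphere-trace rays against that combined field, so camera pixels and LiDAR returns come from the same geometry. The function names and the tracing loop below are illustrative assumptions, not the authors' pipeline.

# Compose per-asset SDFs into a scene and ray-trace it; the same traced
# surface serves a camera renderer and a LiDAR raycaster.
import torch

def scene_sdf(assets, x_world: torch.Tensor) -> torch.Tensor:
    """assets: list of (sdf_fn, world_to_object 4x4 pose). x_world: (N, 3)."""
    dists = []
    for sdf_fn, world_to_obj in assets:
        # Transform world-frame points into the asset's object frame.
        x_h = torch.cat([x_world, torch.ones_like(x_world[..., :1])], dim=-1)
        x_obj = (x_h @ world_to_obj.T)[..., :3]
        dists.append(sdf_fn(x_obj))
    # Union of assets = pointwise minimum of their signed distances.
    return torch.stack(dists, dim=0).min(dim=0).values

def sphere_trace(assets, origins, directions, steps: int = 64, eps: float = 1e-3):
    """Returns per-ray hit depth, usable for both camera pixels and LiDAR returns."""
    t = torch.zeros(origins.shape[:-1] + (1,), device=origins.device)
    for _ in range(steps):
        d = scene_sdf(assets, origins + t * directions)
        t = t + d                      # march by the conservative SDF bound
        if (d.abs() < eps).all():      # all rays have converged to a surface
            break
    return t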

BibTeX

@inproceedings{yang2023reconstructing,
  title     = {Reconstructing Objects in-the-wild for Realistic Sensor Simulation},
  author    = {Yang, Ze and Manivasagam, Sivabalan and Chen, Yun and Wang, Jingkang and Hu, Rui and Urtasun, Raquel},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  year      = {2023},
}