
SaLF: Sparse Local Fields for Multi-Sensor Rendering in Real-Time

April 24, 2025


Yun Chen*, Matthew Haines*†, Jingkang Wang, Krzysztof Baron-Lis, Sivabalan Manivasagam, Ze Yang, Raquel Urtasun

* = equal contribution, † = work done while at Waabi

Abstract

SaLF (Sparse Local Fields) is a novel volumetric representation for autonomous driving sensor simulation that supports both rasterization and ray-tracing. It represents scenes as sparse voxels containing local implicit fields, enabling efficient rendering of cameras (>30 FPS) and LiDARs (>400 FPS) with fast training times (<30 min) on an RTX 3090. Unlike existing neural rendering methods, SaLF combines advanced sensor modeling capabilities with superior computational efficiency while maintaining high visual fidelity across diverse environmental conditions and complex driving scenarios.

Motivation

Autonomous driving development requires high-fidelity sensor simulation, but existing approaches trade off computational efficiency against sensor modeling capabilities. Current methods excel at either camera or LiDAR simulation, but not both, and advanced sensor effects typically come at the cost of real-time performance. SaLF addresses this gap by introducing a unified volumetric representation that enables real-time rendering of multiple sensor types while supporting sophisticated sensor modeling capabilities, ultimately accelerating autonomous driving development through more realistic and efficient simulation.

Method

SaLF represents scenes as a sparse grid of voxel primitives where each voxel contains a local implicit field mapping 3D coordinates to density and color. It uses adaptive pruning and densification to efficiently handle large scenes while preserving fine details. Each voxel has geometric parameters (position, scale, rotation) and learnable parameters (geometry field, color field, spherical harmonics).
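
To make the representation concrete, here is a minimal sketch of what a SaLF-style voxel primitive might look like. The field names and the linear stand-in for the tiny local fields are illustrative assumptions, not the paper's actual parameterization:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class SparseLocalVoxel:
    """One voxel primitive: geometric pose plus small learnable fields."""
    position: np.ndarray          # (3,) voxel center in world coordinates
    scale: np.ndarray             # (3,) per-axis extent
    rotation: np.ndarray          # (3, 3) orientation matrix
    geometry_weights: np.ndarray  # (4,) params of the local density field
    color_weights: np.ndarray     # (3, 4) params of the local color field
    sh_coeffs: np.ndarray         # spherical harmonics for view dependence

    def query(self, x_world: np.ndarray):
        """Map a world point to (density, rgb) via the local implicit field."""
        # Transform into the voxel's local frame.
        x_local = self.rotation.T @ (x_world - self.position) / self.scale
        # Linear stand-in for the learned local fields (illustration only).
        feat = np.append(x_local, 1.0)
        density = max(float(self.geometry_weights @ feat), 0.0)
        rgb = np.tanh(self.color_weights @ feat)
        return density, rgb
```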

SaLF supports dual rendering:

  1. Ray-casting with octree acceleration for complex sensors and effects like refraction and shadows
  2. Tile-based splatting for efficient pinhole camera rendering

This unified approach allows choosing the optimal rendering method based on sensor type while maintaining consistent visual quality.
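
Whichever path is chosen, both ultimately reduce to the standard volume-rendering quadrature: density and color samples along a ray are alpha-composited front to back. A minimal NumPy sketch of that compositing step (gathering samples from the sparse voxels is elided):

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Front-to-back alpha compositing of samples along one ray.

    densities: (N,) non-negative density sigma_i at each sample
    colors:    (N, 3) rgb at each sample
    deltas:    (N,) spacing between consecutive samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)    # per-sample opacity
    # Transmittance T_i = prod_{j < i} (1 - alpha_j)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas                      # contribution per sample
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights
```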

Results

Camera Simulation

SaLF achieves high photorealism on complex urban driving scenes, reconstructing them rapidly (under 30 minutes). The resulting representation can be rendered in real-time (>30 FPS) from novel viewpoints, handling diverse backgrounds, traffic participants, and lighting conditions.

LiDAR Simulation

In addition to cameras, SaLF also efficiently simulates LiDAR sensors, generating realistic point clouds at high speeds (>400 FPS).
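
As a rough illustration, LiDAR simulation amounts to casting one ray per beam and azimuth step through the volume. The sketch below generates a spinning-LiDAR ray pattern; the beam count and elevation range are invented placeholders, not any specific sensor's spec:

```python
import numpy as np

def spinning_lidar_rays(num_beams=64, num_azimuths=1024,
                        elev_min=-0.43, elev_max=0.19):
    """Unit ray directions for one revolution of a spinning LiDAR (radians)."""
    elev = np.linspace(elev_min, elev_max, num_beams)
    azim = np.linspace(-np.pi, np.pi, num_azimuths, endpoint=False)
    az, el = np.meshgrid(azim, elev)
    dirs = np.stack([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)], axis=-1)        # (beams, azimuths, 3)
    return dirs.reshape(-1, 3)
```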

Advanced Camera Modeling: Panorama

SaLF’s flexible representation enables simulation of diverse camera models beyond standard pinhole cameras. Here we demonstrate rendering 360° panoramic views, crucial for simulating surround-view systems and providing complete environmental awareness.
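
Panoramic rendering follows from casting rays under the standard equirectangular mapping rather than a pinhole projection. A minimal sketch of that ray generation (not SaLF's internal code):

```python
import numpy as np

def panorama_ray_directions(height, width):
    """Unit ray directions for an equirectangular 360-degree panorama."""
    # Azimuth sweeps the full circle; elevation spans pole to pole.
    az = (np.arange(width) + 0.5) / width * 2.0 * np.pi - np.pi
    el = np.pi / 2.0 - (np.arange(height) + 0.5) / height * np.pi
    az, el = np.meshgrid(az, el)
    return np.stack([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)], axis=-1)        # (H, W, 3)
```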

Rolling Shutter Simulation

Accurate sensor simulation requires modeling physical effects like rolling shutter. SaLF captures the temporal distortion artifacts common in self-driving vehicle (SDV) sensors, which are especially visible in dynamic scenes with relative motion between the sensor and objects, and which must be reproduced for realistic simulation of high-speed scenarios.
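
A common way to model rolling shutter is to assign each image row its own capture time and interpolate the sensor pose across the readout. The sketch below does this assuming constant motion during readout; it is an illustration of the general technique, not necessarily SaLF's exact model:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def rolling_shutter_poses(t0, R0, t1, R1, num_rows, readout_time):
    """One sensor pose per image row, assuming constant motion during readout.

    t0, t1: (3,) sensor translations at readout start / end
    R0, R1: scipy Rotation at readout start / end
    """
    row_times = np.linspace(0.0, readout_time, num_rows)
    fractions = row_times / readout_time
    # Linear interpolation for translation, slerp for rotation.
    translations = t0[None] + fractions[:, None] * (t1 - t0)[None]
    slerp = Slerp([0.0, readout_time], Rotation.concatenate([R0, R1]))
    rotations = slerp(row_times)
    return translations, rotations
```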

Ray Tracing Capabilities

Leveraging its ray-casting rendering path, SaLF can simulate complex light transport phenomena. This includes effects like refraction, reflections, and shadows, enhancing realism.
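
Refraction, for instance, is governed by Snell's law: when a ray crosses a boundary between media, its direction bends according to the ratio of refractive indices. A standard, self-contained implementation of that bending step (the surrounding ray-tracer is elided):

```python
import numpy as np

def refract(direction, normal, eta):
    """Refract a unit ray at a surface via Snell's law; eta = n1 / n2.

    Returns the refracted unit direction, or None on total internal reflection.
    """
    cos_i = -float(np.dot(direction, normal))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection: no transmitted ray
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * direction + (eta * cos_i - cos_t) * normal
```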

Conclusion

SaLF offers a fast, realistic, and versatile way to simulate self-driving sensors like cameras and LiDAR. By uniquely supporting both rasterization and ray-tracing in one sparse voxel format, it achieves real-time speeds, handles complex sensors and effects, and trains much faster, enabling more scalable and comprehensive sensor simulation in self-driving.

BibTeX

@article{chen2025salf,
  title={SaLF: Sparse Local Fields for Multi-Sensor Rendering in Real-Time},
  author={Chen, Yun and Haines, Matthew and Wang, Jingkang and Baron-Lis, Krzysztof and Manivasagam, Sivabalan and Yang, Ze and Urtasun, Raquel},
  journal={arXiv preprint},
  year={2025},
}