
Real-Time Neural Rasterization for Large Scenes

September 28, 2023 (updated October 5, 2023)


Jeffrey Yunfan Liu †, Yun Chen*, Ze Yang*, Jingkang Wang, Sivabalan Manivasagam, Raquel Urtasun
† denotes work done while an intern at Waabi, * denotes equal contribution

Abstract

We propose a new method for realistic real-time novel-view synthesis (NVS) of large scenes. Existing neural rendering methods generate realistic results, but primarily work for small-scale scenes (< 50 m²) and have difficulty at large scale (> 10,000 m²). Traditional graphics-based rasterization rendering is fast for large scenes but lacks realism and requires expensive manually created assets. Our approach combines the best of both worlds by taking a moderate-quality scaffold mesh as input and learning a neural texture field and shader to model view-dependent effects for enhanced realism, while still using the standard graphics pipeline for real-time rendering. Our method outperforms existing neural rendering methods, providing at least 30× faster rendering with comparable or better realism for large self-driving and drone scenes. Our work is the first to enable real-time rendering of large real-world scenes.

Overview

Neural Scene Rasterization. Our method renders urban driving scenes (1920×1080) at high quality and >100 FPS by leveraging neural textures and fast rasterization. We reconstruct driving scenes in the San Francisco Bay Area and show renderings of four streets on the map.

Video

Play with sound.

Motivation

Realistic and efficient camera simulation enables safe and scalable autonomy development. Realism lets us develop autonomous systems in simulation with confidence that they will perform similarly in the real world, while efficiency enables fast and scalable development of the autonomy system on millions of scenarios. Achieving both speed and realism for camera simulation has been a long-standing challenge. Existing neural rendering methods have demonstrated impressive results, but they struggle to achieve real-time efficiency, particularly in large scenes. On the other hand, traditional rasterization rendering is fast for large scenes but lacks the realism required for self-driving simulation.

NeuRas is a novel neural rasterization approach that combines rasterization-based graphics and neural rendering for realistic real-time rendering of large-scale scenes. It overcomes the aforementioned limitations by taking a scaffold mesh as input and incorporating a neural texture field and shader to model view-dependent effects. Compared to computationally expensive neural volume rendering, this approach enables high-speed rasterization, which scales especially well to large scenes.

Method

We first create the scene representation for rendering. Our method takes a moderate-quality mesh scaffold as input, which can be generated from multi-view stereo (MVS) or neural 3D reconstruction. We then unwrap the mesh and obtain the UV mapping for each of its vertices. Based on the generated UV mapping, we initialize a learnable UV feature map. Far-away regions such as the sky and distant buildings are modeled with multiple neural skyboxes so that the full scene can be rendered; we likewise represent these skyboxes' textures with neural feature maps.
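The learnable texture can be thought of as a feature image that is sampled with the mesh's UV coordinates. Below is a minimal PyTorch-style sketch of such a UV feature map; the class name, channel count, and resolutions are illustrative assumptions rather than the exact configuration used by NeuRas.

```python
# A minimal sketch of a learnable UV texture feature map (PyTorch).
# The 16-channel feature size and resolutions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTexture(nn.Module):
    def __init__(self, channels: int = 16, resolution: int = 1024):
        super().__init__()
        # Learnable feature map indexed by the mesh's UV coordinates.
        self.features = nn.Parameter(torch.randn(1, channels, resolution, resolution) * 0.01)

    def forward(self, uv: torch.Tensor) -> torch.Tensor:
        """Sample features at UV coordinates in [0, 1], shape (N, 2)."""
        grid = uv.view(1, 1, -1, 2) * 2.0 - 1.0            # map to [-1, 1] for grid_sample
        feats = F.grid_sample(self.features, grid, align_corners=True)
        return feats.view(self.features.shape[1], -1).t()  # (N, channels)

# One texture for the foreground mesh plus one per neural skybox, as described above.
foreground_texture = NeuralTexture()
skybox_textures = nn.ModuleList(NeuralTexture(resolution=512) for _ in range(3))
```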

Given the scene representation, we first rasterize the foreground mesh and neural skyboxes into the target view, producing a set of image feature buffers. The feature buffers are then processed with MLPs to produce a set of rendering layers, which are composited to synthesize the final RGB image. The MLPs and neural features are optimized during training. At render time, the MLPs are baked as shaders in existing rasterization engines for real-time rendering.
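As a rough illustration of this deferred shading step, the sketch below applies a small MLP to rasterized feature buffers and alpha-composites the resulting foreground layer over the skybox layer. The buffer layout, MLP size, and two-layer compositing are assumptions for illustration; producing the feature buffers themselves is handled by the standard rasterization pipeline and is not shown.

```python
# A minimal sketch of the deferred neural shading step (PyTorch).
import torch
import torch.nn as nn

class NeuralShader(nn.Module):
    def __init__(self, feat_dim: int = 16, view_dim: int = 3, hidden: int = 64):
        super().__init__()
        # Small MLP so that it can later be baked as a fragment shader.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + view_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB + alpha per pixel
        )

    def forward(self, feat_buffer: torch.Tensor, view_dirs: torch.Tensor):
        # feat_buffer: (H, W, feat_dim) rasterized neural texture features
        # view_dirs:   (H, W, 3) per-pixel viewing directions for view-dependent effects
        out = self.mlp(torch.cat([feat_buffer, view_dirs], dim=-1))
        rgb, alpha = out[..., :3].sigmoid(), out[..., 3:].sigmoid()
        return rgb, alpha

def composite(fg_rgb, fg_alpha, sky_rgb):
    # Alpha-composite the foreground rendering layer over the skybox layer.
    return fg_alpha * fg_rgb + (1.0 - fg_alpha) * sky_rgb
```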

To encourage the sharing of latent features in visually similar regions such as roads and sky, we apply vector quantization (VQ) to regularize the neural texture maps.
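This regularizer can be sketched in the spirit of VQ-VAE: each sampled texture feature is snapped to its nearest entry in a learnable codebook and penalized for drifting away from it. The codebook size, loss weights, and exact formulation below are illustrative assumptions, not necessarily the paper's.

```python
# A VQ-VAE-style sketch of the vector-quantization regularizer (PyTorch).
import torch
import torch.nn.functional as F

def vq_regularizer(texels: torch.Tensor, codebook: torch.Tensor, beta: float = 0.25):
    """texels: (N, C) sampled texture features; codebook: (K, C) learnable codes."""
    # Nearest codebook entry for every texel feature.
    dists = torch.cdist(texels, codebook)          # (N, K)
    idx = dists.argmin(dim=1)
    quantized = codebook[idx]                      # (N, C)
    # Pull codes toward features and features toward codes (commitment term),
    # encouraging visually similar regions (e.g. road, sky) to share latent features.
    return F.mse_loss(quantized, texels.detach()) + beta * F.mse_loss(texels, quantized.detach())
```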

Real-Time Rendering on Large-Scale Scenes

Compared to existing novel view synthesis approaches, NeuRas achieves the best trade-off between speed and realism. In particular, our method matches the performance of the best NeRF-based methods while being at least 30 times faster (>100 FPS).

NeuRas can render large-scale urban driving scenes with a high degree of realism in real time. These urban driving scenes typically span over 150 meters of camera movement.

Additionally, we demonstrate interactive rendering of large scenes. To the best of our knowledge, our method is the first capable of realistically rendering large scenes at a resolution of 1920×1080 in real time.

Real-Time Rendering on Drone Scenes

NeuRas produces competitive realism results and achieves real-time rendering (>400 FPS) on drone scenes, allowing for interactive visualization.

Speeding Up NeRF Rendering

Our method can speed up popular NeRF approaches by extracting meshes from their NeRF representations and learning neural textures for them. We achieve a 30× speedup without a significant drop in visual realism.
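As an illustration of the mesh-extraction step, the sketch below runs marching cubes on a density grid queried from a trained NeRF. The query_density function, grid resolution, and density threshold are assumptions for illustration; the extracted surface then serves as the scaffold mesh that gets UV-unwrapped and paired with a learned neural texture.

```python
# A minimal sketch of extracting a scaffold mesh from a NeRF density field
# via marching cubes (NumPy + scikit-image). query_density is a hypothetical
# callable mapping (N, 3) world-space points to (N,) densities.
import numpy as np
from skimage import measure

def extract_mesh(query_density, bbox_min, bbox_max, resolution=256, threshold=10.0):
    # Sample the density field on a regular 3D grid inside the scene bounds.
    xs = np.linspace(bbox_min[0], bbox_max[0], resolution)
    ys = np.linspace(bbox_min[1], bbox_max[1], resolution)
    zs = np.linspace(bbox_min[2], bbox_max[2], resolution)
    grid = np.stack(np.meshgrid(xs, ys, zs, indexing="ij"), axis=-1)   # (R, R, R, 3)
    density = query_density(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)
    # Marching cubes yields the iso-surface that serves as the mesh scaffold.
    verts, faces, _, _ = measure.marching_cubes(density, level=threshold)
    # Rescale vertices from grid indices back to world coordinates.
    scale = (np.asarray(bbox_max) - np.asarray(bbox_min)) / (resolution - 1)
    verts = verts * scale + np.asarray(bbox_min)
    return verts, faces
```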

BibTeX

@inproceedings{liu2023neural,
  title     = {Neural Scene Rasterization for Large Scene Rendering in Real Time},
  author    = {Jeffrey Yunfan Liu and Yun Chen and Ze Yang and Jingkang Wang and Sivabalan Manivasagam and Raquel Urtasun},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  year      = {2023},
}