
Adv3D: Generating Safety-Critical 3D Objects through Closed-Loop Simulation

October 26, 2023 (updated November 10, 2023)


Jay Sarva †, Jingkang Wang, James Tu, Yuwen Xiong, Sivabalan Manivasagam, Raquel Urtasun
† denotes work done while an intern at Waabi
Conference: CoRL 2023


Self-driving vehicles (SDVs) must be rigorously tested on a wide range of scenarios to ensure safe deployment. The industry typically relies on closed-loop simulation to evaluate how the SDV interacts with a corpus of synthetic and real scenarios and to verify good performance. However, these tests primarily exercise only the motion-planning module of the system, and only consider behavior variations. It is key to evaluate the full autonomy system in closed loop, and to understand how variations in sensor data driven by scene appearance, such as the shape of actors, affect system performance. In this paper, we propose Adv3D, a framework that takes real-world scenarios, performs closed-loop sensor simulation to evaluate autonomy performance, and finds vehicle shapes that make the scenario more challenging, resulting in autonomy failures and uncomfortable SDV maneuvers. Unlike prior work that adds contrived adversarial shapes to vehicle rooftops or the roadside to harm perception performance, we optimize a low-dimensional shape representation to modify the vehicle shape itself in a realistic manner that degrades full autonomy performance (e.g., perception, prediction, motion planning). Moreover, we find that the shape variations optimized by Adv3D in closed loop are much more effective than those found in open loop, demonstrating the importance of finding and testing scene-appearance variations that affect full autonomy performance.


We propose a novel adversarial attack framework, Adv3D, that searches for the worst possible actor shapes in real-world scenarios through high-fidelity closed-loop simulation, and attacks the full autonomy system, including perception, prediction, and motion planning.



To deploy SDVs safely, we must rigorously test the autonomy system on a wide range of scenarios that cover the space of situations we might see in the real world, and ensure the system can respond appropriately. The industry relies on closed-loop simulation of these scenarios to test the SDV in a reactive manner, so that the effects of its decisions are evaluated over a longer horizon. This is important, as small errors in planning can cause the SDV to brake hard or swerve.

However, coverage testing is typically restricted to the behavioral aspects of the system, and only motion planning is evaluated. This falls short, as it does not consider scene appearance and ignores how perception and prediction mistakes might result in safety-critical errors with catastrophic consequences. On the other hand, existing works on physically realizable adversarial shapes usually add contrived adversarial shapes to vehicle rooftops or the roadside to harm perception only. In contrast, we are interested in building a framework that searches for challenging object shapes in a realistic manner to test the full autonomy system in closed loop.


Given a real-world scenario, Adv3D modifies the shapes of selected actors in a realistic manner, and runs LiDAR simulation and the full autonomy stack in closed loop to evaluate autonomy performance. Black-box optimization is then conducted to search for challenging actor shapes.

As shown above, to efficiently find realistic actor shapes that harm performance, we parameterize the object shape as a low-dimensional latent code and constrain the search space to lie within the bounds of actual vehicle shapes. Given a real-world driving scenario, we select nearby actors and replace their shapes with our generated ones. We then perform closed-loop sensor simulation, observe how the SDV interacts with the modified scenario over time, and measure performance. Our adversarial objective combines perception, prediction, and planning losses to find errors in every part of the autonomy stack. Finally, we conduct black-box optimization per scenario, enabling testing of any autonomy system at scale.
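To make the search procedure concrete, here is a minimal sketch of black-box optimization over a bounded shape latent. The function `simulate_scenario`, the latent dimensionality, the bounds, and the equal loss weighting are all our assumptions for illustration, not the paper's actual implementation: in practice the simulator would rebuild the actor mesh from the latent, run LiDAR simulation and the autonomy stack in closed loop, and return per-module losses.

```python
import numpy as np

def search_adversarial_shape(
    simulate_scenario,        # hypothetical: z -> dict of per-module losses
    latent_dim=8,             # assumed dimensionality of the shape code
    latent_bounds=(-2.0, 2.0),  # assumed bounds keeping shapes realistic
    iters=50,
    pop_size=16,
    seed=0,
):
    """Black-box random search for the shape latent that maximizes the
    combined autonomy loss. A population of candidates is sampled each
    iteration and the best-scoring latent is kept."""
    rng = np.random.default_rng(seed)
    lo, hi = latent_bounds
    best_z, best_score = None, -np.inf
    for _ in range(iters):
        # Sample candidate shape codes within the realistic bounds.
        candidates = rng.uniform(lo, hi, size=(pop_size, latent_dim))
        for z in candidates:
            losses = simulate_scenario(z)
            # The adversary maximizes the sum of module losses
            # (equal weights are an assumption here).
            score = (losses["perception"]
                     + losses["prediction"]
                     + losses["planning"])
            if score > best_score:
                best_z, best_score = z, score
    return best_z, best_score
```

A real deployment would replace the naive random search with a stronger black-box optimizer (e.g., an evolutionary strategy), but the interface stays the same: the optimizer only observes the scalar objective, so any autonomy system can be tested without gradients.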

Adv3D Generates Challenging Actor Shapes

Our approach can significantly degrade both subsystem performance and execution performance. Two qualitative examples of optimized adversarial shapes together with autonomy outputs are shown. Left: Adv3D successfully creates an uncommon actor shape (a short and wide truck) that degrades detection performance (low confidence or missed detections). Right: the closed-loop attack finds a tiny city car that causes inaccurate detection and prediction for an actor behind the SDV, resulting in the SDV applying a strong deceleration. Note that the modified actor shape alters the simulated LiDAR such that perception and prediction outputs are harmed even for other actors in the scene.

Importance of Closed-Loop Simulation

We also compare our approach against optimizing actor shapes in the open-loop setting. In the open-loop shape attack, the ego vehicle follows the original trajectory in the recorded log, and we optimize the actor shape with the same objective. We then test the optimized actor shapes in closed-loop simulation, where the ego vehicle is controlled by the autonomy model. We report metrics for module-level performance, including detection (AP and recall), prediction (ADE), and planning (lateral acceleration and jerk). To evaluate how the SDV executes in closed loop, we also report system-level driving-comfort metrics (i.e., lateral acceleration and jerk over the executed 5 s SDV trajectory).

As shown in the following table, generating adversarial objects in closed-loop simulation yields substantially worse autonomy performance compared to open-loop. This indicates that it is insufficient to study adversarial robustness in open-loop as the attacks do not generalize well when the SDV is reactive.
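The driving-comfort metrics above can be sketched with simple finite differences over the executed trajectory. The discretization below (central differences via `np.gradient`, lateral acceleration as the acceleration component perpendicular to the heading) is our assumption for illustration; the paper's exact formulas may differ.

```python
import numpy as np

def comfort_metrics(xy, dt=0.1):
    """Compute peak jerk and lateral-acceleration magnitudes for an
    executed SDV trajectory, given as an (N, 2) array of positions
    sampled every `dt` seconds."""
    xy = np.asarray(xy, dtype=float)
    vel = np.gradient(xy, dt, axis=0)    # (N, 2) velocity
    acc = np.gradient(vel, dt, axis=0)   # (N, 2) acceleration
    jerk = np.gradient(acc, dt, axis=0)  # (N, 2) jerk
    speed = np.linalg.norm(vel, axis=1)
    # Unit heading vector, guarded against zero speed.
    heading = vel / np.maximum(speed[:, None], 1e-6)
    # Left-pointing normal to the heading; projecting the acceleration
    # onto it gives the lateral component.
    normal = np.stack([-heading[:, 1], heading[:, 0]], axis=1)
    lat_acc = np.abs(np.sum(acc * normal, axis=1))
    return {
        "max_lat_acc": float(lat_acc.max()),
        "max_jerk": float(np.linalg.norm(jerk, axis=1).max()),
    }
```

For a straight constant-speed trajectory both metrics are zero; hard braking or swerving induced by an adversarial shape shows up directly as spikes in these values.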

Importance of Attacking Full Autonomy

Our adversarial objective takes the full autonomy stack into account. To demonstrate the importance of attacking the full system, we compare against three baselines, inspired by existing works, that each attack only one module. As shown in the following table, attacking each downstream module produces challenging objects that are risky only to that module. In contrast, our approach effectively balances all tasks to generate worst-case 3D objects that challenge the entire autonomy stack, serving as a holistic tool for identifying potential system failures. We mark the methods with the best performances using gold, silver, and bronze medals.
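The difference between the single-module baselines and the full objective amounts to how the per-module losses are weighted. A minimal sketch, where the weight values are assumed hyperparameters and not the paper's actual settings:

```python
def combined_objective(losses, weights=None):
    """Weighted sum of per-module adversarial losses.

    `losses` maps module names to scalar losses. With the default
    (assumed) equal weights this is the full-stack objective; zeroing
    all but one weight recovers a single-module baseline attack.
    """
    if weights is None:
        weights = {"perception": 1.0, "prediction": 1.0, "planning": 1.0}
    return sum(weights[k] * losses[k] for k in weights)

# Single-module baseline, e.g., a perception-only attack:
perception_only = {"perception": 1.0, "prediction": 0.0, "planning": 0.0}
```

Because the optimizer maximizes this scalar, a single-module weighting happily trades away errors in the other modules, which is why its shapes transfer poorly to the rest of the stack.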

Limitations and Conclusion

Adv3D’s main limitation is that we do not optimize actor behaviors, as prior work does (e.g., AdvSim), to allow for more diverse adversarial scenario generation. Moreover, how to incorporate Adv3D-generated safety-critical objects into new scenarios for robust training remains future work. While our shapes are more realistic than prior work, we also occasionally observe convergence to shapes that have artifacts or oblong wheels. Better shape representations (including for non-vehicle classes) and optimization approaches (e.g., multi-objective optimization) can help create higher-fidelity and more diverse adversarial objects more efficiently.

In this paper, we present a closed-loop adversarial framework to generate challenging 3D shapes for the full autonomy stack. Given a real-world traffic scenario, our approach modifies the geometries of nearby interactive actors, then runs realistic LiDAR simulation and modern autonomy models in closed loop. Extensive experiments on two modern autonomy systems highlight the importance of performing adversarial attacks through closed-loop simulation. We hope this work can provide useful insights for future adversarial robustness studies in the closed-loop setting.


@inproceedings{sarva2023adv3d,
  title={Adv3D: Generating Safety-Critical 3D Objects through Closed-Loop Simulation},
  author={Jay Sarva and Jingkang Wang and James Tu and Yuwen Xiong and Sivabalan Manivasagam and Raquel Urtasun},
  booktitle={7th Annual Conference on Robot Learning},
  year={2023}
}