Full-Day Tutorial: All You Need to Know about Self-Driving
Tuesday, June 18, 9am PT | Seattle Convention Center, Seattle, WA, USA
Room: Summit 445
Overview
A full-day tutorial covering all aspects of autonomous driving. It provides the background needed to understand the different tasks and their associated challenges, the sensors and data sources available and how to exploit them, and how to formulate the relevant algorithmic problems so that efficient learning and inference are possible. We will first introduce the self-driving problem setting and a broad range of existing solutions, both top-down from a high-level perspective and bottom-up from technological and algorithmic points of view. We will then extrapolate from the state of the art and discuss the remaining challenges and open problems, and where the field needs to head to provide a scalable, safe, and affordable self-driving solution for the future.
Since last year’s edition, countless new and promising avenues of research have gained traction, and we have updated our tutorial accordingly. To name a few examples: occupancy forecasting, self-supervised learning, foundation models, the rise of Gaussian Splatting and diffusion models for simulation, and the study of closed-loop vs. open-loop evaluation.
See the tutorial schedule:
Session 1: Introduction to self-driving
Presenter: Sergio Casas
9:00 AM – 9:30 AM
In this session, we will give a general introduction to self-driving and preview the contents of the tutorial.
Session 2: Hardware and sensors
Presenter: Andrei Bârsan
9:30 AM – 9:55 AM
Learn about different sensor setups (LiDAR, RADAR, camera), the trade-offs between different kinds of sensors, how to combine sensors into a complete platform, and how to design the associated compute unit.
Session 3: Perception
Presenter: Sergio Casas
9:55 AM – 10:40 AM
In this session, we will discuss how to build a robust 3D perception system by exploiting information from different sources using different sensor fusion strategies. We will also cover the output representations that have been used for perception, and the challenges of deploying a perception system in the real world, such as recognizing unknown objects and accounting for system latency.
Session 4: Motion Forecasting
Presenter: Sergio Casas
10:50 AM – 11:35 AM
Learn how and why the future state of the world is forecasted in autonomous driving. We will look into the challenges of this task, the different input and output representations, and the architectures used to tackle the problem.
Session 5: Motion planning and control
Presenter: Kelvin Wong
11:35 AM – 12:20 PM
In this session, we will discuss various learnable motion planning pipelines, the important aspects of the planning problem, and the main approaches to control.
Session 6: Intelligent data mining
Presenter: Andrei Bârsan
1:10 PM – 1:40 PM
In this session, we’ll take a step back from machine learning models and provide a broader overview of the ML development cycle, focusing on the importance of data for training and evaluation. In particular, we’ll cover recent trends in self-driving datasets and techniques for dataset curation, and provide a high-level overview of approaches for evaluating self-driving models.
Session 7: Vehicle-to-vehicle (V2V) communication
Presenter: Siva Manivasagam
1:40 PM – 1:55 PM
In this session, we will discuss how to make self-driving vehicles even safer and more capable through intelligent communication between connected vehicles and with infrastructure. We will review existing approaches to V2V communication, their trade-offs, and the available datasets.
Session 8: Simulation
Presenter: Siva Manivasagam
1:55 PM – 2:55 PM
In this session, we’ll explain the different components required to build a comprehensive simulator for autonomy testing and development. We’ll explain different approaches and recent trends for building virtual worlds, simulating their dynamics, and modeling the vehicle platform interacting within the simulator.
Session 9: Behavior modeling
Presenter: Kelvin Wong
2:55 PM – 3:40 PM
In this session, we’ll discuss how to model the behavior of the traffic actors surrounding the self-driving vehicle, a key ingredient for building realistic simulations for autonomy testing and development.
Session 10: Mapping
Presenter: Andrei Bârsan
4:00 PM – 4:30 PM
In this session, you will learn how and why maps are used in autonomous driving. We will cover the different map representations used by tasks such as motion forecasting, motion planning, and simulation, and explain their trade-offs. We will also cover online mapping, together with its benefits and challenges.
Session 11: Localization
Presenter: Andrei Bârsan
4:30 PM – 5:00 PM
This session will help you understand how self-driving vehicles robustly establish their precise position within HD maps in order to leverage them for safe and efficient autonomous driving. We will cover a broad range of approaches to localization, spanning topics as diverse as place recognition, map matching, point cloud registration, and the nascent field of neural SLAM.
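To give a flavor of one of the techniques above, the following is a minimal, illustrative sketch (not material from the tutorial itself) of rigid point cloud registration with known correspondences, i.e. the Kabsch/SVD alignment step at the core of ICP-style localization. The function name and the toy 2D data are our own for illustration.

```python
import numpy as np

def register_rigid(source, target):
    """Estimate the rigid transform (R, t) that maps source points onto
    target points, assuming known one-to-one correspondences.
    This is the closed-form alignment step used inside ICP."""
    src_c = source.mean(axis=0)          # centroid of the source cloud
    tgt_c = target.mean(axis=0)          # centroid of the target cloud
    # Cross-covariance of the centered point sets
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (source.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy example: recover a known 2D rotation and translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.0, -2.0])
pts = np.random.default_rng(0).normal(size=(50, 2))
moved = pts @ R_true.T + t_true          # q_i = R p_i + t
R_est, t_est = register_rigid(pts, moved)
```

In practice, full ICP alternates this closed-form step with nearest-neighbor correspondence search, and production localization stacks add outlier rejection and fuse the result with other sensors.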
Q & A Panel
5:00 PM – 5:15 PM
In this session, we will give concluding remarks and hold a Q&A covering all the content of the tutorial.