All You Need To Know About Self-Driving

Full-Day Tutorial

CVPR 2024 | Full-Day | Tuesday, June 18, 9am Pacific Time
In-person, Seattle Convention Center, Seattle, WA, USA
Room: Summit 445

Watch on YouTube

Overview

A full-day tutorial covering all aspects of autonomous driving. This tutorial will provide the necessary background for understanding the different tasks and associated challenges, the different sensors and data sources one can use and how to exploit them, as well as how to formulate the relevant algorithmic problems such that efficient learning and inference are possible. We will first introduce the self-driving problem setting and a broad range of existing solutions, both top-down from a high-level perspective and bottom-up from technological and algorithmic points of view. We will then extrapolate from the state of the art and discuss where the challenges and open problems lie, and where we need to head to provide a scalable, safe, and affordable self-driving solution for the future.

Since last year’s edition (https://waabi.ai/cvpr-2023/), many new and promising avenues of research have gained traction, and we have updated our tutorial accordingly. To name a few examples, this includes topics like occupancy forecasting, self-supervised learning, foundation models, the rise of Gaussian Splatting and diffusion models for simulation, and the study of closed-loop vs. open-loop evaluation.

CVPR 2024 Tutorial Schedule

Introduction to Self-Driving

9:00 AM – 9:30 AM

In this session we will give a general introduction to self-driving and review the content of this tutorial.

Hardware and Sensors

9:30 AM – 9:55 AM

Learn about different sensor setups (LiDAR, radar, camera), the trade-offs between different kinds of sensors, how to put the sensors together, and how to design the associated compute unit.

Perception

9:55 AM – 10:40 AM

In this session we will discuss how to build a robust 3D perception system by exploiting information from different sources with different sensor fusion strategies. We will also cover the different output representations that have been used for perception, and introduce challenges that arise when deploying a perception system in the real world, such as recognizing unknown objects and accounting for system latency.

☕️ Short break

10:40 AM – 10:50 AM

Motion Forecasting

10:50 AM – 11:35 AM

Learn how and why the future state of the world is forecast in autonomous driving. We will look into the challenges of this task, different input and output representations, and the different architectures used to tackle this problem.

Motion Planning and Control

11:35 AM – 12:20 PM

In this session, we will discuss various learnable motion planning pipelines, important aspects of the planning problem, and the main approaches to control.

🍔 Lunch break

12:20 PM – 1:10 PM

Intelligent Data Mining

1:10 PM – 1:40 PM

In this session, we’ll take a step back from machine learning models and provide a broader overview of the ML development cycle, focusing on the importance of data for training and evaluation. In particular, we’ll cover recent trends in self-driving datasets and techniques for dataset curation, and provide a high-level overview of approaches for evaluating self-driving models.

Vehicle-to-Vehicle (V2V) Communication

1:40 PM – 1:55 PM

In this session, we will discuss how to make self-driving vehicles even safer and better through intelligent communication between connected vehicles as well as infrastructure. We will review existing approaches to V2V communication, their different trade-offs, and available datasets.

Simulation

1:55 PM – 2:55 PM

In this session, we’ll explain the different components required to build a comprehensive simulator for autonomy testing and development. We’ll cover different approaches and recent trends for building virtual worlds, simulating their dynamics, and modelling the vehicle platform interacting within the simulator.

Behavior Modeling

2:55 PM – 3:40 PM

In this session, we’ll discuss how to model realistic behavior for the traffic agents that populate the simulator, a key ingredient for closed-loop testing and development of autonomy systems.

🥐 Afternoon break

3:40 PM – 4:00 PM

Mapping

4:00 PM – 4:30 PM

In this session you will learn how and why maps are used in autonomous driving. We will cover the different kinds of map representations used by tasks such as motion forecasting, motion planning, and simulation, and explain their trade-offs. We will also cover online mapping together with its benefits and challenges.

Localization

4:30 PM – 5:00 PM

This session will help you understand how self-driving vehicles robustly establish their precise position within HD maps in order to leverage them for safe and efficient autonomous driving. We will survey the broad range of approaches to localization, covering topics as diverse as place recognition, map matching, point cloud registration, and the nascent field of neural SLAM.

Q & A Panel

5:00 PM – 5:15 PM

In this final session we will give concluding remarks about the tutorial and hold a Q&A covering all of its content.