
Waabi World is the most scalable, highest fidelity closed-loop simulator to ever exist and the key to unlocking the potential of self-driving technology.


It is defined by four core capabilities:

  1. Builds digital twins of the world from data, automatically and at scale;
  2. Performs near-real-time, high-fidelity sensor simulation, enabling testing of the entire software stack in an immersive and reactive manner;
  3. Creates scenarios to stress-test the Waabi Driver, automatically and at scale;
  4. Teaches the Waabi Driver to learn from its mistakes and master the skills of driving without human intervention.
Waabi World and its core capabilities: World creation, camera and LiDAR sensor simulation, scenario generation and testing, and learning to drive in simulation


Let’s break these capabilities down.

Waabi World builds digital twins of the world from data, automatically and at scale

To be effective, a simulator needs to recreate the real world in high fidelity, in all its diversity and dynamism. Traditional simulators leverage artists and animators to build virtual worlds manually. Artists design CAD models, add textures, and then assign material properties for every individual object, such as trees, buildings, vehicles, pedestrians, etc. Then, they either manually compose these objects together to create a scene or perform simple procedural content generation to create an artificial virtual world.

This process is not only time-consuming and cost-prohibitive; its output also lacks fidelity and fails to encompass the full range of objects and scenes we observe in the real world.

In contrast, Waabi World leverages AI to reconstruct the geometry, appearance, and material properties of real-world objects and backgrounds from sensor data such as LiDAR returns and camera images. This enables us to automatically recreate digital twins of the world from everywhere we drive, with the diversity, scale, and realism of the world we live in.
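To make the idea of building digital twins from sensor data concrete, here is a toy sketch: LiDAR returns are fused into a voxel grid and each voxel is colored from camera samples projected onto the points. Every name here is illustrative, and Waabi's actual pipeline uses learned neural reconstruction rather than simple voxel averaging.

```python
# Toy sketch of fusing LiDAR and camera data into a voxelized "digital twin".
# VOXEL is an assumed resolution; the real system learns geometry, appearance,
# and material properties rather than averaging points into voxels.
from collections import defaultdict

VOXEL = 0.5  # voxel edge length in meters (assumed)

def voxel_key(x, y, z):
    # Map a 3D point to its integer voxel coordinates.
    return (int(x // VOXEL), int(y // VOXEL), int(z // VOXEL))

def fuse(lidar_points, colors):
    """lidar_points: [(x, y, z)]; colors: parallel [(r, g, b)] samples
    obtained by projecting each point into a camera image."""
    grid = defaultdict(lambda: [0, 0.0, 0.0, 0.0])  # hit count, r/g/b sums
    for (x, y, z), (r, g, b) in zip(lidar_points, colors):
        cell = grid[voxel_key(x, y, z)]
        cell[0] += 1
        cell[1] += r; cell[2] += g; cell[3] += b
    # Return each occupied voxel's hit count and average color.
    return {k: (n, (r / n, g / n, b / n)) for k, (n, r, g, b) in grid.items()}

twin = fuse([(0.1, 0.2, 0.0), (0.3, 0.1, 0.1), (5.0, 5.0, 0.0)],
            [(200, 10, 10), (220, 20, 10), (10, 200, 10)])
```

The point of the sketch is the data flow, not the method: raw sensor returns in, a reusable 3D asset out, with no artist in the loop.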

We can recreate reality as seen. These objects can be used in our sensor simulation system for creating new safety critical scenarios that test the autonomy system.


Waabi World performs near-real-time, high-fidelity sensor simulation, enabling testing of the entire software stack in an immersive and reactive manner

Recreating the world exactly is an amazing feat in and of itself. But for a simulator to truly replace driving in the real world, the software stack needs to behave the same in simulation as it would in the real world. Waabi World achieves this by simulating how the Waabi Driver would observe or “see” the virtual world through its sensors, just like how it would see the real world. This is the only way to properly test the entire stack in simulation and teach the self-driving “brain” how to drive.

Traditional sensor simulators use physics-based rendering engines that model how light interacts with the artist-designed virtual world and how the sensor receives it. However, it is extremely challenging to accurately simulate all of the different physical phenomena that affect each sensor (such as specular reflections in cameras, spurious LiDAR returns from exhaust and fog, and multi-path returns in RADAR to name a few). Additionally, artist-designed worlds often lack the accurate physical properties needed for fully physics-based simulation, which results in unrealistic sensor data. 

Instead, Waabi World leverages AI along with simplified physics-based rendering to simulate realistic sensor data in near real-time. Our AI algorithms, combined with our high-quality recreated virtual worlds, learn to make the physics approximation look more realistic, while being computationally more efficient than traditional complex physics simulators.
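The hybrid "simplified physics plus AI" idea above can be sketched in a few lines. The physics model and the learned correction below are trivial stand-ins (a crude falloff model and a linear map); the real system uses neural networks trained on real sensor data, but the shape of the pipeline is the same.

```python
# Hedged sketch of hybrid sensor simulation: a cheap physics approximation
# produces raw sensor values, and a learned correction nudges them toward
# realism. Both functions are illustrative stand-ins, not Waabi's models.
def physics_render(depths):
    # Crude LiDAR intensity model: returns fall off with squared distance.
    return [1.0 / (d * d) for d in depths]

def learned_correction(raw, gain=1.1, bias=0.02):
    # Stand-in for a trained network mapping approximate data to realistic data.
    return [gain * v + bias for v in raw]

def simulate_sensor(depths):
    return learned_correction(physics_render(depths))

out = simulate_sensor([1.0, 2.0])
```

The design win is that the physics stage only needs to be roughly right, which keeps it fast; the learned stage closes the realism gap.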

With automatic generation of high-quality objects and virtual worlds, we can not only recreate reality as seen, but also modify it by removing, adding, or changing the behavior of “actors” (including the Waabi Driver) in scenarios and re-simulating the sensors in near real-time. This enables us to create an endless number of diverse worlds for the Waabi Driver to experience—unlocking the ability to realistically test the full software stack in an immersive and reactive manner across interesting or safety-critical edge cases, and helping the Waabi Driver learn sophisticated driving skills.
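Scenario editing of this kind reduces to a simple data operation: a logged scene is a collection of actors with trajectories, and we can delete a logged actor or insert a synthetic one before re-running sensor simulation on the modified world. The classes and names below are illustrative only.

```python
# Toy sketch of scenario editing: remove a logged actor, add a synthetic one.
# Actor and Scenario are hypothetical stand-ins for a real scene description.
from dataclasses import dataclass, field

@dataclass
class Actor:
    actor_id: str
    trajectory: list  # [(t, x, y)] waypoints over time

@dataclass
class Scenario:
    actors: dict = field(default_factory=dict)

    def add(self, actor):
        self.actors[actor.actor_id] = actor

    def remove(self, actor_id):
        self.actors.pop(actor_id, None)

scene = Scenario()
scene.add(Actor("logged_car", [(0, 0, 0), (1, 5, 0)]))
scene.add(Actor("synthetic_truck", [(0, 10, 3), (1, 12, 0)]))  # cut-in maneuver
scene.remove("logged_car")  # re-simulating sensors now shows only the truck
```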

We first show an example of the original camera and LiDAR sequence of a real scenario. We then update the scene with a new, fully simulated “actor” performing a lane change (from left to right) and visualize the simulated sensor data. We can do this at scale, taking any existing scenario and modifying it to test the autonomy system.

This groundbreaking capability also means that Waabi World can be employed to test different sensors, sensor configurations, and vehicle platforms before they even exist. This is a radical departure from industry practice, where it is typical to design and build a new sensor configuration, capture real-world data, label it, and then retrain the software stack on the newly captured data. Many months can pass between a new sensor configuration being designed, implemented, and validated. And it’s not uncommon for the new configuration to include suboptimal choices, requiring the entire process to start again.

In Waabi World, new sensor configurations can be designed and validated rapidly as we can teach the Waabi Driver how to use them before they even exist in the real world, ultimately allowing for much faster development of self-driving vehicles.

Waabi World can simulate the sensor configuration for a self-driving passenger car or semi-truck platform. The car platform has a single LiDAR sensor and camera. The semi-truck platform has two elevated LiDAR sensors and an elevated camera sensor.


Waabi World creates scenarios to stress-test the Waabi Driver, automatically and at scale

Today, the process of testing self-driving vehicles is impractical and time-consuming. Exposing a self-driving vehicle to the sheer volume and diversity of experiences needed to adequately test its skills would be impossible to achieve in our lifetimes with real-world testing alone.

Waabi World uses AI to create traffic scenarios to test the Waabi Driver, generating all sorts of variations, with all sorts of traffic behavior, across all sorts of geographies, automatically and at scale.

Importantly, these are not static scenarios that simply play out like a movie. Driving is an interactive experience, and Waabi World replicates this.

We call this closed-loop simulation. Think of it like a video game, where every action has a reaction. Specifically, the simulator tells the “actors” in the scenario where to go and what to do, the simulated sensors that see the updated world then tell the Waabi Driver what it would observe, and then the Waabi Driver decides how it will react. The simulator then moves the Waabi Driver in the virtual world according to its decision and the other traffic participants react to it. This loop goes on and on.
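The loop just described can be written down directly: render observations from the world state, ask the driver for an action, advance the ego vehicle, and let the other actors react. All the components below are trivial stand-ins (1D positions, lambda policies) meant only to show the closed-loop structure.

```python
# Minimal sketch of closed-loop simulation, with 1D positions standing in
# for full world state and trivial policies standing in for real models.
def closed_loop(world, driver, actors, steps):
    log = []
    for _ in range(steps):
        obs = world["ego"]                   # stand-in for sensor simulation
        action = driver(obs)                 # the Driver decides its maneuver
        world["ego"] += action               # simulator moves the ego vehicle
        for name, policy in actors.items():  # other actors react to the ego
            world[name] += policy(world["ego"], world[name])
        log.append(dict(world))
    return log

# Ego moves 1 m/step; the lead car only advances while its gap is under 10 m.
drive = lambda obs: 1.0
keep_gap = lambda ego, me: 1.0 if me - ego < 10 else 0.0
trace = closed_loop({"ego": 0.0, "lead": 8.0}, drive, {"lead": keep_gap}, 3)
```

Contrast this with open-loop replay, where the other actors would follow their logged trajectories no matter what the ego vehicle did.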

Closed-loop simulation: Given the current state of the world, we first generate sensor observations, which the Waabi Driver uses to determine its current maneuver.  We update the world state with this information, and then query the other agents in the scene for their planned maneuvers, and update the world state again. This loop keeps going throughout the scenario.

This type of simulation allows the Waabi Driver to truly experience how the scenario would play out if it were in the real world and the Waabi Driver was driving the self-driving vehicle. This is key to truly enabling accurate evaluation of the software stack’s performance.

Waabi World generates diverse and realistic scenarios that test the autonomy system. We can test different capabilities on various map topologies, and our intelligent actors respond dynamically to the Waabi Driver’s behavior. We can also find challenging safety-critical scenarios automatically.

As the Waabi Driver becomes more accomplished, finding a skill deficit is like finding a needle in a haystack, and it isn’t feasible to evaluate the Driver in all possible scenario variations. There are simply too many.

Waabi World instead utilizes AI to pinpoint the Waabi Driver’s weaknesses and automatically creates adversarial scenarios that our Driver will have difficulty handling. One way to think about this is that Waabi World is deliberately playing against the Waabi Driver, identifying and exploiting its weaknesses while the Driver simultaneously learns its skills. It’s a battle of scenarios and driving skills—one AI system versus another.
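In its simplest form, adversarial scenario generation is a search over scenario parameters for the ones the driver handles worst. The sketch below uses random search over a single parameter (a cut-in gap) with a stand-in scoring function; the real system uses learned generators and a full evaluation of the autonomy stack, but the objective — minimize the driver's score — is the same.

```python
# Toy sketch of adversarial scenario search: find the scenario parameter
# value that minimizes the driver's score. driver_score is a stand-in.
import random

def driver_score(cut_in_gap):
    # Illustrative evaluation: tighter cut-ins are harder for the driver.
    return min(1.0, cut_in_gap / 30.0)

def find_adversarial(trials=200, seed=0):
    rng = random.Random(seed)
    worst_gap, worst_score = None, float("inf")
    for _ in range(trials):
        gap = rng.uniform(2.0, 60.0)   # candidate scenario parameter
        score = driver_score(gap)
        if score < worst_score:        # keep the scenario the driver fails worst
            worst_gap, worst_score = gap, score
    return worst_gap, worst_score

gap, score = find_adversarial()
```

The found scenarios then feed back into training, which is what makes this a "battle" between the scenario generator and the driver.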

It might seem counterintuitive, but we want to see the Waabi Driver fail. We don’t want to wait until we test in the real world to see the system failing. This is far too dangerous.

Waabi World teaches the Waabi Driver to learn from its mistakes and master the skills of driving without human intervention

Building a simulator that can recreate worlds, simulate sensor data, and generate infinite testing scenarios is all in service of one big, audacious objective: to teach the Waabi Driver to learn on its own to drive safely—in any vehicle, in any scenario, anywhere in the world.

Here we show the autonomy system learning to handle challenging lane-merge negotiation scenarios. Initially, the novice driver collides with the other actor. Through closed-loop training, the driver improves over time: the intermediate driver brakes and allows the other vehicle to pass, and after further learning in Waabi World, the advanced driver discovers the optimal maneuver—accelerating slightly and smoothly, avoiding the need to brake while not impeding the other actor.

To fully understand and appreciate this capability, we need to return to the analogy of the brain. The human brain has an astonishing capacity to learn. The intuition and instinct that kick in when we get behind the wheel are honed directly from our experiences driving out on the road and the skills learnt along the way. We experience something and immediately learn it. Our brain is changed—literally rewired—after each experience we have.

Waabi World enables this exact same thing to happen, but for a virtual brain: the Waabi Driver. Waabi World not only exposes the Waabi Driver to the vast diversity of experiences needed to sharpen its driving skills (including common driving scenarios and more elusive safety-critical edge cases), it also delivers feedback to the Waabi Driver about its performance after each decision. In contrast to traditional processes that require painstaking manual code adjustments, Waabi World’s feedback system enables the Waabi Driver to learn from its mistakes on its own in an immersive and reactive manner. The Waabi Driver is constantly, automatically learning from its actions to become a smarter driver over time.

This is all happening at scale, with clones of the Waabi Driver learning how to drive safely in different scenarios and in parallel with one another, all updating the same brain which they share. For example, the Waabi Driver could be learning how to drive down a quiet suburban street, on a 5-lane freeway, in the middle of the city during rush hour, and so on—all at the same time.
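The "clones sharing one brain" pattern is familiar from parallel reinforcement learning: many workers each run a different scenario and apply their updates to the same shared parameters. The sketch below uses threads and a single shared value; the scenarios and the update rule are toy stand-ins, not Waabi's training algorithm.

```python
# Toy sketch of parallel clones updating one shared "brain": each worker
# trains on a different scenario and writes its update to shared state.
# The update rule (0.1 * difficulty) is purely illustrative.
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

shared_brain = {"skill": 0.0}
lock = Lock()

def train_clone(scenario_difficulty):
    update = 0.1 * scenario_difficulty  # stand-in for a gradient step
    with lock:                          # every clone writes the same brain
        shared_brain["skill"] += update
    return update

# Hypothetical scenario mix: suburban street, freeway, rush-hour city driving.
scenarios = [1.0, 2.0, 3.0]
with ThreadPoolExecutor(max_workers=3) as pool:
    list(pool.map(train_clone, scenarios))
```

Because every clone contributes to the same parameters, experience gathered in one scenario immediately benefits driving in all the others.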

A revolution in self-driving

A simulator with these four core capabilities working in harmony (recreation of the real world, sensor simulation, stress-testing, and learning) is groundbreaking for self-driving technology. This all takes place within a single simulator, Waabi World, to solve some of the industry’s deepest, most challenging problems.

Self-driving is one of the most exciting and important opportunities in technology today. Once realized and scaled, it will change life as we know it—how we operate businesses, power industries, build cities, and move goods and people.

But if we want to see this realized in our lifetimes, we need to embrace a new approach.

High-fidelity closed-loop simulation powered by AI holds the key to truly solving self-driving at scale and enabling a world where self-driving technology is trusted, safe, and affordable.

Welcome to Waabi World.