With more self-driving car testing taking place across the world than ever before, incidents like the Uber crash in Arizona are likely to keep happening until the technology is deemed road-safe. That's a tricky proposition for most companies, which can't build private testing areas like Google's closed-off community.
Enter Nvidia. At its GPU Technology Conference (GTC) in San Jose today, the AI and GPU company revealed it has built a cloud-based autonomous vehicle testing facility using state-of-the-art photorealistic simulation. Instead of building a closed-off test track, developers can test all of their autonomous vehicle's systems entirely within a simulated version of the real world.
Labelled Nvidia Drive Constellation, the system works across two servers used in tandem to create a virtual representation of the test car. One server simulates all of a self-driving vehicle's sensors (lidar, radar, cameras and so on) via Nvidia Drive Sim, while the other contains an Nvidia Drive Pegasus AI computer of the kind found in real self-driving vehicles. The Pegasus component decodes all the information coming in from the virtual car's sensors and processes it as if it were real data coming from a car driving on the road.
“Deploying production self-driving cars requires a solution for testing and validating on billions of driving miles to achieve the safety and reliability needed for customers,” said Rob Csongor, VP and GM of Automotive at Nvidia. “With Drive Constellation, we’ve accomplished that by combining our expertise in visual computing and data centres. With virtual simulation, we can increase the robustness of our algorithms by testing on billions of miles of custom scenarios and rare corner cases, all in a fraction of the time and cost it would take to do so on physical roads.”
The whole process has a tinge of Black Mirror to it: Drive Constellation essentially tricks an autonomous car's "brain" into believing it's in the real world rather than a simulation by feeding its sensors simulated inputs. Think of it in the same way The Matrix blew our minds in 1999 by insinuating our entire world was false. This is what Nvidia is doing to autonomous vehicles to help train them without taking them on the road.
In practice, Drive Constellation works by building a simulation via Nvidia's GPUs to produce a stream of information for each of the car's sensors. This data is then fed into the Drive Pegasus AI server for processing. Pegasus then uses that processed information to feed the car's behaviour back into the simulator. According to Nvidia, this loop repeats 30 times a second, allowing Pegasus to control the simulated vehicle in real time.
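Nvidia hasn't published the interface between Drive Sim and Pegasus, but the closed loop described above can be sketched in Python. Everything here is an illustrative assumption, not Nvidia's API: the sensor frame is reduced to two numbers, and the toy `drive_stack` function stands in for the Pegasus AI computer.

```python
from dataclasses import dataclass

TICK_HZ = 30          # Nvidia says the sim/Pegasus loop runs 30 times a second
DT = 1.0 / TICK_HZ

@dataclass
class SensorFrame:
    # Stand-in for the camera/lidar/radar streams Drive Sim would render.
    distance_to_obstacle_m: float
    speed_mps: float

@dataclass
class Command:
    # Simplified output of the "Pegasus" stage: a single acceleration value.
    accel_mps2: float

def drive_stack(frame: SensorFrame) -> Command:
    # Hypothetical stand-in for the real driving stack: brake hard when an
    # obstacle is near, otherwise accelerate gently toward 15 m/s.
    if frame.distance_to_obstacle_m < 20.0:
        return Command(accel_mps2=-6.0)
    return Command(accel_mps2=1.0 if frame.speed_mps < 15.0 else 0.0)

def run_simulation(ticks: int, obstacle_at_m: float = 100.0) -> float:
    """Closed loop: the simulator renders a frame, the stack reacts,
    and the resulting command is integrated back into the simulated world."""
    position, speed = 0.0, 10.0
    for _ in range(ticks):
        frame = SensorFrame(distance_to_obstacle_m=obstacle_at_m - position,
                            speed_mps=speed)
        cmd = drive_stack(frame)                       # "Pegasus" processes sensors
        speed = max(0.0, speed + cmd.accel_mps2 * DT)  # behaviour fed back...
        position += speed * DT                         # ...into the simulation
    return position
```

Running ten simulated seconds (300 ticks) with an obstacle 100 m ahead, the toy car brakes to a stop short of it; the point is only to show the per-tick render/process/feed-back cycle, which in Drive Constellation happens with full photorealistic rendering on one server and real Pegasus hardware on the other.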
The simulation all runs within a photorealistic environment that not only lets testers view what the car can see, but allows for dynamic testing of environments and situations. You can bring in different weather conditions, such as rainstorms and blizzards; alter the sun's position to cast glare across the road or directly into the car's sensors; and even switch up road terrain and surfaces. Developers can also script dangerous situations to see how the autonomous car would react, without putting animals or people in harm's way.
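Nvidia hasn't published a public scenario-scripting format, but crossing those variables — weather, sun position, road surface, scripted hazard — is how a handful of parameters can multiply into the "billions of miles of custom scenarios and rare corner cases" Csongor describes. A hypothetical sketch of that combinatorial matrix:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Scenario:
    # All fields and values are illustrative, not Nvidia's scenario schema.
    weather: str               # e.g. "clear", "rainstorm", "blizzard"
    sun_elevation_deg: float   # low angles cast glare into the sensors
    surface: str               # e.g. "asphalt", "ice"
    event: str                 # scripted hazard, e.g. "pedestrian_crossing"

def scenario_matrix(weathers, sun_angles, surfaces, events):
    # Cross every variable with every other to mass-produce corner cases,
    # each of which becomes one simulated test run.
    return [Scenario(w, a, s, e)
            for w, a, s, e in product(weathers, sun_angles, surfaces, events)]

cases = scenario_matrix(
    ["clear", "rainstorm", "blizzard"],
    [5.0, 45.0],               # 5 degrees puts the sun near the horizon
    ["asphalt", "ice"],
    ["none", "pedestrian_crossing"],
)
```

Even this tiny example yields 24 distinct runs, including a blizzard on ice with low sun and a pedestrian stepping out — exactly the kind of rare combination that would be dangerous and slow to stage on a physical track.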
Here’s a demo from CES 2018 showing off the Drive Sim technology during Nvidia’s keynote.
At CES 2018, Nvidia also revealed that Drive Sim could let testers drive simulated cars in real-time, meaning a virtual autonomous car could learn how to react around real-life drivers without even needing to put a single tyre on a real road.
“Autonomous vehicles need to be developed with a system that covers training to testing to driving,” said Luca De Ambroggi, research analyst and director at IHS Markit, in a press release. “Nvidia’s end-to-end platform is the right approach. Drive Constellation for virtual reality testing and validating will bring us a step closer to the production of self-driving cars.”