  • Feature Story

Futuristic Approaches to Iterative Testing Grow with Autonomous Systems Innovation 

With road conditions always changing, autonomous software must be capable of handling even the rarest and most extreme situations to operate safely. Comprehensive testing is needed to assure that these systems work and to build consumer trust in autonomous technologies.

Graphic of autonomous vehicles running scenarios on AV software.

March 1, 2020

The advent of autonomous systems is bringing forth the most sophisticated software problem in the history of automotive technologies. Continuous integration and regression testing, daily software updates and vast parameter sweeps are needed to evaluate a system's artificial intelligence, including its deep learning algorithms and perception software.

But, why is advancing autonomous technology so tricky? 

Obviously, a fully autonomous vehicle would require highly sophisticated, extensively tested software to operate safely without a human driver. Road conditions are always changing, so software must be capable of handling even the rarest and most extreme situations, such as wildlife entering the road or severe weather.

"Tests for software are now being done on a massive scale," said Chad Partridge, CEO of Metamoto, a UL Ventures Portfolio company. Metamoto focuses on testing and simulating a variety of conditions for autonomous vehicle research. "Comprehensive testing is needed to give assurance that the systems work, even though you might have dramatically changed the scenario from one day to the next."  

These scenarios are called edge cases, and while they make up only a small fraction of miles traveled, handling them correctly is critical to ensuring the safety and reliability of autonomous vehicles.

However, not all miles are created equal.  

"Some companies talk about using the number of miles performed as a benchmark for safety," Partridge said, "but a lot of that doesn't matter. Most miles are boring or uninteresting. What matters is when something interesting or unique is happening; that's been our focus when creating scenarios." 

How does it work? 

Software testing begins with brainstorming possible scenarios that an autonomous vehicle might encounter, everything from maintaining lane position on the highway to reacting to a tire blowout to dealing with distracted pedestrians.
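A scenario catalog like the one described above can be sketched as a simple data structure. This is a purely illustrative sketch; the class, field names and scenario entries are invented for this example and are not Metamoto's actual API.

```python
from dataclasses import dataclass, field

# Illustrative scenario catalog; names and fields are hypothetical,
# not Metamoto's actual API.
@dataclass
class Scenario:
    name: str
    category: str              # e.g. "highway", "fault", "pedestrian"
    parameters: dict = field(default_factory=dict)

CATALOG = [
    Scenario("lane_keeping", "highway", {"speed_mph": 65, "curvature": 0.002}),
    Scenario("tire_blowout", "fault", {"speed_mph": 70, "affected_tire": "front_left"}),
    Scenario("distracted_pedestrian", "pedestrian", {"crossing_angle_deg": 30}),
]

def by_category(category):
    """Select every scenario in a given category for a test run."""
    return [s for s in CATALOG if s.category == category]
```

Grouping scenarios by category makes it easy to run, say, every fault-injection case against a new software build.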

"If you think about it, you can't solely rely on the brute-force testing typically performed in the traditional sense," said Christopher Park, principal, UL Ventures. "You have to complement it with a lot of virtual testing to make sure you cover the edge cases to help build safety into the system."

As Partridge puts it, "Every night you're running tests with new software updates across all those highly parametrized scenarios—changing road and hardware conditions, dynamic weather, injecting noise and manipulating sensors—we're exploring the performance boundaries of the software." 

These changes often determine whether tests pass or fail. Based on the results, improvements can be made so that autonomous vehicles are better equipped to handle ever-changing real-world scenarios. Those improvements can significantly increase safety and reliability, which matters for the people and companies using such technologies.
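The nightly sweep Partridge describes, running every combination of varied conditions against the latest build and collecting the failures, can be sketched as follows. The parameter lists and the pass/fail rule are toy stand-ins invented for illustration; a real harness would launch each scenario in a simulator and score the vehicle's behavior.

```python
import itertools

# Hedged sketch of a nightly parametrized scenario sweep. The toy
# grading rule below stands in for a real simulator run.
WEATHER = ["clear", "rain", "fog"]
ROAD = ["dry", "wet", "icy"]
SENSOR_NOISE = [0.0, 0.05, 0.10]  # injected sensor-noise levels

def run_simulation(weather, road, noise):
    """Toy pass/fail rule: this hypothetical build handles everything
    except heavy sensor noise on low-traction road surfaces."""
    return not (noise >= 0.10 and road in ("wet", "icy"))

def nightly_sweep():
    """Run every parameter combination and collect failures to triage."""
    results = {combo: run_simulation(*combo)
               for combo in itertools.product(WEATHER, ROAD, SENSOR_NOISE)}
    failures = [combo for combo, passed in results.items() if not passed]
    return results, failures

results, failures = nightly_sweep()
print(f"{len(results)} scenario variants run, {len(failures)} failures to triage")
# 27 scenario variants run, 6 failures to triage
```

Even this toy sweep shows why the approach scales: three parameters with three values each already yields 27 variants, and each new parameter multiplies the count.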

Why is this relevant? 

Autonomous cars are a recent innovation. As such, many consumers have yet to develop trust in the technology. What will it take to build confidence in self-driving vehicles?

Transparency in business has always been important, but it becomes even more critical when introducing a disruptive technology with life-or-death decisions relying on the functional safety of the software system.

For Metamoto, transparency means looking beyond the mandatory publishing of disengagement reports, a California Department of Motor Vehicles (DMV) requirement for autonomous vehicle testing on public roads in the state. The California DMV defines a disengagement as any time a test vehicle operating on public roads has switched from autonomous to manual mode for an immediate safety-related reason or due to a failure of the system.  
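Disengagement reports are often summarized as miles driven per disengagement. As a minimal sketch of that metric, with the figures invented purely for illustration:

```python
# Sketch of the miles-per-disengagement summary metric; the numbers
# below are invented for illustration, not from any actual DMV report.
def miles_per_disengagement(miles_driven, disengagements):
    """Return miles driven per disengagement, or None if there were none."""
    if disengagements == 0:
        return None
    return miles_driven / disengagements

print(miles_per_disengagement(120_000, 8))  # 15000.0
```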

"Results need to be publicized so that consumers know what's actually happening," Partridge said. "The correlation of how subsets of simulation results are validated against real-world tests gives faith that your simulations are providing something useful. If you're showing success day in and day out at scale, this is going to establish great public trust."  
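Partridge's point about validating subsets of simulation results against real-world tests amounts to an agreement check over the scenarios run in both settings. The sketch below is purely illustrative; the scenario names, outcomes and pass/fail framing are invented, not actual Metamoto data.

```python
# Illustrative sim-vs-real agreement check over a shared scenario
# subset; all data here is invented for the sketch.
sim_results  = {"tire_blowout": True, "fog_merge": False, "jaywalker": True, "icy_stop": False}
real_results = {"tire_blowout": True, "fog_merge": False, "jaywalker": True, "icy_stop": True}

# Compare only scenarios tested in both simulation and the real world.
shared = set(sim_results) & set(real_results)
agreement = sum(sim_results[s] == real_results[s] for s in shared) / len(shared)
print(f"sim/real agreement: {agreement:.0%} over {len(shared)} scenarios")
# sim/real agreement: 75% over 4 scenarios
```

High agreement on the shared subset is what gives confidence that results from simulation-only scenarios carry over to the road.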

What do consumers gain? 

Ideas for continuous testing at massive scale are very, very new, according to Partridge. Futuristic approaches, such as iterative testing, are growing within autonomous technology communities.  

"Continuous testing and validation programs will not only be part of future standards, but it's going to create auditable and reportable methodologies that show the safe transition of autonomous systems towards the future," he said. "UL Ventures is doing a great deal to help promote these futuristic approaches. The whole company is committed to participating in autonomous system standards going forward." 

At the end of the day, the consumer gains confidence that autonomous vehicles run reliable software that has been vetted and proven trustworthy. This matters both to companies involved in the production and distribution of autonomous systems and to end consumers, who want to know that they're receiving the safest and most reliable service available.

"Metamoto is one of the components in autonomous development that helps establish that reliability," Park said. "The company fits into one of the fundamental trends that I believe in—continuous development and deployment through testing and validation."