From the moment autonomous vehicles (AVs) took shape, even if only in theory, they promised a revolution in transportation. Today, artificial intelligence (AI) has pushed those promises to the forefront, and while the list of benefits shifts as technologies and priorities evolve, three central ideas have remained foundational:
- Roads will become safer as accidents are reduced.
- Driving will become more equitable as age and mobility issues no longer deter vehicle use or ownership.
- The environment will benefit from reduced traffic congestion, which will, in turn, reduce emissions.1
These three potential benefits alone would change the way the world views transportation, but one significant challenge remains: trust.
How can AI be trusted to operate safely in an unpredictable world? To answer that question, it is essential to understand how and where AI already exists in the industry, both on and off the road, and what work is being done to help support safety.
The growing influence of AI in vehicle development
The automotive industry is shifting from developing traditional motor vehicles to developing software-defined vehicles (SDVs). Research from the IBM Institute for Business Value (IBV) found that nearly three quarters of the vehicles coming off the assembly line in the next decade will be software-defined and AI-powered.2 Today, AI is being integrated into the entire vehicle development life cycle, from design and testing to real-time functions like perception, decision-making and control.
At the vehicle level, however, the use of AI is constrained by infrastructure. Reliable, high-bandwidth connectivity between the vehicle and the cloud is still not guaranteed. Edge AI — smaller, localized AI models that process critical workloads at the edge (in the vehicle) — helps overcome this issue by handling less computationally intensive tasks, such as simple voice commands, freeing cloud-based AI for more data-heavy workloads, such as large-scale perception, prediction or fleet learning.3
Xpeng, a leading Chinese electric vehicle (EV) maker, is trying to take this one step further by developing an in-house AI chip that enables more on-board compute, reducing latency and dependence on the cloud and on external suppliers.4
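To make the edge/cloud split described above more concrete, the sketch below shows one simple way a vehicle might route workloads between an on-board model and the cloud. It is a minimal illustration only: the workload attributes, thresholds and function names are assumptions made for this example, not details of any specific vehicle platform or cited source.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    payload_mb: float         # data volume the task must process
    latency_budget_ms: float  # how quickly a result is needed

# Illustrative thresholds only; a real platform would tune these per function.
EDGE_PAYLOAD_LIMIT_MB = 5.0
LATENCY_CRITICAL_MS = 100.0

def route(workload: Workload, cloud_link_available: bool) -> str:
    """Decide whether a task runs on the in-vehicle (edge) model or in the cloud."""
    # Everything stays on board when the cloud link is down.
    if not cloud_link_available:
        return "edge"
    # Latency-critical or lightweight tasks stay on board.
    if workload.latency_budget_ms <= LATENCY_CRITICAL_MS:
        return "edge"
    if workload.payload_mb <= EDGE_PAYLOAD_LIMIT_MB:
        return "edge"
    # Data-heavy, latency-tolerant workloads (e.g., fleet learning) go to the cloud.
    return "cloud"

if __name__ == "__main__":
    tasks = [
        Workload("simple voice command", payload_mb=0.2, latency_budget_ms=300.0),
        Workload("obstacle detection", payload_mb=2.0, latency_budget_ms=50.0),
        Workload("fleet learning upload", payload_mb=500.0, latency_budget_ms=60000.0),
    ]
    for task in tasks:
        print(f"{task.name} -> {route(task, cloud_link_available=True)}")
```

In practice a routing policy would also weigh safety criticality, privacy and regulatory constraints, but the basic latency-versus-bandwidth trade-off is the same.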
To support growth, development and innovation in AI for automotive use, along with safer implementation, governments are also increasing their involvement. Taiwan, for example, is explicitly pushing to integrate AI into automotive electronics and systems. The government recently launched an AI Automotive Industry Alliance to connect research institutions, automotive parts suppliers, AI and software firms, and original equipment manufacturers (OEMs).5
The Korean government, in recognition of AI’s prominence, designated low-power AI chip design for autonomous vehicles as a national strategic technology, a move intended to increase government support.6
Going beyond traditional standards to establish AI safety
One fundamental safety-related challenge with AI is that it is non-deterministic: unlike traditional, rule-based software, it may not produce the same output for the same input. This complicates validation, and it can also complicate trust. According to the 2025 State of Automotive Software Development Report, nearly half (49%) of the automotive development professionals surveyed list “safe decision-making for AI algorithms in autonomous/semiautonomous vehicles” as their leading concern in AI vehicle development.7
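To illustrate why non-determinism complicates validation, the sketch below runs the same input through a toy perception step that includes a stochastic element (simulated sensor noise standing in for effects such as sampling or non-deterministic GPU kernels) and checks agreement across many runs rather than exact equality. The model, thresholds and pass criterion are illustrative assumptions, not drawn from any cited standard.

```python
import random

def perceive(distance_m: float, noise_sigma: float = 0.5) -> str:
    """Toy perception step: classify an object as 'near' or 'far'.

    The Gaussian noise stands in for the stochastic effects that make
    real AI pipelines non-deterministic.
    """
    measured = distance_m + random.gauss(0.0, noise_sigma)
    return "near" if measured < 10.0 else "far"

def validate(distance_m: float, expected: str,
             runs: int = 1000, min_agreement: float = 0.99) -> bool:
    """Statistical acceptance test: require the expected output in at least
    `min_agreement` of repeated runs, rather than demanding that any single
    run be exactly reproducible."""
    hits = sum(perceive(distance_m) == expected for _ in range(runs))
    return hits / runs >= min_agreement

if __name__ == "__main__":
    # An object near the 10 m decision boundary may flip between 'near' and
    # 'far' from run to run, so a pass/fail judgment on one run is not
    # meaningful; a clearly 'near' object at 5 m passes consistently.
    print("boundary case (10.0 m):", validate(10.0, "near"))
    print("clear case (5.0 m):", validate(5.0, "near"))
```

Validation of real AI functions follows the same principle at much larger scale: evidence is gathered statistically, across many scenarios and repetitions, rather than through single deterministic test runs.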
Traditional vehicle safety standards were developed for deterministic, rule-based systems and remain essential within the industry. For example, Functional Safety (ISO 26262) is a foundational vehicle safety standard. It focuses on preventing hazards caused by hardware or software failures, but because it assumes system behavior is deterministic, predictable and testable through traditional verification methods, it is not designed to handle non-failure-based risks that arise in AI systems.
These systems, especially neural networks, exhibit non-determinism and probabilistic decision-making, where errors can occur even when all components are functionally safe. These “misbehaviors without fault,” in which an AI model mis-predicts even though it is functioning exactly as designed and without any internal hardware or software fault, fall outside the scope of ISO 26262.8
Therefore, Safety of the Intended Functionality, or SOTIF (ISO 21448), is a critical standard for AI: it addresses hazards that arise not from failures but from system limitations in unknown and unexpected scenarios, known as “edge cases.”
Road vehicles — safety and artificial intelligence (ISO/PAS 8800) also addresses the safety of AI-driven systems in road vehicles. It provides a framework for enhancing the safety of AI behavior. According to the International Organization for Standardization:
“This document addresses the risk of undesired safety-related behavior at the vehicle level due to output insufficiencies, systematic errors and random hardware errors of AI elements within the vehicle. This includes interactions with AI elements that are not part of the vehicle itself but that can have a direct or indirect impact on vehicle safety.”
Although AI integration is increasingly necessary to meet consumer demands and expectations, delays in developing innovative new systems can push back launches, and launching a flawed product can lead to recalls, damage to brand reputation and financial losses. Automotive development professionals are already aware of these risks, with 63% concerned about delivering innovative software on time and avoiding recalls and delays.7 A further 11% rate this challenge as extremely concerning, underscoring the growing pressure OEMs and suppliers face.
To help alleviate these concerns and support the growth of the technology, UL Solutions offers consulting and training services for the automotive industry. Our team can help at various stages of the development process, including providing safety analyses, such as Systems Theoretic Process Analysis (STPA), and supporting the development of holistic safety cases by leveraging UL 4600, the Standard for the Evaluation of Autonomous Products, ISO 21448, ISO/PAS 8800 and other emerging vehicle safety standards and specifications.
The security layer: protecting the connected car
A compromised system cannot be a safe system. Autonomous vehicles are hyper-connected, introducing a new range of cyber threats and making security a prerequisite for safety. Nearly 40% of development professionals rate avoiding vulnerabilities and cyberattacks introduced alongside advanced AI technologies as very concerning, and another 26% find it extremely concerning.7 The growing list of potential risks makes it easy to understand why.
For example, light detection and ranging (LiDAR) systems can be manipulated into identifying fake objects, leading to unnecessary braking. Similarly, a laser-and-lens apparatus has been shown to confuse LiDAR into concealing existing obstacles, risking a collision.9 Another vulnerability in AVs is data manipulation, which can be as simple as altering GPS data to provide incorrect directions or as malicious as tampering with the firmware on a vehicle’s electronic control unit (ECU) to introduce potentially dangerous behavior.10
The ISO/SAE 21434 standard provides a framework for managing these types of cybersecurity risks throughout a vehicle's life cycle. Demonstrating compliance with this standard helps build confidence internally while also conveying that confidence to customers and consumers.
Solving the AI trust equation
Ultimately, the future of autonomous mobility depends on solving the trust equation: making trustworthy AI the foundation of safer, more secure mobility. As the use of this technology grows, however, so do the challenges. The industry must accelerate its shift from a mechanically driven approach to a software focus, and manufacturers and suppliers who lag in this transition risk falling behind in the market.
One approach to becoming software-ready is to address skill gaps and evolve processes to accommodate new needs and challenges. We bring an extensive history of supporting the automotive industry with both safety and performance services, and UL Solutions Software Intensive Systems (SIS) can help support readiness as software becomes the dominant feature in future vehicles. We can work with your teams to help you build new skills and develop holistic safety cases and work products based on established and emerging safety standards, so you can approach the use of AI in vehicles and vehicle development with greater confidence.
Realizing the many potential benefits of vehicle autonomy requires putting software at the center of vehicle development. For these efforts to gain traction, however, safety must remain paramount at every stage, from development to production and use, to help sustain trust.
References
1. Salvini, P., Kunze, L., Jirotka, M. "On self-driving cars and its (broken?) promises. A case study analysis of the German Act on Autonomous Driving." Technology in Society. https://www.sciencedirect.com/science/article/pii/S0160791X24001763.
2. "Automotive in the AI era." IBM Institute for Business Value. https://www.ibm.com/downloads/documents/us-en/12bb2f911fbbaca3.
3. "Automotive in the AI era." IBM Institute for Business Value. https://www.ibm.com/downloads/documents/us-en/12bb2f911fbbaca3.
4. Ren, D. "Chinese EV maker Xpeng to use own AI chip to power its self-driving cars this quarter." South China Morning Post. https://www.scmp.com/business/china-business/article/3306493/chinese-ev-maker-xpeng-use-own-ai-chip-power-its-self-driving-cars-quarter.
5. Tu, N., Chen, C. "Taiwan forms AI automotive alliance to build trillion-dollar smart vehicle industry." DIGITIMES Inc. https://www.digitimes.com/news/a20251001PD241/taiwan-automotive-alliance-vehicle-2025.html.
6. Baker, B. "Quantum, AI Self-Driving Technology Deemed Strategic in Korea." IoT World Today. https://www.iotworldtoday.com/automotive-connected-vehicles/quantum-ai-self-driving-technology-deemed-strategic-in-korea.
7. Britton, J. "2025 State of Automotive Software Development Report." https://eco-cdn.iqpc.com/eco/files/channel_content/posts/report-sca-automotive-report-2025DXhKT4lZrBg90v06z5WIq4gRaiLhb1Xahm8USEDC.pdf.
8. Serna, J., Diemert, S., Millet, L., Debouk, R., et al. "Bridging the Gap between ISO 26262 and Machine Learning: A Survey of Techniques for Developing Confidence in Machine Learning Systems." SAE Int. J. Adv. & Curr. Prac. in Mobility 2(3):1538-1550, 2020. https://doi.org/10.4271/2020-01-0738.
9. Bell, B. "Autonomous vehicle technology vulnerable to road object spoofing and vanishing attacks." University of California, Irvine. https://www.universityofcalifornia.edu/news/autonomous-vehicle-technology-vulnerable-road-object-spoofing-and-vanishing-attacks.
10. Islam, T., Sheakh, A., Jui, A.N., Sharif, O., Hasan, Z. "A review of cyber attacks on sensors and perception systems in autonomous vehicle." Journal of Economy and Technology, Volume 1. https://www.sciencedirect.com/science/article/pii/S2949948824000027.