May 14, 2026
Key Points
- Evidence-based testing supports responsible AI innovation.
- UL 3115 is a framework offering clear criteria for issues such as privacy, fairness and security.
- UL Solutions views safety as being the foundation for innovation.
The rise of Artificial Intelligence (AI) has sparked concerns about transparency, fairness, trust and safety. Dr. Robert Slone, senior vice president and chief scientist and innovation officer at UL Solutions, is an authority on this rapidly evolving technology. Slone, who recently was named an affiliated faculty member at the University of Notre Dame’s Lucy Family Institute for Data & Society, sat down to answer a few key questions about AI.
What role does UL Solutions play in the safety of AI?
Dr. Robert Slone: UL Solutions provides independent, evidence-based evaluation and certification of AI-enabled products and systems, which means that we help organizations demonstrate that their technology meets defined safety and trustworthiness requirements. Our role is not to declare AI “safe” in a broad sense, but to assess how a specific product or model behaves in a specific context. This is an important distinction. We look at things like its transparency, privacy protections, fairness considerations, security controls and how the system or product is governed across its life cycle.
In its current form, this service is delivered entirely through document review; no physical testing takes place. Customers submit documentation that demonstrates how they have built their AI systems and AI-enabled products. And because AI systems learn and adapt over time, we emphasize ongoing monitoring and recertification. So, this is not a one-and-done process.
UL Solutions is well-known for its testing, inspection and certification services and for the reputation of the UL Standards it helps to develop. Can you tell us a little about UL 3115 and its role in AI safety?
UL 3115 is something we call an Outline of Investigation — an OOI for short. In this case, UL 3115 is our new OOI for Safety of AI-Based Products, and it provides a framework that outlines the certification requirements for AI-driven innovations. Eventually, something like UL 3115 may become a UL Standard, but in the meantime, one of the big advantages of an OOI is that it can be updated quickly, which is critically important in a fast-moving field like AI.
We’re proud to be a global leader in safety science. So, we use frameworks like UL 3115 to help organizations move beyond abstract promises and toward measurable, documented safeguards. Ultimately, our role is to provide credible trust signals that support responsible innovation and increase confidence in the AI-powered products entering the market. UL 3115 is helping us do that.
Could you explain “evidence-based evaluation” a bit more?
Sure. It means grounding claims in measurable controls, not broad assurances. At a basic level, we examine how a system responds under different conditions, review data practices, verify the transparency of a model’s behavior and evaluate how humans can oversee the system. We also look closely at life cycle management, like how the system is updated, monitored and governed over time.
I want to point out that clarity about limits is equally important. Achieving certification means that a product has met defined requirements at a particular point in time, for a specific version and use case. It does not imply the technology is risk-free or permanently approved. Surveillance and recertification are key because evaluation needs to keep pace with change. There’s no better example of that than the AI field right now, where the technology is evolving at high speed.
How do you view the relationship between AI innovation and the need for safety?
As history has proven, safety is the foundation for innovation. In the past, technological innovations like electricity, aviation and biotech only scaled after credible oversight was in place to create public confidence. We’re now at a similar moment with AI.
By providing measurable, transparent evaluation, our goal is to enable organizations to design and deliver AI systems with intention. That clarity enables innovation to move forward responsibly. When companies understand their risks, document their controls and build governance structures that evolve with the technology, they’re positioned to deploy AI that is both impactful and trustworthy.
Information about UL Solutions’ services to evaluate and certify AI-enabled products is available at the UL 3115 service page at UL.com.