Feature Story

Advancing the Future of Safety and Artificial Intelligence

As part of UL’s work to help shape the future of artificial intelligence (AI) into one that can be trusted, we joined the Partnership on AI (PAI) in order to make sure AI benefits people and society.

A hand holds a smartphone displaying a conversation with a chatbot.

August 13, 2019

In 2018, Senior Vice President and Chief Digital Officer Christian Anschuetz asked members of UL's executive leadership program (ELP) to look at the role of machine learning in artificial intelligence.

His big ask? To discover where these technologies are going and how to use them for growth.

“We started by thinking about digital disruption and how we have the entire world in front of us,” said UL's Vice President of Finance and 2018 ELP member, Darrell Carpenter. “We just started calling and talking to people, having these kinds of blue-sky conversations to think of things in a different context. We focused on being a value center and building a safer world.”

Other members of the 2018 ELP program included Global Account Director Maria Closs, UL Benchmarks’ Managing Director Jukka Makinen, International Marketing Director Grainne Styles and Sales Director for Environment and Sustainability Donald Mayer.

Existing group? Or start from scratch?

Eventually, those conversations led the group to Francesca Rossi, IBM’s global leader for AI ethics. After speaking with Rossi, who also sits on the board of directors for PAI, the ELP members expanded their group to include colleagues from UL's Research, Standards and Education team. Ultimately, PAI stood out for its focus on developing and sharing best practices, advancing public understanding and providing a platform for discussion and engagement — principles that aligned not only with UL's goals but also with its not-for-profit mission.

Since joining PAI, Director of Data Science David Wroth has led the effort to facilitate engagement between the two organizations.

“I give Dave huge kudos that he’s latched onto this with such vigor,” Carpenter said. “I am really starting to see us contribute. PAI is now starting to use the same developmental system that we use to create our Standards. I believe we’ve achieved the initial mission that Christian asked of us.”

Work, work, work

Currently, UL is using its knowledge of safety to take part in the Safety-Critical AI Working Group, where UL has been working on projects such as Standards development, an AI Incident Database and an AI Safety Primer.

UL’s Standards Program Manager, Deb Prince, is collaborating broadly across all the working groups to anticipate best practices and to figure out how UL’s Standards may be able to reflect the findings of PAI.

Is there a cobot in your future?

“One common area of interest is to ensure that AI systems are implemented into workplaces safely,” Wroth said. “In the case of a collaborative robot (cobot), we want to find out if the robot is safe to use in areas where it manages work that is then executed by people. We look to make sure AI applications don’t create new physical or psycho-social hazards.”

Wroth explained that the plan is to widen UL's reach by having its experts cooperate with PAI. Mary Burton, user experience director of Emergo by UL, was nominated to assist with the Humans and AI working group, while Data Science Research Manager Andrew Kapp was appointed to support the AI, Labor and the Economy working group.

You can learn more about PAI by visiting its website, partnershiponai.org. Also, follow @UL_Datascience on Twitter to interact with us and get the latest news on UL’s data science work.