August 27, 2019
New technologies bring both great opportunities and safety challenges. Artificial intelligence (AI) is no exception. UL’s Data Science Director David Wroth said that if AI goes bad, it’s likely due to unintended consequences.
“Let’s say you program an AI for a robot to clean a room,” Wroth said. “But you don’t get very specific about what a clean room looks like and what the constraints are that you want to leave in place. Let’s imagine that a small child is in the room. What if the AI-based robot doesn’t recognize the child as a child and injures it? The robot injures the child because it isn’t constrained clearly enough between what is safe and unsafe.”
Are smart cars too smart?
He also spoke of a more real-world event involving the testing of self-piloted cars. In this case, the AI tried to do its job a little too well and prioritized the wrong values. Early versions of the self-driving car’s algorithm had trouble processing stationary items near the road and gradually became less sensitive to nearby objects, since the system was tuned to avoid braking or swerving more than necessary. Eventually, that desensitization bled over to stationary objects on the road itself.
“It’s because of this unintended consequence of ‘ignoring’ stationary objects near the road,” Wroth said. “It had a weak spot. It’s not completely blind, but the vehicle didn’t detect the stationary objects in the road and hit them at high speed, in one case a fire truck. The issue was compounded by the driver in the vehicle not maintaining attention on the road. The driver became complacent, even though the vehicle manufacturer was clear that the driver was still responsible for ensuring safe operation.”
In terms of their driving performance, autonomous vehicles are now safer than people in specific environments, such as on the highway. According to Wroth, self-piloted cars have a lower accident rate than human-driven cars, whose crashes are largely the result of human error. He added that media coverage of accidents caused by self-driving cars is currently sensationalized relative to how rarely they occur.
A bear of a problem
Another incident involved an AI-controlled machine handling product in a warehouse. The shelving robot accidentally dropped an item while it was in transit. Normally this wouldn’t be a huge problem, except the item was bear spray, an aerosolized capsaicin product meant to defend against agitated bears, and a number of people had to promptly seek medical attention.
Wroth said that in this incident the problem was caused not by the AI itself but by the robot’s clumsy handling. Even so, UL and the Partnership on Artificial Intelligence (PAI) are investigating incidents like this one to determine whether or not the AI was at fault.
For more information about shaping future artificial intelligence for the good of all humankind, visit PAI’s website at partnershiponai.org. To see the latest news on UL’s work with data science, follow us on Twitter at @UL_Datascience.