Artificial intelligence (AI) can do many things, but it is not a magic black box that takes in data and spits out solutions. It's built by people, and sometimes it carries the problems people have put into it or reflects underlying issues in society at large.
According to UL's Director of Data Science, David Wroth, there are currently two main sources of ethical problems in AI: either the modeling approach or the data the AI uses can be faulty, resulting in AI that doesn't accurately represent the real world.
Take, for example, a chatbot placed on the internet to learn from human conversation. When fed data from unmoderated, polarized internet channels, the chatbot's outputs may quickly become less than friendly, or downright hostile. While this isn't the desired outcome, it's a concern that developers and computer scientists have to deal with.
To that end, developers working on AI have to consider the possible outcomes of their work and keep their creations functioning in an ethical manner.
Another well-known issue with AI is facial recognition software that fails to recognize people with darker skin tones. As a result, AI has unfairly represented an entire population that has historically been disadvantaged.
“One of the biggest challenges we have looking at ethical systems is understanding what our true core ethics are that we all agree on,” Wroth said. “Culture and ethics are confusingly intertwined, and there are a lot of cultural influences to sort out.”
Wroth suggested this may not be a problem that we want to leave entirely to computer scientists. People with various professional backgrounds, including philosophy, social science, anthropology, and more, are necessary to ensure that we build ethics into AI.
The creation reflects the creator
For UL’s Senior Vice President and Chief Digital Officer Christian Anschuetz, the rapid scaling of AI applications and the resulting transparency present us with a tremendous opportunity to share information in a way that benefits the most people.
“The impact of AI is hard to overestimate. It will touch every sector of society, and it will present us with countless opportunities to make our world better in big and little ways,” Anschuetz said. “It will also force us to look long and hard at ourselves and whether we want our AI systems to reflect the attitudes and actions of a handful of its creators.”
“Intelligence, whether human or artificial, is honed through learning,” Anschuetz said. “We know that kids pick up on subtle preferences and biases that are present in their immediate environments – whether those preferences are intentionally taught or not. AI is no different in this regard.”
Just as parents, siblings, teachers, schoolmates and media all play a role in shaping the ethical development of children, the information provided while AI is learning is critically important in shaping its intelligence. During the development of an AI system, care must be taken in shaping the learning environment, because the information provided can significantly affect how the system behaves in the future.
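A minimal sketch can make this concrete. The toy model below is not how any real AI system works; it is a crude word-counting classifier over made-up training data (the names "group_a" and "group_b" and every sentence are invented placeholders). Because "group_b" happens to appear only in negative training examples, the model absorbs that association and judges an otherwise identical neutral sentence differently depending on which group it mentions:

```python
from collections import Counter

# Hypothetical, deliberately skewed training data: "group_b" appears
# only in negative examples, so the model inherits that association.
training = [
    ("group_a had a great day", "pos"),
    ("group_a is wonderful", "pos"),
    ("group_b caused trouble", "neg"),
    ("group_b is terrible", "neg"),
]

# Count how often each word appears under each label.
counts = {"pos": Counter(), "neg": Counter()}
for text, label in training:
    counts[label].update(text.split())

def classify(text):
    """Score a sentence by summing per-word label counts (a crude model)."""
    words = text.split()
    pos = sum(counts["pos"][w] for w in words)
    neg = sum(counts["neg"][w] for w in words)
    return "pos" if pos >= neg else "neg"

# Two sentences that differ only in the group name get opposite labels,
# purely because of the bias baked into the training data.
print(classify("group_a went to the park"))  # → pos
print(classify("group_b went to the park"))  # → neg
```

The point of the sketch is that the model itself contains no prejudice; the skew lives entirely in what it was shown while learning, which is exactly why the learning environment matters.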
“A key way we can guard against the biases of the few directing the future of the many is by incorporating a diverse and inclusive workforce into the creation of these algorithms,” Anschuetz suggested.
A trolley of a problem
It’s important to note that ethical reasoning and the downstream consequences of a system’s actions don’t occur to an artificial intelligence on its own; after all, these are still machines without true thought, even though they can make decisions.
Consider the classic exercise posed by the English philosopher Philippa Foot, the trolley problem. The person controlling the trolley must choose between two paths, and no matter which path they take, human life is lost – choose carefully!
Now instead of a conductor, let’s put the AI of a driverless vehicle at the steering wheel. What kind of decision would a driverless car make if it had to choose between saving the driver or saving a pedestrian?
“While this question is deeply compelling on an emotional level and easy for us to fixate on, this is precisely the kind of situation that AI is being designed to avoid,” Anschuetz said.
Unlike a speeding trolley heading down the track and a human operator who may not realize what is happening until the last minute, a driverless vehicle is capable of incorporating enough information along the way to never be forced into such a terrible choice.
“‘This or that’ choices are a way to simplify decision making for our human brains. It’s why we use this mechanism with toddlers,” Anschuetz said. “But AI can take into account so much more information than an individual human can and process that information with almost unimaginable speed. There are a series of mechanisms in place to make sure it produces better and faster decisions and ultimately avoids a scenario where the only choice is between a driver’s or a pedestrian’s safety.”
What about killer robots?
If you’ve seen “The Terminator” or “The Matrix” movies and are worried about AI controlling humans in the near future, don’t be.
“Whether or not artificial intelligence will ever be able to achieve general and independent awareness is still a big TBD in the scientific community,” Anschuetz said. “For now, the question of whether robots can make ethical choices is limited to our own self-awareness as creators of this technology and whether we teach the machines to make those choices ethically. I think with thoughtful effort and determination, we most certainly can.”
Discover how we help advance societal well-being for a safer, more secure and sustainable tomorrow.