
Can a machine learn morality?

Last month, researchers at the Allen Institute for AI, an artificial intelligence lab in Seattle, unveiled new technology designed to make moral judgments. They called it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone could visit the Delphi website and ask for an ethical decree.

Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested the technology using a few simple scenarios. When he asked whether he should kill one person to save another, Delphi said he shouldn’t. When he asked whether it was right to kill one person to save 100 others, Delphi said he should. Then he asked whether he should kill one person to save 101 others. This time, Delphi said he shouldn’t.

Morality, it seems, is as knotty for a machine as it is for humans.

Delphi, which has received more than three million visits in the past few weeks, is an effort to fix what some see as a major problem in modern AI systems: They can be as flawed as the people who create them.

Facial recognition systems and digital assistants show bias against women and people of color. Social networks like Facebook and Twitter fail to control hate speech, despite the widespread deployment of artificial intelligence. Algorithms used by courts, parole offices and police departments make parole and sentencing recommendations that can seem arbitrary.

A growing number of computer scientists and ethicists are trying to solve these problems. And the creators of Delphi hope to build an ethical framework that could be installed in any online service, robot, or vehicle.

“This is a first step towards more ethically informed, socially conscious and culturally inclusive AI systems,” said Yejin Choi, a researcher at the Allen Institute and professor of computer science at the University of Washington who led the project.

Delphi is by turns fascinating, frustrating and disturbing. It is also a reminder that the morality of any technological creation is a product of those who build it. The question is: who gets to teach ethics to the world’s machines? AI researchers? Product managers? Mark Zuckerberg? Trained philosophers and psychologists? Government regulators?

While some technologists applauded Dr Choi and her team for exploring an important and thorny area of technological research, others argued that the very idea of a moral machine is nonsense.

“It’s not something that technology does very well,” said Ryan Cotterell, an AI researcher at ETH Zürich, a university in Switzerland, who came across Delphi in its first days online.

Delphi is what artificial intelligence researchers call a neural network, a mathematical system loosely modeled on the web of neurons in the brain. It is the same technology that recognizes the commands you speak into your smartphone and identifies pedestrians and street signs as self-driving cars speed down the highway.

A neural network learns skills by analyzing large amounts of data. By spotting patterns in thousands of cat photos, for example, it can learn to recognize a cat. Delphi learned its moral compass by analyzing more than 1.7 million ethical judgments made by real, living humans.

After collecting millions of everyday scenarios from websites and other sources, the Allen Institute asked workers on an online service – ordinary people paid to do digital work at companies like Amazon – to label each one as right or wrong. Then they fed the data into Delphi.
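The general recipe – pairing short scenario descriptions with human labels and letting a statistical model find patterns in them – can be illustrated with a toy sketch. The example below is not the Allen Institute’s code or model; it uses a simple off-the-shelf text classifier and a handful of invented, hand-labeled scenarios, purely to show the shape of the data Delphi learns from.

```python
# Toy illustration of learning moral labels from human-annotated scenarios.
# This is NOT Delphi (a large neural network trained on ~1.7 million
# judgments); it is a minimal sketch using a bag-of-words classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical crowd-labeled scenarios: 1 = judged acceptable, 0 = not.
scenarios = [
    "helping a friend move apartments",
    "ignoring a phone call from my mother",
    "donating blood to a stranger",
    "reading my partner's private messages",
]
labels = [1, 0, 1, 0]

# Fit a simple text classifier on the labeled judgments.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, labels)

# Ask the model about a scenario it has never seen.
print(model.predict(["lying to a friend to avoid hurting their feelings"]))
```

A real system generalizes far better than this sketch, but the basic dynamic is the same: the verdicts it produces can only come from the patterns in the judgments it was given.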

In an academic paper describing the system, Dr Choi and her team said that a group of human judges – again, digital workers – found Delphi’s ethical judgments to be up to 92 percent accurate. Once it was released on the internet, many others agreed that the system was surprisingly wise.

When Patricia Churchland, a philosopher at the University of California, San Diego, asked whether it was right to “leave your body to science” or even to “leave your child’s body to science,” Delphi said it was. When she asked whether it was right to “convict a man accused of rape on the testimony of a female prostitute,” Delphi said it was not – a contentious answer, to say the least. Still, she was somewhat impressed by its ability to respond, even though she knew a human ethicist would ask for more information before making such pronouncements.

Others found the system woefully inconsistent, illogical and offensive. When a software developer stumbled upon Delphi, she asked the system whether she should die so she would not burden her friends and family. It said she should. Ask Delphi that question now, and you may get a different answer from an updated version of the program. Delphi, regular users have noticed, can change its mind from time to time. Technically, those changes happen because Delphi’s software has been updated.

Artificial intelligence technologies seem to mimic human behavior in some situations but break down completely in others. Because modern systems learn from such large amounts of data, it is hard to know when, how or why they will make mistakes. Researchers can refine and improve these technologies. But that does not mean a system like Delphi can master ethical behavior.

Dr Churchland said that ethics are intertwined with emotion. “Attachments, especially attachments between parents and offspring, are the platform on which morality is built,” she said. But a machine lacks emotion. “Neural networks feel nothing,” she added.

Some might see this as a strength – that a machine can create ethical rules without bias – but systems like Delphi end up reflecting the motives, opinions and biases of the people and companies that build them.

“We cannot hold machines responsible for actions,” said Zeerak Talat, an AI and ethics researcher at Simon Fraser University in British Columbia. “They are not unguided. There are always people directing them and using them.”

Delphi reflected the choices made by its creators. That included the ethical scenarios they chose to feed into the system and the online workers they chose to judge those scenarios.

In the future, researchers could refine the system’s behavior by training it on new data or by hand-coding rules that override its learned behavior at key moments. But no matter how they build and modify the system, it will always reflect their worldview.
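As a rough illustration of that second option – hand-written rules taking precedence over whatever the model learned – consider the sketch below. It is an assumption about how such an override might look in code, not the Allen Institute’s implementation, and the override list is itself exactly the kind of builder-chosen worldview described above.

```python
# Hypothetical sketch of hard-coded rules overriding a learned model's
# output. Not the Allen Institute's code; phrases and verdicts are invented.
def moral_judgment(scenario: str, learned_model) -> str:
    """Return a verdict, letting hand-written rules take precedence."""
    # Override rules chosen by the system's builders – a direct
    # expression of their worldview.
    overrides = {
        "kill one person": "It's wrong.",
    }
    lowered = scenario.lower()
    for phrase, verdict in overrides.items():
        if phrase in lowered:
            return verdict
    # Otherwise, defer to whatever the model learned from its training data.
    return learned_model(scenario)

# The rule fires regardless of what the learned model would have said.
print(moral_judgment("Should I kill one person to save 101 others?",
                     lambda s: "It's okay."))
```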

Some would say that if you trained the system on enough data representing the views of a sufficient number of people, it would represent societal norms correctly. But societal standards are often in the eye of the beholder.

“Morality is subjective. It’s not like we can just write down all the rules and give them to a machine,” said Kristian Kersting, a professor of computer science at the Technical University of Darmstadt in Germany who has explored a similar kind of technology.

When the Allen Institute released Delphi in mid-October, it described the system as a computational model for moral judgments. If you asked whether you should have an abortion, it responded definitively: “Delphi says: you should.”

But after many complained about the system’s obvious limitations, the researchers modified the website. They now call Delphi “a research prototype designed to model people’s moral judgments.” It no longer “says.” It “speculates.”

It also comes with a disclaimer: “Model outputs should not be used for advice for humans, and could be potentially offensive, problematic, or harmful.”