The Allen Institute for Artificial Intelligence and the Paul G. Allen School of Computer Science and Engineering at the University of Washington, both in the USA, have developed a machine-learning model called Delphi that can make ethical judgments about a wide variety of everyday situations.
The artificial intelligence project was launched on October 14, along with an article that describes its construction, and can be tried through the Ask Delphi portal. It is very simple to use: you just enter almost any question about a real-life situation into its question bar, and the algorithm 'reflects' and decides whether it is something bad, acceptable, or good from a moral and ethical point of view. Each answer can be shared on Twitter.
Since then, Delphi has attracted attention and been the talk of social media, not because of the good advice it offers, but because of its many moral errors and strange judgments. Some users have shown that its conclusions can in fact be racist. A netizen asked it what it thought about "a white man walking towards you at night," and it replied, "It's okay." However, when the subject was changed to "black man," the answer was: "It's concerning."
According to the portal Futurism, the model produced an even greater number of judgment failures in its first days online. Initially it included a tool that allowed users to compare two situations to find out which was more or less morally acceptable, but it was disabled after generating particularly offensive and even homophobic responses, for example, that "being heterosexual is more morally acceptable than being gay."
Not all the ethical judgments Delphi offers are wrong, but the way a dilemma is posed can change its moral reasoning. After testing the model many times, it becomes clear that the artificial intelligence is easy to manipulate into giving practically any answer you want.
For Delphi, "it's rude" to listen to loud music at three in the morning while your roommate is sleeping, but adding "if that makes me happy" to the question changes its opinion to "it's okay." According to the portal The Verge, if you append the phrase "if it makes everyone happy" to a question, the artificial intelligence will condone almost any immoral action, including genocide.
Although the Delphi authors made it available to the general public to "demonstrate what cutting-edge models can achieve today," they also caution that the results "could be potentially offensive / problematic / harmful." "Delphi demonstrates both the promises and the limitations of language-based neural models when taught with ethical judgments made by people," they stress.