Lifelike vs. compliant: OpenAI shuts down convincing GPT-3 chatbots


Anyone who builds a chatbot on the GPT-3 language model that seems too lifelike will inevitably run afoul of the rules of its provider, OpenAI. That is the view of American programmer Jason Rohrer, who developed such a chat application and has now had to shut it down, even though, or precisely because, it was so good that one user conversed with a simulation of his deceased fiancée, The Register reports. After a newspaper report on the application, usage grew and OpenAI warned that its rules had to be followed. But complying, Rohrer says, would have left the chatbot unworkable. The shutdown followed.

As The Register explains, Rohrer had API access to GPT-3 and used it to develop a chat application called "Project December". For a fee, anyone could try out what was by far the most powerful language model, chatting with various bots trained by Rohrer, including "William", which simulated William Shakespeare, and "Samantha", modeled on the AI of the same name in the science-fiction film "Her". Users could also train a bot of their own. That is exactly what one user did with texts from his deceased fiancée, and a report on "Project December" then led to rapidly growing usage figures.


When Rohrer's usage quota threatened to run out, he contacted OpenAI, which then warned him about violations of its terms of use. To keep offering the application, he would first have had to remove the option for users to train their own chatbots. OpenAI also demanded that he implement a filter so the bots could not discuss certain topics, and finally urged him to introduce automated monitoring to scan users' conversations for abusive content. Because Rohrer refused all of this, his access was ultimately cut off.


Rohrer, who has since converted "Project December" to a different language model and is testing it, says that given OpenAI's strict requirements it is not possible to develop any interesting products based on GPT-3, and that anyone who wants to advance chatbot development will run into this limitation. He finds OpenAI's reasoning that such chatbots could be dangerous laughable: if the startup worries that an AI could prompt someone to harm another person or take their own life, it is being "hyper-moral". He refuses to monitor the chats because conversations between a person and an AI are the most private of all; that is precisely why they are so open. A year ago he was still skeptical about how lifelike such a chat could be, but "Samantha" gave him goose bumps.


