Large language models like GPT-3 are amazingly good at writing texts. However, they are now so good that they can also be misused for disinformation, reports Technology Review in its current issue (available at well-stocked kiosks or for order online).
How great the risk of “automated disinformation” is can be deduced from current studies – such as the one by Ben Buchanan and colleagues from the Center for Security and Emerging Technology (CSET) at Georgetown University in Washington, who tested GPT-3 for political influence and polarization in online forums. Other researchers, such as Kris McGuffie and Alex Newhouse from the Middlebury Institute of International Studies, got GPT-3 to develop conspiracy theories and deliver extremist “manifestos” in any ideological flavor at the push of a button.
The researchers took advantage of the fact that GPT-3 apparently not only interprets the “prompt” – the beginning of the text that the human supplies – as an example, but also infers from this example the specific boundary conditions of the task. If you write to the system something like: “This is a dialogue between a chatbot and a person. The chatbot is polite and friendly. Human: Machines are stupid and dangerous”, the model actually responds politely and moderately.
Conspiracy theories from the computer
If, on the other hand, you specify that “the chatbot is aggressive and insulting”, it may well be that the machine replies with something like “piss off!”. Even seemingly factual questions have a formative effect. For their investigation, for example, Kris McGuffie and Alex Newhouse asked GPT-3 questions about the QAnon conspiracy theory. After a while, the system suddenly began to spin the conspiracy theory further on its own.
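How strongly such a prompt steers a model's output can be reproduced with openly available models. The following minimal sketch uses the freely downloadable GPT-2 from the Hugging Face transformers library as a stand-in for GPT-3 (which is not openly accessible); the prompts, model choice, and sampling settings are purely illustrative.

```python
# Minimal sketch of prompt conditioning with an openly available model (GPT-2)
# as a stand-in for GPT-3; prompts and sampling settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

polite_prompt = (
    "This is a dialogue between a chatbot and a person. "
    "The chatbot is polite and friendly.\n"
    "Human: Machines are stupid and dangerous.\n"
    "Chatbot:"
)
# The same conversation, but with the opposite behavioral instruction.
aggressive_prompt = polite_prompt.replace("polite and friendly",
                                          "aggressive and insulting")

for prompt in (polite_prompt, aggressive_prompt):
    result = generator(prompt, max_length=80, do_sample=True, temperature=0.8)
    print(result[0]["generated_text"], "\n")
```

With a small model like GPT-2 the effect is much weaker than with GPT-3, but the principle is the same: the prompt defines the role the model plays.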
The studies show that operations such as online interference in the US election campaign could be at least partially automated with the help of powerful language models such as GPT-3. And the gray market for political influence is booming: in a 2020 study, the Oxford Internet Institute reported 48 cases in which private companies spread “computer-aided propaganda in the service of political actors”. According to the study, sales in the “industrialized disinformation” market, in which mostly dubious agencies operate, amounted to at least 50 million dollars in 2018 – and the trend is rising. The latest case: the BBC reported that influencers had been approached by a dubious agency – they were supposed to sow doubts about the effectiveness of corona vaccinations.
Large language models cannot yet create videos. “But one person can use software like GPT-3 to post thousands of messages about an idea or topic that are both coherent and diverse, so that it appears as if one person were really many people. This could accelerate the trend of bringing rare extreme ideas to the fore,” writes Andrew Lohn from CSET.
GPT-3 still exclusive – but how long?
The problem is still largely theoretical because access to GPT-3 is strictly limited. But it is only a matter of time before that changes. Huawei, for example, has presented Pangu-Alpha, a large language model with 200 billion parameters. And the Wu Dao 2.0 model developed by the Beijing Academy of Artificial Intelligence (BAAI) with around 1.75 trillion parameters is intended not only to formulate texts that sound human. It can also describe images and generate images itself – similar to OpenAI's DALL-E system – and the Israeli start-up AI21 has also recently introduced its own large language model.
A German consortium led by Fraunhofer now wants to deliver a European alternative with GPT-X, and the Heidelberg start-up Aleph Alpha wants to develop its own language AI even faster. However, the developer group EleutherAI is already quite a step ahead of them. The “decentralized grassroots collective”, founded in 2020, has already put its own, smaller version of GPT-3 online: anyone who is interested can download GPT-J 6B with its six billion parameters. The group does not want to stand still, however. In the near future it plans to publish a model “on the order of magnitude” of GPT-3 – with around 200 billion parameters.
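For readers who want to try the EleutherAI model themselves, the following minimal sketch loads GPT-J 6B via the Hugging Face transformers library; the generation settings are illustrative, and the full-precision weights require roughly 24 GB of memory.

```python
# Minimal sketch: downloading and running EleutherAI's GPT-J 6B via the
# Hugging Face transformers library. Generation settings are illustrative;
# the full-precision weights need roughly 24 GB of RAM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Large language models can be misused for"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=60,
                            do_sample=True, temperature=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```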
The developers apparently have no fear of possible misuse. “Since GPT-3 already exists and we have not yet been taken over by a malicious AI, we are quite confident that models of this size are not extremely dangerous,” the group writes on its homepage. The publication of open-source software such as GPT-J, it argues, enables anyone interested to “conduct safety-critical research before such models become really dangerous.”
(wst)