EU study: Deepfakes endanger democracy

Deepfake techniques have the potential to cause a wide range of societal and financial harm, from the manipulation of democratic processes to disruptions of the financial, judicial and scientific systems. Researchers warn of this in a newly presented study commissioned by the European Parliament’s technology assessment panel (STOA).

The researchers define deepfakes as increasingly realistic photos, audio recordings or videos in which people are placed in new contexts with the help of artificial intelligence (AI) systems. Words are often put into people’s mouths that they never actually said.

According to the study, which was led by the Dutch Rathenau Institute with participation from the Karlsruhe Institute of Technology (KIT) and the Fraunhofer Institute for Systems and Innovation Research (ISI) in Germany, such computer-generated manipulations enable “all kinds of fraud”. Identity theft is a frequent goal: individuals – especially women – are exposed to an increased risk of defamation, intimidation and blackmail. At present, the technology is mainly used to superimpose the faces of victims onto those of performers in porn videos.

For the experts, the evaluation of the underlying AI techniques for deepfake video, audio and text synthesis shows that these are developing rapidly and are “becoming cheaper and more accessible every day”. Several trends are helping to create a favorable environment for deepfakes. These include the changed media landscape shaped by platforms such as social networks, the growing importance of visual communication and the increasing spread of disinformation.

“Deepfakes find fertile ground in both traditional and new media because they are often sensational,” the researchers warn. “What is worrying is that pornographic deepfakes without consent are almost exclusively aimed at women.” The risks thus also have “an important gender-specific dimension”.

Deepfakes could also be used in combination with political microtargeting techniques, the authors point out. Such targeted campaigns tend to be particularly effective and have direct psychological consequences for the person targeted. A bogus video could thus not only harm a politician personally, but at the same time influence the electoral prospects of her party and ultimately damage trust in democratic institutions as a whole.

According to the analysis, forged audio recordings could also be used to influence or discredit court proceedings, ultimately threatening the judicial system itself.

“We are dealing with a new generation of digitally manipulated media content that has been getting cheaper and easier to generate for a number of years and, above all, can look deceptively real,” explains co-author Jutta Jahnel from the Institute for Technology Assessment and Systems Analysis (ITAS) at KIT. In principle, the underlying AI methods also open up new possibilities for artists, for digital visualizations in schools and museums, and can help in medical research. Ultimately, however, it is a dual-use technology that should be regulated accordingly.

According to the researchers, technical solutions alone are not enough. In principle, automatic recognition software could draw on a combination of detectable cues such as speaker and face recognition, voice liveness analysis, temporal discrepancies and visual artifacts, or the absence of authenticity indicators. The performance of such algorithms is often measured against a common data set of known fake videos, yet even simple changes in deepfake production technology can drastically reduce the reliability of detectors. High compression rates during distribution, for example via social media, make their work even harder.
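To illustrate one of the “visual artifact” cues the study mentions, the following minimal Python sketch performs a simple error level analysis (ELA): a frame is re-saved as JPEG and compared against the original, so that regions with a different compression history (such as a swapped-in face) can stand out. This is an illustrative sketch only, not one of the detectors the study evaluated; the file name, quality setting and global-mean scoring are assumptions chosen for demonstration.

```python
import io

from PIL import Image, ImageChops


def error_level_analysis(path: str, quality: int = 90) -> float:
    """Return the mean per-channel residual after one JPEG re-save."""
    original = Image.open(path).convert("RGB")

    # Re-save the image as JPEG in memory and reload it.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer).convert("RGB")

    # Pixels whose compression history differs from the rest of the
    # frame (e.g. a pasted-in face) tend to show larger residuals.
    diff = ImageChops.difference(original, resaved)
    pixels = list(diff.getdata())
    return sum(sum(px) for px in pixels) / (3 * len(pixels))


if __name__ == "__main__":
    # "suspect_frame.jpg" is a placeholder for a frame extracted
    # from the video under examination.
    score = error_level_analysis("suspect_frame.jpg")
    print(f"Mean ELA residual: {score:.2f}")
```

The sketch also makes the study’s caveat tangible: heavy recompression during social media distribution flattens exactly these residuals, which is one reason such detectors lose reliability in practice.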

According to the team, the planned EU rules for AI offer one way to reduce some of the identified risks. Deepfakes are expressly covered in the EU Commission’s proposal and would have to meet certain minimum requirements, such as being labeled. However, they do not fall into the high-risk category, so it remains unclear whether they could be banned.

The researchers also appeal to EU lawmakers to tackle the spread of deepfakes on Facebook, Twitter, YouTube & Co. and via media groups. The Digital Services Act (DSA), for example, could oblige these platforms to use detection filters and to restrict the distribution of such images. Ultimately, the “audience dimension” is decisive: labeling trustworthy sources and stepping up the promotion of media literacy could help here.

“Audiovisual evidence must be viewed with greater skepticism and must meet higher standards,” the researchers conclude. Individuals and institutions “will need to develop new skills and methods to construct a trustworthy picture of reality as they will inevitably be confronted with deceptive information”.


(olb)
