Photo of British Prime Minister Sunak with beer “in the wrong glass”, or How deepfakes threaten politicians

Original photo of Rishi Sunak (left) and “fake that threatens democracy”

A fake photo of British Prime Minister Rishi Sunak pouring beer “into the wrong glass” has caused outrage in the Conservative Party. And while it is not clear whether the original photo was edited with AI, the incident has renewed the debate about cracking down on deepfakes and labeling AI-generated images.


California Gov. Gavin Newsom has signed a bill to regulate deepfakes. It prohibits publicly posting fake videos of political candidates within 60 days before an election, Engadget reports.

In just a couple of years, video face-swapping technologies have moved from rather primitive prototypes (above all, the work of the Reddit user deepfakes, who in 2017 trained a neural network to superimpose the faces of actresses onto frames of porn videos) to high-quality algorithms whose results are increasingly difficult to recognize. Naturally, the effectiveness of such algorithms raises the likelihood that deepfakes will be used to do harm, which is especially dangerous for public figures such as politicians, since the Internet often holds a large number of videos featuring them that can be used as a training sample.

The signed bill is the first known law restricting the distribution of deepfakes on the Internet. It restricts the distribution of deepfakes featuring political candidates, in the form of audio, video, and photos, for 60 days before an election. The law applies both to materials in which politicians’ faces and voices are replaced and to materials in which other people’s faces and speech are superimposed on images of politicians. The bill also specifies that frames or videos of politicians that use materials involving other people must be accompanied by a warning. The law will remain in effect until January 1, 2023.

It is still unclear exactly how the substitution in such materials will be detected, automatically or otherwise, but large IT companies are already building their own datasets for training algorithms that detect face substitution in video. Google has already collected a dataset for combating deepfakes, while Facebook and Microsoft, in addition to releasing a dataset, have announced a developer competition.

“Sphere”, 06/15/2020, “Fake Future: International Experience in the (Non-)Regulation of Deepfakes”: Most countries of the world have no clear regulation of deepfakes, although their danger is widely discussed. China can be cited as a “positive” example: since January 1, 2020, deepfakes there must be properly labeled, and the rule applies both to their creators and to the online platform where the video or audio recording is published. Exactly how the authorities will distinguish real videos from high-quality fakes is not specified, but violators face criminal liability.

Such measures were not taken without reason. Deepfakes can not only undermine anyone’s reputation, whether a famous or an ordinary person, but also threaten national institutions and even national security, because political leaders are not protected from deepfakes either. During the 2019 election campaign, for example, the British politician Boris Johnson appeared to invite voters to support his rival for the post of prime minister, Jeremy Corbyn, while Corbyn in turn invited the audience to support Johnson. The two videos were indistinguishable from real ones: the facial expressions and voices matched Johnson’s and Corbyn’s exactly. In fact, both were fakes created by the organization Future Advocacy to show the impact of new technologies on democratic processes. — Inset K.ru

***

The original of this material

© “Izvestia”, 06/23/2023

Distortions are not accepted: deepfakes will be legally defined in the Russian Federation

Alena Nefedova

The official concept of “deepfake” will appear in the Russian legal field. A bill on this is planned to be submitted to the lower house in the autumn session, Izvestia was told in the State Duma Committee on Information Policy. […]

Neurohazard

Deputy committee chairman Oleg Matveychev explained that today people mostly use neural networks for entertainment, placing photos of themselves or loved ones in interesting scenery to see them in a new way. Within 3-5 years, however, the technology will mature, and fraudsters will begin to exploit it. […]

The politician also predicts risks of discrediting: attackers could use photographs of a person to “attribute” to him, in a video, the taking of a bribe, or simply insert his image into an explicit publication. According to him, the plan is not only to give a legal definition of the phenomenon but also to think through regulatory mechanisms for tracking who is responsible, because “one cannot sue a neural network,” he added. Given the urgency of the problem, the document is being prepared for submission to the State Duma as early as this year’s autumn session, the parliamentarian specified.

Izvestia has previously written about such an initiative: the LDPR faction proposed introducing criminal penalties for deepfakes used in financial fraud involving the substitution of a person’s voice or image. On June 4, it became known that the Russian government did not support that bill, noting that the illegal dissemination of a citizen’s personal data, including with the use of artificial intelligence, already falls under Art. 137 of the Criminal Code of the Russian Federation (“Violation of privacy”).

The Reality of the Threat

Such networks consist of two parts: one generates images from the material supplied by the user, while the second filters out elements of the generated content that do not resemble the original data, explained data-processing specialist Dmitry Kaplun. This is what yields fairly high-quality images: the objects are not pasted onto one another separately but created together, as a single picture, from the data provided. […]
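The two-part architecture Kaplun describes is that of a generative adversarial network (GAN): a generator produces candidates and a discriminator rejects ones that do not look like the training data. As a purely illustrative sketch (a toy 1-D example, not the pipeline of any actual deepfake tool), the adversarial training loop can look like this:

```python
import numpy as np

# Toy GAN on 1-D data. The "generator" turns noise into samples; the
# "discriminator" scores samples as real (near 1) or fake (near 0).
# Real data here is a Gaussian centred at 4; all sizes are illustrative.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator: fake = a * z + b, noise z ~ N(0, 1)
w, c = 0.0, 0.0   # discriminator: D(x) = sigmoid(w * x + c)
lr, batch = 0.02, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator ascent step: maximise log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent step: maximise log D(fake), i.e. fool the filter.
    d_fake = sigmoid(w * fake + c)
    g_signal = (1 - d_fake) * w          # d log D(fake) / d fake
    a += lr * np.mean(g_signal * z)
    b += lr * np.mean(g_signal)

samples = a * rng.normal(0.0, 1.0, 1000) + b
```

Because the generator is trained against the discriminator's judgment rather than by pasting pieces together, its output is shaped as a whole, which is the point Kaplun makes about quality.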

“Even experts have a hard time distinguishing a real shot from a deepfake. One telltale sign remains their “flaw”: poor rendering of reflections, for example in mirrors, windows, or even in the pupil of the eye. The algorithms automatically “idealize the picture,” even removing red eyes, whereas in real photographs all of that is present. You can also assess photos by quantitative criteria: if you process them in an appropriate way, you can compute their metrics and see the difference, although soon those numbers will no longer differ. It is a laborious process,” Dmitry Kaplun explained to Izvestia. […]
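The “quantitative criteria” Kaplun mentions can be illustrated with a toy statistic (an assumption for illustration only, not the metric he refers to): generators that “idealize the picture” tend to smooth away fine detail, so a crude measure of high-frequency energy can separate an over-smoothed image from a noisy one.

```python
import numpy as np

def high_freq_energy(img):
    """Mean squared discrete Laplacian of a 2-D grayscale array.

    A toy measure of fine detail/noise; real deepfake detectors use far
    richer features than this single number.
    """
    core = img[1:-1, 1:-1]
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] - 4.0 * core)
    return float(np.mean(lap ** 2))

rng = np.random.default_rng(0)
smooth = np.full((64, 64), 0.5)                    # stand-in for an over-smoothed fake
noisy = smooth + rng.normal(0.0, 0.02, (64, 64))   # stand-in for a real, noisy photo

print(high_freq_energy(smooth) < high_freq_energy(noisy))  # prints True
```

As the quote warns, such simple metrics stop working once generators learn to reproduce realistic noise, so the gap between the two numbers narrows over time.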

