
Photo of British Prime Minister Sunak with beer “in the wrong glass”, or How deepfakes threaten politicians.
A fake photo of British Prime Minister Rishi Sunak pouring beer “into the wrong glass” has caused outrage in the Conservative Party. And although it is not clear whether the original photo was edited with the help of AI, it has reignited the debate over fighting deepfakes and labeling AI-generated images.
Threats to Democratic Processes
British artificial intelligence experts have said that images of politicians generated or edited with generative AI tools such as Midjourney or Stable Diffusion “pose a threat to democratic processes.”
The claim comes in the wake of the recently published edited photograph of British Prime Minister Rishi Sunak at the London Beer Festival: standing behind a bar, he hands a patron a plain, underfilled glass of Black Dub strong beer, while a woman next to him looks at Sunak with a mocking expression.
It soon emerged that the photo had been heavily edited: in the original, the glass was branded, the beer was filled to the brim, and the expression of the woman standing next to him was completely neutral.
The edited image, however, was first posted on social media by Labour MP Karl Turner and later shared by other members of the party.
The Conservatives, in turn, reacted with indignation. Deputy Prime Minister Oliver Dowden called the image unacceptable and urged Labour Party members to take the fake photo down because it is “misleading”.
A future that few will like
Experts warn that the current situation is a preview of what could happen in next year’s election race: while it is not entirely clear whether AI was used on the prime minister’s photo, such programs have made it faster and easier to produce fake (and at the same time convincing) texts, photos and audio.
University of Southampton computer science professor Wendy Hall said that the use of digital technologies in general, and AI in particular, is a threat to democratic processes, and that urgent measures should be taken to minimize the risks ahead of the upcoming elections in the US and the UK.
Shweta Singh, assistant professor of information systems and management at the University of Warwick, in turn argued for a set of ethical principles that would let people know whether what they see in the news is trustworthy information or a fake.
She was echoed by Professor Faten Ghosn, head of the Department of Government at the University of Essex, who noted that politicians have a duty to inform voters when edited images are used. Simply put, images altered with digital technologies should be labeled so that no one is under the illusion that they are real.
A bill based on the same idea has already been put forward in the US.
Labour politicians, for their part, seem to have decided to insist that nothing terrible has happened.
“The main question is: how is anyone supposed to know that the photo is a deepfake? I would not criticize Karl Turner for posting a photo that looks very real,” wrote Darren Jones, chair of the parliamentary Business and Trade Committee.
And in response to criticism from Science Secretary Michelle Donelan, he asked what her department is doing to detect deepfakes ahead of the upcoming elections.
A systemic approach
Meanwhile, earlier this year the Ministry of Science published a report outlining basic principles for governing the development of new technologies as a whole, rather than piecemeal bans on specific products.
After its release, Prime Minister Sunak himself changed his rhetoric, from celebrating AI as a source of new opportunities to warning that AI development needs “guardrails”.
Meanwhile, the most powerful AI corporations have already agreed to a whole set of new limits on AI. In particular, following a meeting with US President Joe Biden, representatives of Amazon, Google, Meta, Microsoft and OpenAI agreed to automatically watermark visual and audio content created with their systems.
“Labeling AI content is the right idea, but whether it will fully work is another question,” says Dmitry Gvozdev, CEO of Information Technologies of the Future. “First, watermarks are very difficult to remove but relatively easy to add. Accordingly, any genuine compromising material that surfaces at the height of the political season can be passed off as a deepfake. And, as we know, exposing fakes and refuting lies takes many times more time and effort than creating them. Second, even if the most advanced developers of generative AI accept the necessary restrictions, it is far from certain that everyone else will. We have already seen outright malicious AI used for phishing: someone found a way to train a language model for criminal purposes, without any of the restrictions that OpenAI and others build into their products. I suspect this is just the beginning. I would not be surprised if ‘black AI’ becomes a standard tool for black-PR operatives and unscrupulous political consultants. From a technical point of view, it is just a matter of computing power.”
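To make the expert’s first point concrete, here is a minimal sketch (Python with numpy and Pillow, both assumptions; “photo.png” is a placeholder) of just how cheap it is to add a watermark: it stamps an “AI-GENERATED” tag into an image’s least significant bits. This is a toy, not any vendor’s actual scheme; real provenance marks are cryptographically signed, but the asymmetry the expert describes is the same.

```python
# Toy illustration only: stamping an "AI-GENERATED" tag into an image's
# least significant bits. Not any vendor's real watermarking scheme.
import numpy as np
from PIL import Image

TAG = "AI-GENERATED"

def embed_tag(img: Image.Image, tag: str = TAG) -> Image.Image:
    """Overwrite the first len(tag)*8 channel LSBs with the tag's bits."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    px = np.array(img.convert("RGB"), dtype=np.uint8)
    flat = px.reshape(-1)  # view into px, so writes land in px
    if bits.size > flat.size:
        raise ValueError("image too small to hold the tag")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return Image.fromarray(px)

def read_tag(img: Image.Image, length: int = len(TAG)) -> str:
    """Collect the first length*8 LSBs and reassemble them into text."""
    flat = np.array(img.convert("RGB"), dtype=np.uint8).reshape(-1)
    return np.packbits(flat[:length * 8] & 1).tobytes().decode(errors="replace")

# One call is enough to make a genuine photo look machine-labeled.
marked = embed_tag(Image.open("photo.png"))
print(read_tag(marked))  # -> AI-GENERATED
```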
The expert added that the widespread use of deepfakes could eventually discredit photo and video content in its current form entirely. But it could also spur the development of new types of storage media that are more technologically advanced and better protected against falsification of the information they contain.
California bans pre-election deepfakes of politicians
California Governor Gavin Newsom has signed a bill regulating deepfakes. It prohibits publicly posting fake videos of political candidates during the 60 days before an election, Engadget reports.
In just a couple of years, video face-swapping technology has moved from rather primitive prototypes (most notably the work of the Reddit user “deepfakes”, who in 2017 trained a neural network to superimpose actresses’ faces onto porn videos) to high-quality algorithms whose results are becoming harder and harder to recognize as fakes. Naturally, the effectiveness of such algorithms increases the likelihood that deepfakes will be used to do harm, which is especially dangerous for public figures such as politicians, since the Internet often holds a large number of videos of them that can be used as a training sample.
The signed bill is the first known law restricting the distribution of deepfakes on the Internet. It restricts the distribution of audio, video and photo deepfakes involving political candidates for 60 days before an election. The law applies both to materials in which politicians’ faces and voices are replaced and to materials in which other people’s faces and speech are superimposed onto images of politicians. At the same time, the bill specifies that frames or videos of politicians that use material involving other people must carry a warning. The law will remain in effect until January 1, 2023.
It is still unclear how exactly the substitution in such materials will be detected, automatically or otherwise, but large IT companies are already building datasets for training algorithms that detect face substitution in videos. Google has already collected a dataset for combating deepfakes, and Facebook and Microsoft, in addition to releasing a dataset, have announced a competition for developers.
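What such datasets make possible is, roughly, supervised training of a real-vs-fake classifier. Below is a minimal PyTorch sketch under stated assumptions: a hypothetical folder of labeled face crops (data/real, data/fake) and an off-the-shelf ResNet-18 backbone. It does not reflect Google’s, Facebook’s or Microsoft’s actual pipelines.

```python
# Minimal real-vs-fake classifier sketch; the folder layout and backbone
# are assumptions, not any company's published detection pipeline.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Expects hypothetical data/real/*.jpg and data/fake/*.jpg face crops.
train_set = datasets.ImageFolder("data", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real / fake
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a token number of epochs for the sketch
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```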
“Sfera”, 06/15/2020, “Fake Future: The International Experience of (Un)regulating Deepfakes”: In most countries there is no clear regulation of deepfakes, although their danger is widely discussed. A “positive” example is China, where since January 1, 2020, deepfakes must be properly labeled, a rule that applies both to their producers and to the online platform where the video or audio recording is published. How exactly the authorities will tell real videos from high-quality fakes is not specified, but violators face criminal liability.
Such measures were not taken without reason. Deepfakes can not only undermine anyone’s reputation, famous or ordinary, but also threaten national institutions and even national security, because political leaders are not protected from deepfakes either. During the 2019 election campaign, for instance, British politician Boris Johnson appeared to invite voters to support his rival for the post of prime minister, Jeremy Corbyn, while Corbyn, in turn, invited the audience to support Johnson. The two videos are indistinguishable from real ones: the facial expressions and voices exactly match Johnson’s and Corbyn’s. In fact, however, they are fakes created by the organization Future Advocacy to show the impact of new technologies on democratic processes. — Inset K.ru
No distortions allowed: “deepfake” to be legally defined in the Russian Federation
The official concept of a “deepfake” will appear in the Russian legal field. A bill to this effect is planned for submission to the lower house in the autumn session, the State Duma Committee on Information Policy told Izvestia. […]
Neurohazard
Oleg Matveychev, deputy chairman of the committee, explained that today people mostly use neural networks for entertainment, placing photos of themselves or their loved ones into interesting scenery to see them in a new way. Within 3-5 years, however, the technology will mature, and fraudsters will take it up. […]
The politician also foresees risks of discreditation: attackers could use a person’s photographs to “attribute” a bribe to them on video, or simply insert their image into an explicit publication. According to him, the plan is not only to give the phenomenon a legal definition but also to work out regulatory mechanisms for tracking who is responsible, because “one cannot sue a neural network,” he added. Given the urgency of the problem, the document is being prepared for submission to the State Duma as early as this autumn session, the parliamentarian specified.
Izvestia has previously written about a similar initiative, when the LDPR faction proposed introducing criminal penalties for deepfakes used in financial fraud through the substitution of a person’s voice or image. On June 4, it became known that the government of the Russian Federation did not support that bill, citing the fact that the illegal distribution of a citizen’s personal data, including with the use of artificial intelligence, already falls under Art. 137 of the Criminal Code of the Russian Federation (“Violation of privacy”).
The Reality of the Threat
Such networks consist of two parts: one generates images from the material supplied by the user, while the other filters out parts of the generated content that do not resemble the original data, explained data-processing specialist Dmitry Kaplun. The result is fairly high-quality images, because the objects are not pasted onto one another separately but are created together, as a single picture, based on the supplied data. […]
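The two-part design Kaplun describes is, in essence, a generative adversarial network. Here is a minimal PyTorch sketch of one generator/discriminator training step, with random tensors standing in for real images and arbitrary layer sizes:

```python
# Toy GAN step: the "two parts" are a generator G and a discriminator D.
import torch
import torch.nn as nn

G = nn.Sequential(  # generator: 64-dim noise -> flattened 28x28 image
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)
D = nn.Sequential(  # discriminator: image -> probability it is real
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(32, 784) * 2 - 1  # stand-in for a batch of real images

# Discriminator step: learn to tell real samples from generated ones.
fake = G(torch.randn(32, 64)).detach()
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: learn to make samples the discriminator accepts as real.
fake = G(torch.randn(32, 64))
loss_g = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```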
“Even experts have a hard time distinguishing a real shot from a deepfake. One remaining giveaway is their ‘flaws’: poor rendering of reflections, for example in mirrors, windows, or even in the pupil of the eye. Generators automatically ‘idealize’ the picture, even removing red eyes, whereas in real photographs all of that is present. You can also evaluate images by quantitative criteria: if you process the photos appropriately, you can calculate their metrics and see the difference, though soon these metrics will no longer differ either. It is a laborious process,” Dmitry Kaplun explained to Izvestia. […]
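The “quantitative criteria” Kaplun mentions can be as simple as comparing image statistics. Below is a toy numpy illustration: the share of spectral energy outside a central low-frequency band, computed with a 2D FFT. The file names are placeholders, and real forensic metrics (noise residuals, sensor fingerprints) are far more involved:

```python
# Toy metric: fraction of FFT energy outside the central low-frequency
# band. Generated images often show different spectral statistics.
import numpy as np
from PIL import Image

def high_freq_ratio(path: str, cutoff: int = 20) -> float:
    gray = np.array(Image.open(path).convert("L"), dtype=np.float64)
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    low = spec[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff].sum()
    return float((spec.sum() - low) / spec.sum())

# Placeholder file names: compare a real photo against a generated one.
print(high_freq_ratio("real_photo.png"))
print(high_freq_ratio("generated_photo.png"))
```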