The grimaces of black PR: a photo of British Prime Minister Sunak with beer “in the wrong glass”, or how deepfakes threaten politicians
A fake photo of British Prime Minister Rishi Sunak pouring beer “into the wrong glass” has caused an uproar in the Conservative Party.
And while it’s not clear if the original photo was AI-edited, there’s been a renewed debate about cracking down on deepfakes and labeling AI-generated images.
Threats to Democratic Processes
British artificial intelligence experts have said that images of politicians generated or edited with generative AI tools such as Midjourney or Stable Diffusion “pose a threat to democratic processes.”
The claim comes in the wake of the recent spread of an edited photo of British Prime Minister Rishi Sunak at the London Beer Festival, standing behind a bar and holding an unbranded, underfilled glass of Black Dub strong beer, while a woman next to him looks at him with a mocking expression.
It soon emerged that the photo had been heavily edited: in the original, the glass was branded, the beer was filled to the brim, and the expression of the woman standing next to him was entirely neutral.
The edited image was first posted on social media by Labour MP Karl Turner and later shared by other members of the party.
The Conservatives, in turn, were outraged. Deputy Prime Minister Oliver Dowden called the image unacceptable and urged Labour Party members to delete the fake photo because it is “misleading”.
A future that few will like
Experts warn that the episode is a preview of what could happen in next year’s election campaign: while it is not entirely clear whether AI was used to alter the prime minister’s photo, such tools have made it far faster and easier to produce fake (and at the same time convincing) text, photos and audio.
University of Southampton computer science professor Wendy Hall said that the use of digital technologies in general, and AI in particular, poses a threat to democratic processes, and that urgent measures should be taken to minimize the risks ahead of the upcoming elections in the US and the UK.
Shweta Singh, assistant professor of information systems and management at the University of Warwick, in turn called for a set of ethical principles that would let people trust that what they see in the news is reliable information, not fakery.
She was echoed by Professor Faten Ghosn, head of the Department of Government at the University of Essex, who noted that politicians have a duty to tell voters when they use edited images. Simply put, images altered with digital tools should be labeled so that no one mistakes them for the real thing.
A bill based on the same idea has already been put forward in the US.
Labour politicians, for their part, appear determined to insist that nothing terrible has happened.
“The main question is: how is anyone supposed to know that a photo is a deepfake? I would not criticize Karl Turner for posting a photo that looks very real,” wrote Darren Jones, chair of the parliamentary Business and Trade Committee.
And in response to criticism from Science Secretary Michelle Donelan, he asked what her department is doing to detect deepfakes ahead of the upcoming elections.
Meanwhile, the science ministry published a report earlier this year outlining basic principles for governing the development of new technologies, rather than piecemeal bans on specific products.
After its release, Prime Minister Sunak himself shifted his rhetoric from celebrating AI as a source of new opportunities to warning that AI development needs “guardrails”.
Meanwhile, the largest AI corporations have already agreed to a whole set of new limits on AI. In particular, following a meeting with US President Joe Biden, representatives of Amazon, Google, Meta, Microsoft and OpenAI agreed to automatically watermark visual and audio content created with their tools.
“Labeling AI content is the right idea, but whether it will work as intended is another question,” says Dmitry Gvozdev, CEO of Information Technologies of the Future. “First, watermarks are very difficult to remove but relatively easy to add. Accordingly, any genuine compromising material that surfaces at the height of the political season can be passed off as a deepfake. And, as we know, exposing fakes and refuting lies takes many times more time and effort than creating them. Second, even if the most advanced developers of generative AI accept the necessary restrictions, it is far from certain that everyone else will. We have already seen openly malicious AI used for phishing: someone found a way to train a language model on criminal activity, without any of the restrictions that OpenAI and others build into their products. I suspect this is just the beginning. I would not be surprised if ‘black AI’ becomes a standard tool for black-PR operatives and unscrupulous political consultants. From a technical point of view, it is just a matter of computing power.”
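The asymmetry Gvozdev describes, that a watermark is easy to stamp onto content but hard to scrub out, is easiest to see with the classic least-significant-bit technique. The sketch below is purely illustrative and is not the scheme any of the named companies actually use; the function names and the `b"AI-GEN"` tag are invented for the example. It hides a short provenance tag in the low bits of raw pixel bytes, and the same few lines would let anyone stamp that tag onto unrelated, genuine imagery.

```python
# Illustrative LSB watermarking sketch (not any vendor's real scheme).
# Shows why adding a mark is trivial: a few lines of code suffice.

def embed_tag(pixels: bytearray, tag: bytes) -> bytearray:
    """Write each bit of `tag` into the least significant bit
    of successive carrier bytes (LSB-first within each tag byte)."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("carrier too small for tag")
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit  # clear LSB, set tag bit
    return out

def extract_tag(pixels: bytes, tag_len: int) -> bytes:
    """Read `tag_len` bytes back out of the carrier's LSBs."""
    tag = bytearray(tag_len)
    for i in range(tag_len * 8):
        tag[i // 8] |= (pixels[i] & 1) << (i % 8)
    return bytes(tag)

carrier = bytearray(range(256)) * 2      # stand-in for image pixel data
marked = embed_tag(carrier, b"AI-GEN")   # hypothetical provenance tag
assert extract_tag(marked, 6) == b"AI-GEN"
```

Changing only the lowest bit of each byte leaves the image visually identical, which is exactly what makes such a mark both unobtrusive and, in the scenario Gvozdev outlines, abusable: stamping it onto an authentic photo makes the photo look machine-labeled as AI-generated.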
The expert added that the widespread use of deepfakes could eventually discredit photo and video content in its current form entirely. But it could also spur the development of new kinds of storage media that are more advanced and better protected against attempts to falsify the information they contain.