Artificial intelligence for scammers
The original of this material: © Forbes.ru, 11.01.2023
How Cybercriminals Create Malware Using the ChatGPT Chatbot
Thomas Brewster
Underground forum users have begun sharing malware created with OpenAI’s popular ChatGPT chatbot, and dating-site scammers are planning to use the tool to build bots that impersonate real girls. Experts predict even more malicious uses of ChatGPT.
Cybersecurity researchers warn that Internet scammers have begun using ChatGPT, OpenAI’s AI-powered chatbot, to quickly create hacking tools. One expert monitoring underground forums also told Forbes USA that scammers are testing ChatGPT’s ability to create other chatbots designed to impersonate girls and trap victims.
Many early adopters of ChatGPT sounded the alarm that the app, which became popular within days of its December launch, could be used to create ransomware or keystroke-logging malware.
Such tools are already circulating on underground crime forums, according to a report by Israeli security firm Check Point. In one forum post, a hacker who had previously shared Android malware showed off code written by ChatGPT that would steal files of interest, compress them, and send them over the network. The same hacker demonstrated another tool that installed a backdoor (a hidden means of remote access) on a computer and could download additional malware onto the infected PC.
On the same forum, another user shared a Python script that can encrypt files, claiming that the OpenAI app had helped him create it and that it was the first script he had ever developed. As Check Point noted in its report, such code can serve harmless purposes, but it can also be “easily modified to completely encrypt someone’s device without any user interaction,” much like ransomware. Check Point also noted that the same forum user had previously sold access to hacked company servers and stolen data.
Another user described misusing ChatGPT to build functionality for a darknet marketplace, akin to illicit online markets such as Silk Road or AlphaBay. As an example, the user showed how the chatbot could quickly generate an application that tracks cryptocurrency prices for a hypothetical payment system.
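The article does not reproduce the forum user’s code, but a price tracker of the kind described takes only a few lines of Python. A minimal sketch follows, assuming the public CoinGecko price endpoint; the coin list, polling interval, and function names are illustrative, not the code shown on the forum.

import time
import requests

# Public CoinGecko "simple price" endpoint (an assumption for this sketch;
# the forum post could have used any price source).
API_URL = "https://api.coingecko.com/api/v3/simple/price"

def fetch_prices(coins, currency="usd"):
    # Return the latest quotes for the given coin IDs, e.g. ["bitcoin"].
    resp = requests.get(
        API_URL,
        params={"ids": ",".join(coins), "vs_currencies": currency},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Poll a few coins once a minute and print the latest prices.
    while True:
        prices = fetch_prices(["bitcoin", "ethereum", "litecoin"])
        for coin, quote in prices.items():
            print(f"{coin}: ${quote['usd']:,}")
        time.sleep(60)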
Alex Holden, founder of cyber-intelligence firm Hold Security, said he has seen dating-site scammers begin using ChatGPT as well, in an attempt to create convincing “real people.” “They are planning to create chatbots that will impersonate girls in order to get further in correspondence with their victims,” he said. “They are trying to automate empty talk.”
At the time of publication, OpenAI had not responded to a request for comment.
Although the tools created with ChatGPT looked fairly simple, Check Point believes it is only a matter of time before more sophisticated hackers find a way to turn the AI to their advantage. Rik Ferguson, Vice President of Security and Threat Intelligence at US firm Forescout, said ChatGPT does not yet appear capable of creating anything as sophisticated as the ransomware strains seen in major hacking incidents in recent years, such as Conti, infamous for its use in the attack on Ireland’s national healthcare system. However, the OpenAI tool lowers the barrier to entry for newcomers to this illegal market by letting them create simpler yet still effective malware, Ferguson added.
He also expressed concern that, instead of writing code that steals victims’ data, ChatGPT could be used to build websites and bots that trick users into handing over their information. “This could industrialize the creation and personalization of malicious web pages, highly targeted phishing campaigns and social engineering scams,” he said.
Sergei Shikevich, a researcher on Check Point’s threat intelligence team, told Forbes USA that ChatGPT would be a “great tool” for Russian hackers who don’t speak English to create believable phishing emails.
As for protecting against criminal use of ChatGPT, Shikevich said that, “unfortunately,” it would ultimately have to come down to regulation. OpenAI has put some controls in place, flagging obvious requests to create spyware with policy alerts, but hackers and journalists have found ways around these protections. Shikevich said companies like OpenAI may have to be legally compelled to train their AI to detect such abuse.
3dnews.ru, 01/10/2023, “App Store and Play Market flooded with dubious applications pretending to be ChatGPT – there is no official one”: The AI-powered ChatGPT bot is one of the hottest topics in the tech industry today, and some unscrupulous developers have taken advantage of the trend by posting shady apps on the App Store and Play Store that mimic the popular bot. For an additional fee, the apps offer users “advanced features” and extra requests to the chatbot. It is important to remember that access to ChatGPT is free and that OpenAI, the company that developed the bot, has not released any official applications. Topping the charts in a number of countries, for example, is an application called “ChatGPT Chat GPT AI With GPT-3”: it is free to install but offers a subscription for $7.99 per week or $49.99 per month for an unlimited number of requests to the bot. In reality, reviewers note, the subscription provides no advantages. The same developer previously had a similar app with over 100,000 downloads, but it has since been removed.
The point of all this dubious activity is to release just about any app with “ChatGPT” in its name and push it as high as possible in the search results – some developers even release multiple apps in the hope that at least one of them will rack up enough downloads. The App Store and Play Store were similarly flooded with fakes of the once-popular Wordle and Flappy Bird. It is not yet clear whether Apple and Google intend to do anything about the influx of dubious applications piggybacking on ChatGPT’s popularity. — Inset K.ru
Translation by Ksenia Lychagina