
Years ago, OpenAI warned that generative AI systems such as its model ChatGPT could be harnessed for nefarious purposes. That prediction is now becoming reality with the rise of malicious AI chatbots like WormGPT and FraudGPT. The latter is designed to automate hacking and data theft, making it easier for cybercriminals to carry out their attacks.
Generative AI, a term used to describe AI systems built on transformer models, has evolved rapidly in recent years. With OpenAI taking the lead, the tech industry, including ill-intentioned individuals and groups, is rushing to exploit the capabilities of this new technology.
The Threat of FraudGPT
In the past few days, FraudGPT has been advertised on various hacking forums, generating significant concern among cybersecurity professionals. The unidentified developer behind this malicious chatbot boasts that it can revolutionize online fraud. To use FraudGPT, hackers simply tell the AI what they need, for example, a message designed to lure victims into clicking a malicious link in a spam SMS.
Google and other AI giants have implemented guardrails to prevent their models from generating malicious code; FraudGPT, by contrast, openly advertises that capability. While no actual samples have appeared in the forum posts, the claim isn't far-fetched given what legitimate generative AI platforms can do.
Moreover, FraudGPT allegedly deals in stolen data, which could be fed into the model for more targeted attacks. It is also claimed to scan websites and identify those most vulnerable to infiltration, making it a potent tool in the hands of cybercriminals.
FraudGPT is sold not as a one-time purchase but as a subscription, which suggests its functionality will continue to evolve. Access to the malware-generating bot costs $200 per month, considerably more than WormGPT's $60 monthly fee. According to the FraudGPT developer, more than 3,000 sales have already been made, signaling a potential surge in sophisticated scam messages.
The Menace of FraudGPT: What Research Tells Us
According to Rakesh Krishnan, a senior threat analyst at cybersecurity firm Netenrich, FraudGPT has been circulating on Telegram channels since July 22. Krishnan's report notes that the AI bot is intended solely for offensive purposes such as spear-phishing email creation, tool creation, carding, and more.
FraudGPT is available for subscription, priced at $200 per month or $1,700 per year. A significant portion of Netenrich’s report highlights how FraudGPT can be employed for business email compromise (BEC) attacks. It allows an attacker to craft emails that increase the likelihood of the recipient clicking on a harmful link.
But that's not all. Krishnan states that FraudGPT could simplify the creation of hacking tools, undetectable malware, and harmful code, help attackers find leaks and exploitable vulnerabilities in businesses' technology systems, and even teach would-be criminals how to code and hack. The chatbot service also appears to incorporate stolen credit card numbers and other hacker-obtained data, with the developer providing instructions on how to conduct fraud.
How to Protect Yourself from FraudGPT
Despite the numerous benefits AI advancements offer, they also open up fresh avenues for attacks. Thus, robust prevention measures are necessary. Here are some strategies:
- BEC-Specific Training: To counter AI-supported BEC attacks, organizations should create comprehensive, frequently updated training programs. Employees should be educated on the nature of BEC threats, how AI could be used to enhance them, and the tactics used by attackers. This training should be a part of employees’ ongoing professional development.
- Enhanced Email Verification Measures: Organizations should implement stringent email verification policies to protect themselves from AI-driven BEC attacks. This includes setting up email systems that flag communications containing specific words associated with BEC attacks, such as "urgent," "sensitive," or "wire transfer." Measures should also be in place to detect when external emails closely resemble those of internal executives or vendors, so that potentially damaging emails are scrutinized before any action is taken (a small illustration follows this list).
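To make the second point concrete, here is a minimal Python sketch of how such a screen might work: it flags messages containing common BEC pressure phrases and senders whose addresses closely resemble trusted ones. It is an illustrative example only, not any vendor's actual filtering logic; the keyword list, the flag_bec_indicators function, and the TRUSTED_SENDERS addresses are hypothetical placeholders for whatever an organization's mail gateway would actually use.

```python
import difflib

# Illustrative (not exhaustive) keywords commonly seen in BEC lures
BEC_KEYWORDS = {"urgent", "sensitive", "wire transfer", "payment update", "gift card"}

# Hypothetical internal executive / vendor addresses to guard against lookalikes
TRUSTED_SENDERS = ["ceo@example.com", "finance@trusted-vendor.com"]


def flag_bec_indicators(sender: str, subject: str, body: str) -> list[str]:
    """Return a list of reasons an inbound email should be held for review."""
    reasons = []
    text = f"{subject} {body}".lower()

    # 1. Keyword screening: flag messages using classic BEC pressure phrases.
    hits = [kw for kw in BEC_KEYWORDS if kw in text]
    if hits:
        reasons.append(f"contains BEC keywords: {', '.join(sorted(hits))}")

    # 2. Lookalike detection: flag senders whose address closely resembles
    #    a trusted internal/vendor address but is not identical to it.
    for trusted in TRUSTED_SENDERS:
        similarity = difflib.SequenceMatcher(None, sender.lower(), trusted).ratio()
        if sender.lower() != trusted and similarity > 0.85:
            reasons.append(
                f"sender resembles trusted address {trusted} ({similarity:.0%} similar)"
            )

    return reasons


if __name__ == "__main__":
    # Example: a spoofed "CEO" address asking for an urgent wire transfer
    for reason in flag_bec_indicators(
        sender="ce0@example.com",
        subject="Urgent wire transfer needed today",
        body="Please keep this sensitive and process the payment immediately.",
    ):
        print("FLAG:", reason)
```

Running the example flags the message both for its "urgent"/"wire transfer" wording and for the near-identical sender address. In practice, rules like these would sit alongside standard controls such as SPF, DKIM, and DMARC and route suspicious messages to human review rather than replace it.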
To sum up, while generative AI holds immense promise, its misuse by cybercriminals presents a significant threat. Society at large must remain vigilant against potential abuses like FraudGPT, and businesses must adopt stringent measures to guard against potential attacks. For more information on cyber threats and prevention strategies, visit Netenrich.