The growing role of generative AI in cyberattacks: a new threat landscape

As cybercriminals turn to generative AI to intensify their attacks on unsuspecting victims, what can we do to protect ourselves against these new threats?
  • Is ChatGPT a new weapon in the hands of cybercriminals?
  • Say hello to ChatGPT’s evil twin, WormGPT
  • How to protect yourself against AI-generated cyberattacks
  • Perception Point’s new AI model offers a new line of defence against AI-based cyberattacks
  • Cybercriminals beware, DarkBERT is here

The field of artificial intelligence (AI) has experienced remarkable growth in recent years, fueled in large part by the emergence of generative AI. At its core, this groundbreaking technology involves training powerful machine learning models to autonomously produce creative and strikingly authentic outputs, which can span a broad range of content, from written text and visual imagery to audio that sounds convincingly human. The most impressive feature of generative AI lies in its ability to learn from vast troves of existing data, interpret intricate patterns, and then simulate those patterns in the creation of new, original content. This level of machine-driven creativity is nothing short of revolutionary, representing a paradigm shift in how we view the boundaries of artificial intelligence.

In a relatively short amount of time, generative AI has found a diverse and expansive set of applications in many different fields. These include the generation of digital content, where AI is used to create articles, social media posts, and more; the art domain, where AI-produced artworks challenge the traditional understanding of creativity; and the entertainment industry, where AI can contribute to film production, music creation, and game design. However, as with any other transformative technology, generative AI has also found more nefarious applications in the hands of cybercriminals, who are increasingly exploiting its creative abilities to orchestrate attacks against unsuspecting targets. These can range from creating convincing fake digital identities for fraudulent purposes to generating highly sophisticated phishing emails designed to steal sensitive information. As we continue to explore and celebrate the transformative power of generative AI, we must simultaneously remain alert to the potential threats it poses.

“Generative AI allows cybercriminals to produce error-free, authentic-looking emails that avoid established patterns, thus making the attack increasingly difficult to identify”.

Dan Shiebler, the Machine Learning team lead at Abnormal Security

Is ChatGPT a new weapon in the hands of cybercriminals?

A recent study conducted by Abnormal Security, a leading email security platform, revealed an alarming surge in threat actors’ use of generative AI tools like ChatGPT to execute sophisticated attacks, including vendor fraud, business email compromise (BEC), and credential phishing. Previously, recognising phishing attempts was as simple as spotting spelling and grammar errors in the email’s text. However, generative AI has changed the game due to its ability to simulate genuine communication with unsettling accuracy, making it significantly harder to distinguish genuine emails from scams. Dan Shiebler, the Machine Learning team lead at Abnormal Security, points out that generative AI allows cybercriminals to produce error-free, authentic-looking emails that avoid established patterns, thus making the attack increasingly difficult to identify.

The research also highlighted a shift in focus among threat actors from standard BEC attacks to vendor impersonation attacks, also known as vendor email compromise (VEC) attacks. These attacks leverage the trust between customers and vendors and use sophisticated social engineering techniques to trick the target. Generative AI also allows offenders to personalise their attacks by incorporating aspects of their target’s digital footprint into the emails, further enhancing the deception. ChatGPT could even offer a gateway for less-skilled criminals to launch cyberattacks. “As we suspected, some of the cases clearly showed that many cybercriminals using OpenAI have no development skills at all”, warns a recent report by cybersecurity firm Check Point Research. “Although the tools that we present in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad”.

Say hello to ChatGPT’s evil twin, WormGPT

In an attempt to combat the rampant abuse of its platform, OpenAI implemented more stringent barriers and restrictions within the ChatGPT user interface, which are designed to prevent the model from generating harmful content, such as phishing emails or malware. However, despite this move, cybercriminals are still finding ways to circumvent ChatGPT’s protective measures. According to Check Point Research, underground forums are abuzz with discussions on ways to exploit OpenAI’s API to create malicious content. The most common technique involves the creation of Telegram bots that use said API, which currently lacks adequate anti-abuse measures. Another alarming trend among cybercriminals involves devising ‘jailbreaks’ for interfaces like ChatGPT, which are basically specially constructed prompts designed to trick ChatGPT into generating output that could be used to manipulate a potential target into revealing sensitive information or executing malicious code.

An investigation by the cybersecurity company SlashNext has revealed a new cybercrime tool named WormGPT, which is being advertised in the underground scene as “a blackhat alternative to GPT models”. Described as a launching pad for advanced phishing and business email compromise (BEC) attacks, WormGPT is equipped with unlimited character support, chat memory retention, and code formatting capabilities. The tool’s developer claims that it’s been trained on a variety of data, with a particular focus on malware-related datasets, although the specific datasets remain confidential. To put its capabilities to the test, the researchers conducted an experiment in which WormGPT crafted an alarmingly persuasive email pressuring an unsuspecting account manager to pay a fraudulent invoice. According to the developer, more than 1,500 users have already paid for access to WormGPT, which is available through a subscription model. Security researchers have also unearthed FraudGPT, another malicious chatbot advertised on hacking forums. Similar to WormGPT, FraudGPT can generate SMS phishing messages and even provide information on optimal sites for credit card fraud.

How to protect yourself against AI-generated cyberattacks

The emergence of sophisticated, AI-driven tools like WormGPT underlines the ongoing evolution of cyber threats and emphasises the need for individuals and organisations to bolster their defence systems with robust countermeasures and mitigating strategies. Multi-factor authentication (MFA), for instance, provides an essential layer of security, acting as an additional barrier against unauthorised access. By demanding additional verification through separate mediums, such as a fingerprint or a unique text code, MFA adds a formidable hurdle for cybercriminals to overcome. Education also plays a pivotal role in a comprehensive security approach. Companies need to conduct regular security awareness training to keep employees updated on emerging cybersecurity threats. This includes ensuring they understand the nuances of phishing and BEC attacks, which can be devastatingly subtle, often appearing as legitimate requests from trusted sources. Advanced email filtering systems are also a must-have. These sophisticated tools can identify and block suspicious emails using complex algorithms and heuristic analysis, ensuring that potentially dangerous communication is stopped before it even reaches an inbox.
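To make the idea of heuristic email filtering more concrete, here is a minimal sketch in Python of how such a scorer might work. It is purely illustrative: the keyword rules, weights, and quarantine threshold are invented for the example and bear no relation to any commercial filter, which would combine hundreds of signals rather than a handful of regular expressions.

```python
import re

# Toy heuristic rules; real filters combine many more signals
# (sender reputation, authentication results, URL analysis, etc.).
RULES = [
    (re.compile(r"urgent|immediately|within 24 hours", re.I), 2.0),   # pressure tactics
    (re.compile(r"wire transfer|invoice attached|payment details", re.I), 2.5),
    (re.compile(r"verify your (account|password|credentials)", re.I), 3.0),
    (re.compile(r"https?://\d{1,3}(\.\d{1,3}){3}", re.I), 3.5),        # links to raw IP addresses
]

def score_email(subject: str, body: str) -> float:
    """Return a simple risk score for an email based on keyword heuristics."""
    text = f"{subject}\n{body}"
    return sum(weight for pattern, weight in RULES if pattern.search(text))

def is_suspicious(subject: str, body: str, threshold: float = 4.0) -> bool:
    """Flag an email for quarantine when its heuristic score crosses the (invented) threshold."""
    return score_email(subject, body) >= threshold

if __name__ == "__main__":
    subject = "Action required: verify your account immediately"
    body = "Please confirm your payment details at http://192.168.10.5/login"
    print(is_suspicious(subject, body))  # True for this obviously phishy example
```

As the article notes, rules like these catch only crude attempts; AI-generated messages that avoid tell-tale keywords are precisely why more advanced, model-based detection is needed.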

To foster a proactive approach to cybersecurity, organisations need to promote a culture of scepticism. Employees should be urged to double-check and verify requests for sensitive information through separate communication channels before complying. This acts as an additional, human layer of defence against clever social engineering tactics. Regular software and system updates should not be overlooked, either. These updates often contain vital security patches designed to fix known vulnerabilities, and postponing them can leave doorways open for cybercriminals to exploit. With tools like WormGPT capable of generating highly convincing fake emails, traditional security measures may prove inadequate, so it becomes increasingly critical for cybersecurity experts to develop advanced detection mechanisms capable of identifying the tell-tale signs of AI-generated content. The path to thwarting AI-driven cybercrime requires a collaborative effort. Industry professionals, researchers, and policymakers must come together to better understand and address the growing threat landscape, and drive the development of effective solutions to combat these increasingly sophisticated threats.

“Legacy email security solutions which rely on signatures and reputation analysis struggle to stop even the most basic payload-less BEC attacks”.

Tal Zamir, CTO of Perception Point

Perception Point’s new AI model offers a new line of defence against AI-based cyberattacks

In response to the recent surge in cyberattacks that leverage the power of generative AI, the cybersecurity provider Perception Point has unveiled a cutting-edge detection technology that uses large language models (LLMs) in combination with a deep learning architecture to identify and counter AI-based business email compromise (BEC) attacks. According to the company, the new technology helps address some of the main limitations of conventional email security systems, which rely on contextual and behavioural detection to identify potential threats. However, since generative AI-based attacks lack the conventional patterns these systems have been trained to recognise, they often go undetected. “Legacy email security solutions which rely on signatures and reputation analysis struggle to stop even the most basic payload-less BEC attacks”, explains Tal Zamir, CTO of Perception Point. “Our new model’s key strength lies in recognising the repetition of identifiable patterns in LLM-generated text. The model uses a unique three-phase architecture that detects BEC at the highest detection rates and minimises false positives”.
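To illustrate the general idea of “recognising the repetition of identifiable patterns in LLM-generated text”, the toy sketch below measures how many word n-grams an incoming email shares with a small corpus of known AI-generated phishing samples. This is a simplified illustration of the principle, not Perception Point’s model; the sample texts and the simple overlap score are assumptions made purely for the example.

```python
import re

def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a lowercased, punctuation-stripped text."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def repetition_score(email: str, known_ai_samples: list[str], n: int = 3) -> float:
    """Fraction of the email's n-grams that also appear in known AI-generated samples."""
    email_grams = ngrams(email, n)
    if not email_grams:
        return 0.0
    corpus_grams = set().union(*(ngrams(s, n) for s in known_ai_samples))
    return len(email_grams & corpus_grams) / len(email_grams)

# Invented samples purely for illustration.
known_ai_samples = [
    "I hope this message finds you well. Please process the attached invoice at your earliest convenience.",
    "I hope this message finds you well. Kindly confirm the updated payment details at your earliest convenience.",
]
incoming = "I hope this message finds you well. Please confirm the attached invoice at your earliest convenience."
print(round(repetition_score(incoming, known_ai_samples), 2))  # high overlap -> suspicious
```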

Another major issue with existing solutions is that they depend heavily on post-delivery detection, leaving harmful emails lurking in the user’s inbox for a considerable time before they are removed. Perception Point’s new solution, by contrast, takes a more proactive approach, scanning every email before it lands in the user’s inbox and quarantining those deemed suspicious. This minimises the risk and potential damage associated with detection techniques that only act after the system has been compromised. According to the company, the solution takes only 0.06 seconds on average to process an incoming email. To make this possible, the company trained the model on hundreds of thousands of malicious samples collected over the years, while regular updates ensure it remains effective against emerging threats. To limit false positives that may arise from legitimate AI-generated emails, the model uses a unique three-phase structure: after initial scoring, transformers and clustering algorithms are used to categorise email content. Combining the insights gained from these processes with additional data, such as the sender’s reputation and authentication protocol details, the model determines whether an email is AI-generated and assesses its potential threat level.
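A rough sketch of what such a three-phase pipeline could look like is shown below. It is an assumed, heavily simplified reconstruction of the general approach described above, not Perception Point’s implementation: to keep it self-contained, a TF-IDF vectoriser and k-means stand in for transformer embeddings and production-grade clustering, and all thresholds, weights, and field names are invented.

```python
from dataclasses import dataclass
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

@dataclass
class EmailMeta:
    sender_reputation: float  # 0.0 (unknown/bad) .. 1.0 (trusted)
    spf_dkim_pass: bool       # did email authentication checks pass?

def phase1_initial_score(body: str) -> float:
    """Phase 1: cheap lexical scoring (a stand-in for the real first-pass filters)."""
    red_flags = ("invoice", "wire transfer", "urgent", "payment details")
    return sum(flag in body.lower() for flag in red_flags) / len(red_flags)

def phase2_categorise(bodies: list[str], n_clusters: int = 2) -> list[int]:
    """Phase 2: embed and cluster email content (TF-IDF + k-means as a toy stand-in
    for transformer embeddings and production clustering)."""
    vectors = TfidfVectorizer().fit_transform(bodies)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vectors).tolist()

def phase3_verdict(score: float, cluster: int, suspicious_clusters: set[int], meta: EmailMeta) -> bool:
    """Phase 3: fuse content signals with sender reputation and authentication results."""
    risk = score
    if cluster in suspicious_clusters:   # clusters labelled suspicious from historical data
        risk += 0.5
    if not meta.spf_dkim_pass:
        risk += 0.3
    risk -= 0.4 * meta.sender_reputation
    return risk >= 0.6  # invented quarantine threshold

emails = [
    "Urgent: please process the attached invoice and send the wire transfer today.",
    "Hi team, attached are the meeting notes from yesterday's planning session.",
]
clusters = phase2_categorise(emails)
meta = EmailMeta(sender_reputation=0.1, spf_dkim_pass=False)
print(phase3_verdict(phase1_initial_score(emails[0]), clusters[0], {clusters[0]}, meta))  # True
```

The point of the staged design, as described, is that cheap checks run first and the more expensive content analysis and metadata fusion only refine the verdict, which is how pre-delivery scanning can stay fast.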

Cybercriminals beware, DarkBERT is here

Large language models (LLMs) like ChatGPT and Bard, which have gained substantial traction lately, are typically trained on diverse, publicly accessible data, such as webpages, academic papers, and books. However, researchers at the Korea Advanced Institute of Science and Technology (KAIST), in collaboration with data intelligence company S2W, decided to take a slightly different approach with their new AI model, DarkBERT. What sets DarkBERT apart from conventional LLMs is that it was trained exclusively on data obtained from the dark web via the Tor network. The training process, which took place over 16 days, involved two different datasets, with information such as victim identities and leaked data details carefully redacted. The researchers also took further steps to ensure the ethical handling of sensitive information, employing strategies like deduplication, data filtering, and pre-processing. Based on the RoBERTa architecture, originally introduced by Facebook researchers in 2019, DarkBERT surpasses its predecessor’s capabilities by effectively deciphering the complex environment of the dark web, enabling it to better understand its intricate nuances and delve into the hidden layers this part of the internet is known for. But don’t let its ominous name fool you: DarkBERT was actually designed to fight cybercrime. With the dark web as its training ground, it outperforms traditional LLMs in cybersecurity and cyber threat intelligence (CTI) applications. Among other things, the model can be used to trace the origins of ransomware leaks and identify suspicious websites, offering security investigators a new tool in their fight against cybercrime.
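For a sense of how a RoBERTa-style model is put to work on this kind of task, the sketch below loads the publicly available roberta-base checkpoint as a stand-in (DarkBERT itself is access-controlled) and attaches a sequence-classification head that could, after fine-tuning on labelled dark-web text, flag pages such as ransomware leak sites. The label names and example text are assumptions for illustration, and an untrained classification head will produce essentially random predictions until it is fine-tuned.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# roberta-base is used here as a publicly available stand-in for DarkBERT,
# which is access-controlled; both are based on the RoBERTa architecture.
MODEL_NAME = "roberta-base"
LABELS = ["benign", "leak-site"]  # hypothetical labels for a CTI classification task

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))

def classify(text: str) -> str:
    """Classify a snippet of (dark-web) text; meaningful only after fine-tuning
    the classification head on labelled threat-intelligence data."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify("Company data will be published unless the ransom is paid within 72 hours."))
```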

In closing

The innovative capabilities of generative artificial intelligence, once celebrated for creating a diverse array of original content, are now being harnessed by cybercriminals. Thanks to its uncanny ability to mimic genuine human communication, generative AI is increasingly being used for a wide range of harmful activities, such as scams and phishing attacks. Even more concerning, a new breed of AI tools, trained on harmful datasets, has surfaced in obscure corners of the internet, enabling nefarious actors to orchestrate sophisticated attacks and raising serious cybersecurity concerns.

In response to these threats, new defensive solutions are being explored, which utilise the power of AI to identify and counter AI-generated cyber threats. This innovative approach focuses on discerning repeated patterns in AI-generated texts, which may be indicative of fraudulent activities. Furthermore, AI models trained specifically on data from the dark corners of the internet are being developed. These models, capable of tracing cyber threats back to their origins and identifying suspicious online activities, are opening up new avenues in the fight against cybercrime. As we face increasingly sophisticated AI-driven threats, it’s vital for individuals and organisations to strengthen their defensive systems, stay abreast of emerging threats, and encourage a culture of scepticism. Only then will we be able to keep the threat at bay.
