Hackers are using LLMs to build the next generation of phishing attacks - here's what to look out for
  • Unit 42 warns GenAI enables dynamic, personalized phishing websites
  • LLMs generate unique JavaScript payloads, evading traditional detection methods
  • Researchers urge stronger guardrails, phishing prevention, and restricted workplace LLM use

When Generative Artificial Intelligence (GenAI) first emerged, early opinion makers were discussing dynamic websites - sites that are not designed upfront and published, but rather generated on the spot for each visitor, depending on their location, the keywords they used, their browsing habits, their device, their intent, and so on.

The age of static websites was apparently almost over, and before long, the content we see on the internet would be unique and tailored solely to us.

While that dream still hasn’t materialized, its pioneers will most likely be cybercriminals.

Not exactly theoretical

Security researchers from Palo Alto Networks’ Unit 42 team have found that the technique can easily be used for phishing.

In short, here is how it would work:

A victim is lured to a seemingly benign webpage. The page contains no visible malicious code, but once loaded, it sends carefully crafted prompts to a legitimate LLM API. The LLM returns JavaScript code - unique and different for every visitor - which is then assembled and executed directly in the browser.

As a result, the victim is presented with a fully functional, personalized phishing page, generated without any static payload being delivered over the network that defenders could intercept and analyze.

While the method is mostly a proof-of-concept today, it’s not purely hypothetical, either. Unit 42 did not say it observed such an attack in the wild, but hinted that the building blocks are being used.

LLMs are already being used to generate obfuscated JavaScript, albeit offline; runtime LLM use on compromised machines is already widespread; and LLM-assisted malware, ransomware, and cyber-espionage campaigns are growing in number every day.

Dynamically generated phishing pages are the future of scams, Unit 42 stressed, but added that detection is still possible through enhanced browser-based crawlers.

“Defenders should also restrict the use of unsanctioned LLM services at workplaces. While this is not a complete solution, it can serve as an important preventative measure,” they added.

“Finally, our work highlights the need for more robust safety guardrails in LLM platforms, as we demonstrated how careful prompt engineering can circumvent existing protections and enable malicious use.”

Source: TechRadar