TA547 Uses an LLM-Generated Dropper to Infect German Orgs

Researchers from Proofpoint recently observed a malicious campaign targeting dozens of organizations across various industries in Germany. One part of the attack chain stood out in particular: an otherwise ordinary malware dropper whose code had clearly been generated by artificial intelligence (AI).

What the researchers discovered: Initial access broker (IAB) TA547 is using the AI-generated dropper in phishing attacks.

Though it may be a harbinger of more to come, it’s no cause for panic. Defending against malware is the same no matter who or what writes it, and AI malware isn’t likely to take over the world just yet.

“For the next few years, I don’t see malware coming out of LLMs being more sophisticated than something a human is going to be able to write,” says Daniel Blackford, senior manager of threat research at Proofpoint. After all, AI aside, “We’ve got very talented software engineers who are adversarially working against us.”

TA547’s AI Dropper

TA547 has a long history of financially motivated cyberattacks. It came to prominence trafficking Trickbot, but has cycled through handfuls of other popular cybercrime tools, including Gozi/Ursnif, Lumma stealer, NetSupport RAT, StealC, ZLoader, and more.

“We’re seeing — not just with TA547, but with other groups as well — much faster iteration through development cycles, adoption of other malware, trying new techniques to see what will stick,” Blackford explains. And TA547’s latest evolution seems to have been with AI.

Its attacks began with brief impersonation emails, for example masquerading as the German retail company Metro AG. The emails carried password-protected ZIP archives containing compressed LNK files. When executed, the LNK files triggered a PowerShell script that dropped the Rhadamanthys infostealer.

Sounds simple enough, but the PowerShell script that dropped Rhadamanthys had one strange characteristic: above each and every component of the code sat a pound sign (#) followed by a hyper-specific comment describing what that component did.


As Proofpoint noted, this is characteristic of LLM-generated code, indicating that the group — or whoever originally wrote the dropper — used some sort of chatbot to write it.
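The tell here is stylistic rather than functional: a grammatically tidy comment above nearly every statement. As a rough illustration (this is not Proofpoint's detection method, and the sample script below is hypothetical), a defender could score how many of a script's non-blank lines are comments; an unusually high density is one weak signal worth correlating with others.

```python
def comment_density(script: str) -> float:
    """Return the fraction of non-blank lines that are '#' comments.

    LLM-generated scripts often carry a comment above nearly every
    statement, so a high density can be one weak heuristic signal.
    Human authors comment heavily too, so this is not a detector on
    its own. Works for any language using '#' comments, including
    PowerShell and Python.
    """
    lines = [ln.strip() for ln in script.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    comments = sum(1 for ln in lines if ln.startswith("#"))
    return comments / len(lines)


# A toy, harmless snippet in the style the researchers described:
# one hyper-specific comment above every single statement.
sample = """
# Download the payload from the remote server
$data = Invoke-WebRequest $url
# Decode the Base64-encoded content
$bytes = [Convert]::FromBase64String($data)
# Write the decoded bytes to a temporary file
Set-Content $path $bytes
"""

print(comment_density(sample))  # 0.5: half the lines are comments
```

In practice, such a score would be one feature among many (entropy, known-bad indicators, sandbox behavior) rather than a standalone verdict.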

Is Worse AI Malware to Come?

Like the rest of us, cyberattackers have been experimenting with how AI chatbots can help them achieve their goals more easily, expeditiously, and effectively.

Some have figured out little ways to use AI to enhance their day-to-day operations, for example by aiding research into targets and emerging vulnerabilities. But aside from proofs-of-concept and the odd novelty tool, there hasn’t been much evidence that hackers are writing useful malware with the help of AI.

That, Blackford says, is because humans are still far better than robots at writing malicious code. Plus, AI developers have taken steps to prevent the misuse of their software.

At least for now, he says, “the ways that these groups are going to leverage AI to scale up their operations is more of an interesting problem than the idea that they’re going to create some new super malware with it.”

And even once they do autogenerate super malware, the job of defending against it will remain the same. As Proofpoint concluded in its post, “In the same way LLM-generated phishing emails to conduct business email compromise (BEC) use the same characteristics of human-generated content and are caught by automated detections, malware or scripts that incorporate machine-generated code will still run the same way in a sandbox (or on a host), triggering the same automated defenses.”
