AI-Generated Malware Emerges in Targeted Cyber Attacks

Lore Apostol


  • AI-written malicious code was observed being used by threat actors in targeted attacks.
  • These were discovered as part of a RAT-delivering email campaign active in France.
  • This approach could lower the barrier to entry, enabling less skilled cybercriminals to mount effective attacks.

Cybercriminals are advancing their tactics by deploying AI-generated malware in targeted attacks, notably against users in France. Recent reports reveal that generative artificial intelligence (AI) technology, while beneficial, is being harnessed by malicious actors to create sophisticated and convincing threats.

Researchers discovered that generative AI services were likely used to craft the malicious code in an email campaign delivering the publicly available remote access trojan AsyncRAT. The campaign utilized HTML smuggling to distribute a password-protected ZIP archive containing harmful scripts.

Evidence pointing to AI-generated code includes well-structured scripts, comprehensive comments explaining each line, and function and variable names written in the attackers' native language. These traits are rare in human-written malware, whose authors typically try to obscure how their code works.
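As a rough illustration of those markers (not code from the campaign itself, which the researchers describe only as malicious scripts), a harmless, hypothetical snippet written in this style might look like the following: a comment on every line and French-language identifiers, here simply listing the files in a folder.

    # Hypothetical, benign example for illustration only: it lists files in a folder and performs no malicious action.
    def creer_liste_fichiers(dossier):
        # Importe le module standard pour parcourir le système de fichiers
        import os
        # Initialise une liste vide pour stocker les noms de fichiers
        liste_fichiers = []
        # Parcourt chaque entrée du dossier indiqué
        for nom_fichier in os.listdir(dossier):
            # Ajoute le nom du fichier à la liste
            liste_fichiers.append(nom_fichier)
        # Retourne la liste complète au code appelant
        return liste_fichiers

The French comments simply describe each step (import the file-system module, initialize a list, loop over the folder, append each name, return the list); it is exactly this verbose, line-by-line annotation in the attackers' native language that researchers flagged as unusual for hand-written malware.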

AI Created Malware (Image Source: HP)

The AsyncRAT malware enables remote monitoring and control, logging keystrokes and providing encrypted connections to compromised machines. This allows attackers to deploy additional payloads, increasing the threat level.

According to HP Wolf Security's Q2 2024 'Threat Insights' report, there is a notable rise in less technically skilled cybercriminals using AI to develop malware. This trend is accelerating the production of malware customized for different regions and platforms (Linux, macOS).

Archives were the most popular malware delivery method in the first half of the year, highlighting a shift in distribution tactics. The use of AI to craft malware signals an escalation in cyber threats, potentially lowering the barrier to entry for less skilled threat actors.

Scammers are also using generative AI tools, which let them write grammatically correct lures in many languages when sending malware-laden messages. One example is a phishing campaign abusing the travel website Booking.com, in which crooks sent messages to hosts and even created fake property listings.


