
Rogue AI: When Perimeter Defense Tools Become the Threat


The end of perimeter defense: When your own AI tools become the threat actor

Russia’s APT28 Deploying LLM-Powered Malware Against Ukraine

Russia’s Advanced Persistent Threat 28 (APT28) is actively deploying LLM-powered malware against Ukraine. Ukraine’s CERT-UA has documented the first confirmed deployment of such malware, dubbed LAMEHUG. Attributed to APT28, LAMEHUG uses stolen Hugging Face API tokens to query hosted AI models for attack commands in real time, while distracting victims with decoy content.

Cato Networks researcher Vitaly Simonovich highlighted APT28’s use of this technique to probe Ukrainian cyber defenses, and emphasized that the threats Ukraine faces closely resemble those confronting enterprises worldwide.

Simonovich demonstrated how an enterprise AI tool can be transformed into a malware development platform in under six hours. Working with popular AI models such as OpenAI’s GPT-4o and Microsoft Copilot, he showed how these tools can be coaxed into producing functional password stealers, bypassing their built-in safety controls.

The Rise of AI-Powered Malware

The convergence of nation-state actors employing AI-powered malware and the vulnerability of enterprise AI tools has become a growing concern. The 2025 Cato CTRL Threat Report highlights the rapid adoption of AI across enterprises, with significant increases in the use of various AI platforms such as Claude, Perplexity, Gemini, ChatGPT, and Copilot.

APT28’s LAMEHUG: The New Face of AI Warfare

LAMEHUG, attributed to APT28, is delivered via phishing emails impersonating Ukrainian officials. Once executed, the malware connects to the Hugging Face API and queries AI models to generate commands for reconnaissance and data theft, while displaying legitimate-looking decoy documents and AI-generated images to the victim.
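The mechanism here is ordinary API usage: anyone holding a valid token can query a hosted model over HTTPS, which is why stolen tokens are all the malware needs. A minimal sketch of what such an authenticated Hugging Face Inference API request looks like (the model name and the `build_inference_request` helper are illustrative, not taken from LAMEHUG itself):

```python
import json
import urllib.request

# Public endpoint for Hugging Face's hosted inference service.
API_BASE = "https://api-inference.huggingface.co/models/"

def build_inference_request(token: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an authenticated text-generation request for a hosted model.

    Any valid API token is accepted, so a stolen token gives the caller
    the same on-demand model access as its legitimate owner.
    """
    payload = json.dumps({"inputs": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_BASE + model,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",  # a stolen token slots in here
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the request (commented out; requires a real token and network access):
# req = build_inference_request("hf_xxx", "Qwen/Qwen2.5-Coder-32B-Instruct", "...")
# urllib.request.urlopen(req)  # response body contains the generated text
```

Because this traffic is indistinguishable from a developer’s legitimate API calls at the protocol level, network defenses cannot rely on the request format alone to spot abuse.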

Simonovich emphasized that Ukraine serves as a testing ground for cyber weapons, with LAMEHUG being a significant example of AI-powered attacks in the wild.


The Immersive World Technique: A Six-Hour Path to Malware Development

Simonovich’s demonstration revealed how easily consumer AI tools can be turned into malware factories using the Immersive World technique, in which malicious requests are framed as part of a fictional narrative. By guiding the AI through this narrative engineering, he produced a working Chrome password stealer in just six hours, exposing the lack of robust safety controls in current LLMs.

The Malware-as-a-Service Economy

Underground platforms like Xanthrox AI offer unrestricted AI capabilities for a monthly fee, enabling users to access AI tools without safety controls. Simonovich’s research also uncovered Nytheon AI, which provides uncensored AI models optimized for malicious activities.

Enterprise AI Adoption and Security Challenges

The rapid adoption of AI across industries has expanded the attack surface, leaving security leaders to contend with new AI-powered threats. Simonovich noted that when he disclosed these vulnerabilities, the responses from major AI companies were inconsistent, pointing to a broader lack of security readiness.
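One concrete control this expanding attack surface suggests (a hypothetical sketch, not a recommendation from the report): inventory which hosts on the network talk to hosted AI inference endpoints, so that a workstation suddenly calling the Hugging Face API, as LAMEHUG does, stands out against an allowlist of approved sources. The log format and helper below are assumptions for illustration:

```python
from collections import Counter

# Hostnames of common hosted-inference endpoints; extend for your environment.
AI_API_HOSTS = {
    "api-inference.huggingface.co",
    "api.openai.com",
    "generativelanguage.googleapis.com",
}

def flag_ai_egress(log_lines, allowlist=frozenset()):
    """Count AI-API connections per source host from simple 'src dest'
    proxy-log lines, flagging sources not on the approved allowlist."""
    hits = Counter()
    for line in log_lines:
        src, dest = line.split()[:2]
        if dest in AI_API_HOSTS and src not in allowlist:
            hits[src] += 1
    return hits

# Example: the build server is approved; the HR laptop is not.
logs = [
    "build-server api.openai.com",
    "hr-laptop api-inference.huggingface.co",
    "hr-laptop api-inference.huggingface.co",
]
print(flag_ai_egress(logs, allowlist={"build-server"}))
# Counter({'hr-laptop': 2})
```

This catches only the lazy case of direct, unproxied API calls, but it turns "AI tool usage" from an invisible channel into an auditable one.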

The Accessibility of Nation-State Attacks

The deployment of APT28’s LAMEHUG against Ukraine underscores how accessible nation-state-grade attacks have become. Simonovich’s research demonstrates that, with creativity and minimal investment, attackers can mount sophisticated campaigns using AI tools originally deployed for productivity.

Enterprises must recognize the dual-use nature of AI tools and account for the security risks their deployment introduces.
