Fake OpenAI Repository on Hugging Face Distributes Infostealer Malware
A deceptive repository on the Hugging Face platform, masquerading as OpenAI’s “Privacy Filter” project, was found to be distributing information-stealing malware to Windows users.
The repository climbed to the top spot on Hugging Face's trending list and amassed roughly 244,000 downloads before it was removed following reports of malicious activity.
Hugging Face serves as a platform for developers and researchers to share AI models, datasets, and machine learning tools. These models, which are pre-trained AI systems, include weight files, configurations, and code.
Researchers at HiddenLayer, a company specializing in safeguarding AI and ML models against attacks, uncovered this malicious campaign on May 7. They identified a fraudulent repository named Open-OSS/privacy-filter.
The researchers found that the repository had mimicked OpenAI’s authentic Privacy Filter release, replicating its model card almost identically. It also included a loader.py file that fetched and executed infostealer malware on Windows devices.
Instructions from the malicious repository (Source: HiddenLayer)
The loader.py Python script contained fake AI-related code to camouflage its true purpose. In reality, it disabled SSL verification, decoded a base64-encoded URL pointing to an external resource, and fetched a JSON payload that executed a PowerShell command.
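The combination of behaviors described above, disabled SSL verification, a base64-obfuscated URL, and a payload handed to PowerShell, can be flagged with simple static heuristics when vetting scripts bundled with downloaded models. The sketch below is an illustrative example only, not HiddenLayer's tooling; the pattern list is an assumption, not a published signature set.

```python
import re

# Heuristic markers commonly seen in malicious loader scripts
# (illustrative list -- not an exhaustive or official signature set).
SUSPICIOUS_PATTERNS = [
    r"verify\s*=\s*False",           # disabled SSL certificate checks
    r"base64\.b64decode",            # decoding an obfuscated URL or payload
    r"powershell(\.exe)?",           # handing a payload to PowerShell
    r"subprocess\.(run|Popen|call)", # spawning external processes
]

def scan_source(source: str) -> list[str]:
    """Return the suspicious patterns found in a script's source text."""
    return [p for p in SUSPICIOUS_PATTERNS
            if re.search(p, source, re.IGNORECASE)]

# Example: a loader-like snippet (inert text, never executed) trips
# all four heuristics at once.
sample = (
    "import requests, base64, subprocess\n"
    "url = base64.b64decode(blob).decode()\n"
    "r = requests.get(url, verify=False)\n"
    "subprocess.run(['powershell', '-enc', r.json()['cmd']])\n"
)
print(scan_source(sample))
```

A benign script would typically match none of these patterns, while a loader of the kind described here matches several simultaneously, which is what makes the combination, rather than any single marker, the useful signal.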
This PowerShell command ran discreetly, downloading a batch file (start.bat) responsible for privilege escalation, fetching the final payload (sefirah), adding it to Microsoft Defender’s exclusions list, and executing it.
The final payload was an infostealer written in Rust, designed to harvest sensitive data including browser information, Discord tokens, cryptocurrency wallets, SSH credentials, and system files. The stolen data was then compressed and sent to a command-and-control server at recargapopular[.]com.
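Defenders can screen DNS or proxy logs for contact with the command-and-control domain named in the report. The snippet below is a minimal sketch of that indicator check; the `refang` helper and the sample host list are illustrative assumptions, and the only real indicator is the defanged domain from the report.

```python
# Known indicator from the report, kept in defanged form so the
# script itself never contains a live malicious domain.
DEFANGED_IOCS = {"recargapopular[.]com"}

def refang(indicator: str) -> str:
    """Convert a defanged indicator ('[.]') back to its real form."""
    return indicator.replace("[.]", ".")

def match_iocs(observed_hosts: list[str]) -> list[str]:
    """Return observed hosts that match a known C2 indicator."""
    iocs = {refang(i) for i in DEFANGED_IOCS}
    return [h for h in observed_hosts if h.lower() in iocs]

# Example: screening hosts pulled from DNS logs (hypothetical data).
hosts = ["huggingface.co", "recargapopular.com", "example.org"]
print(match_iocs(hosts))
```

Keeping indicators defanged in source and refanging them only at comparison time is a common convention in threat-intel tooling, since it prevents the script itself from being flagged or accidentally resolved.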
HiddenLayer highlighted the malware’s sophisticated anti-analysis features, which actively evade virtual machines, sandboxes, debuggers, and other analysis tools.
The exact number of victims impacted by this incident remains uncertain. The researchers noted that the majority of the 667 accounts that liked the malicious repository on Hugging Face appeared to be automated. Furthermore, the high download count of 244,000 may have been artificially inflated.
Further investigation by the researchers revealed other repositories utilizing the same malicious loader infrastructure. They also observed similarities with an npm typosquatting campaign distributing the WinOS 4.0 implant.
Users who downloaded files from the deceptive repository are urged to reimage their machines, update all stored credentials, replace cryptocurrency wallets and seed phrases, and invalidate browser sessions and tokens.
It’s worth noting that threat actors have previously exploited Hugging Face to host malicious models, despite the platform’s security measures.