TeamPCP Hackers Threaten to Sell Mistral AI Source Code Repositories
The TeamPCP hacker group is threatening to leak source code from Mistral AI unless a buyer comes forward.
In a post on a hacker forum, the threat actors are demanding $25,000 for a collection of nearly 450 repositories.
Mistral AI, a French artificial intelligence company founded by former researchers from Google DeepMind and Meta, is known for its open-weight large language models (LLMs) as well as proprietary models.
Mistral AI confirmed the breach to BleepingComputer, saying the hackers gained access to a codebase management system following the Mini Shai-Hulud software supply-chain attack.
The breach began with the compromise of official packages from TanStack and Mistral AI through stolen CI/CD credentials and legitimate workflows.
It then spread to numerous other software projects on the npm and PyPI registries, including UiPath, Guardrails AI, and OpenSearch.
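Attacks of this kind abuse npm lifecycle hooks such as `postinstall`, which run arbitrary code with access to whatever environment the install runs in, including CI/CD secrets. The sketch below is purely illustrative (the patterns and function name are hypothetical, not the actual Shai-Hulud payload) and only shows why such hooks can reach credential material:

```javascript
// Illustrative sketch only: why an npm lifecycle script ("postinstall") is
// dangerous in CI. These name patterns are hypothetical, not the real payload.
const SECRET_PATTERNS = [/TOKEN/i, /SECRET/i, /_KEY$/i, /PASSWORD/i];

// Collect environment variable names that look like credentials, the kind of
// material (npm tokens, cloud keys) an install-time script can read.
function findSecretLikeEnvVars(env) {
  return Object.keys(env).filter((name) =>
    SECRET_PATTERNS.some((re) => re.test(name))
  );
}

// A real worm would exfiltrate the values; here we only list the names
// to show what is exposed to any package's install scripts.
const hits = findSecretLikeEnvVars(process.env);
console.log(`credential-like env vars visible to install scripts: ${hits.join(", ")}`);
```

Running installs with `npm install --ignore-scripts` in CI is one common way to blunt this class of attack.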
Addressing the situation, Mistral AI stated, “They [the hackers] contaminated some of our SDK packages for a brief period.”
TeamPCP claims to have obtained nearly 5 gigabytes of “internal repositories and source code” utilized by Mistral for training, fine-tuning, benchmarking, model delivery, and inference in experiments and future projects.
The hackers declared, “We are looking for $25k BIN or they can pay this and we will shred these permanently, only selling to the best offer and limited to one person, if we cannot find a buyer within a week we will leak all of these for free to the forums.”
The threat actor noted that the asking price is negotiable and invited interested parties to submit what they consider a fair offer for the 450 repositories on sale.
In an advisory, Mistral AI disclosed that the breach occurred after a developer device was compromised in the TanStack supply-chain attack, which allowed TeamPCP to contaminate some of the company’s software development kit (SDK) packages.
However, Mistral said its forensic investigation determined that the affected data did not include its core code repositories.
Mistral reassured, “Neither our hosted services, managed user data, nor any of our research and testing environments were compromised.”
Additionally, OpenAI confirmed that the TanStack supply-chain attack affected the systems of two of its employees who had access to “a limited subset of internal source code repositories.”
A small set of credentials was stolen from the repositories, but there is no evidence they were used in further attacks.
In response, OpenAI rotated the exposed code-signing certificates and advised macOS users to update their OpenAI desktop apps before June 12 to avoid potential issues.
TeamPCP hackers offering to sell Mistral AI data (source: KELA)