The Catastrophic Consequences of Anthropic’s AI in Malevolent Hands

Anthropic’s most dangerous AI model just fell into the wrong hands

Reports from Bloomberg indicate that Anthropic’s Mythos AI model, designed for cybersecurity, has been breached by a small group of unauthorized users. This group, including a third-party contractor for Anthropic, managed to gain access to the powerful tool using a combination of the contractor’s privileges and common online investigation tools.

Anthropic’s Claude Mythos Preview, a versatile model capable of detecting and exploiting vulnerabilities in major operating systems and web browsers, has caught the attention of tech giants such as Nvidia, Google, Amazon, Apple, and Microsoft, as well as government agencies. Despite this interest, Anthropic has refrained from making the model publicly available, citing concerns about potential misuse.

In response to the unauthorized access to Claude Mythos Preview, an Anthropic spokesperson said the company is investigating the breach and emphasized that there is no evidence of impact on Anthropic’s systems beyond the third-party vendor’s environment.

The breach occurred on the same day Anthropic announced the limited release of Mythos for testing. The group responsible for the breach remains unidentified, with Bloomberg suggesting they are part of a Discord channel focused on uncovering unreleased AI models.

Using information obtained from a recent data breach, the group located and accessed Mythos through educated guesses about where it was hosted. Although they have used the model for demonstrations and shared screenshots with Bloomberg, they have avoided using it for actual cybersecurity tasks to evade detection by Anthropic. Bloomberg also reports that the same group has gained unauthorized access to other Anthropic AI models.
