
Unveiling the Vulnerabilities: Microsoft Copilot Fails to Enforce Sensitivity Labels Twice, Evading DLP Detection


Microsoft Copilot ignored sensitivity labels twice in eight months — and no DLP stack caught either one

For roughly a month beginning January 21, Microsoft Copilot read and summarized confidential emails it was explicitly barred from touching by sensitivity labels and DLP policies. The failure affected organizations including the U.K.’s National Health Service, was logged as incident INC46740412, and was tracked internally by Microsoft as CW1226324. BleepingComputer surfaced the issue on February 18, making it the second time in under a year that Copilot breached its own trust boundaries.

Microsoft had already patched a critical zero-click vulnerability in June 2025: EchoLeak (CVE-2025-32711), which let a single malicious email bypass Copilot’s safeguards and exfiltrate enterprise data without any user interaction. Taken together with the latest breach, it points to a fundamental flaw in Copilot’s design that allows it to reach restricted data.

The root causes differed: the recent label bypass traced back to a code error, while EchoLeak relied on a sophisticated exploit chain, but both ended with Copilot processing data it should never have been able to access. Security tooling such as EDR and WAF was blind to both breaches because those controls were never designed to detect violations inside the retrieval pipeline.
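To make that failure mode concrete, the Python sketch below shows where such a check conceptually lives in a retrieval-augmented pipeline: a gate that drops labeled or DLP-blocked documents before the model ever sees them. The class, labels, and fields here are illustrative assumptions, not Copilot’s actual implementation; the point is that when this gate misfires, nothing downstream (EDR, WAF, network DLP) has anything left to detect.

```python
from dataclasses import dataclass


@dataclass
class Document:
    """A retrieved item with its sensitivity label and DLP verdict (illustrative fields)."""
    doc_id: str
    content: str
    sensitivity_label: str = "General"   # e.g. "General", "Confidential", "Highly Confidential"
    dlp_blocked: bool = False            # True if a DLP policy forbids AI processing


# Labels the assistant is allowed to summarize; anything stricter is filtered out.
ALLOWED_LABELS = {"General", "Public"}


def filter_for_assistant(retrieved: list[Document]) -> list[Document]:
    """Drop documents the assistant must not see, *before* they reach the model.

    Both Copilot incidents involved restricted content slipping past this stage,
    which is why downstream tooling never registered a violation.
    """
    permitted = []
    for doc in retrieved:
        if doc.dlp_blocked or doc.sensitivity_label not in ALLOWED_LABELS:
            # Record the refusal so audits can prove the control actually fired.
            print(f"blocked {doc.doc_id}: label={doc.sensitivity_label}, dlp_blocked={doc.dlp_blocked}")
            continue
        permitted.append(doc)
    return permitted


if __name__ == "__main__":
    docs = [
        Document("mail-001", "Quarterly roadmap", sensitivity_label="General"),
        Document("mail-002", "Patient records export", sensitivity_label="Highly Confidential"),
        Document("mail-003", "Board minutes", sensitivity_label="Confidential", dlp_blocked=True),
    ]
    print([d.doc_id for d in filter_for_assistant(docs)])  # -> ['mail-001']
```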

To address these security gaps, organizations are advised to conduct regular tests to ensure Copilot is honoring sensitivity labels, block external content from reaching Copilot, audit logs for anomalous interactions, enable Restricted Content Discovery for sensitive data, and create incident response playbooks for vendor-hosted inference failures. These measures are crucial for organizations handling sensitive or regulated data to prevent future breaches.
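For the auditing step in particular, a recurring job that sweeps exported Copilot interaction records for restricted labels is one way to catch a regression like CW1226324 before a vendor advisory does. The sketch below assumes a line-delimited JSON export of audit records; the field names (Operation, AccessedResources, SensitivityLabel) are placeholders to map onto whatever schema your audit tooling actually emits, not a documented API.

```python
import json
from pathlib import Path

# Labels that should never appear among Copilot-accessed resources.
RESTRICTED_LABELS = {"Confidential", "Highly Confidential"}


def find_label_violations(audit_export: Path) -> list[dict]:
    """Scan an exported audit log (one JSON record per line) for Copilot
    interactions that touched restricted content.

    The record shape is an assumption for illustration: each line is a JSON
    object with an "Operation" field and an "AccessedResources" list whose
    items carry an "Id" and a "SensitivityLabel".
    """
    violations = []
    with audit_export.open() as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("Operation") != "CopilotInteraction":
                continue
            for resource in record.get("AccessedResources", []):
                if resource.get("SensitivityLabel") in RESTRICTED_LABELS:
                    violations.append({
                        "user": record.get("UserId"),
                        "time": record.get("CreationTime"),
                        "resource": resource.get("Id"),
                        "label": resource.get("SensitivityLabel"),
                    })
    return violations


if __name__ == "__main__":
    for hit in find_label_violations(Path("copilot_audit_export.jsonl")):
        print(hit)
```

Running a sweep like this on a schedule, and alerting on any hit, turns the "audit logs for anomalous interactions" recommendation into a control with a measurable output rather than a periodic manual review.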

The Copilot incidents serve as a cautionary tale for organizations deploying AI assistants: 47% of CISOs report observing unintended or unauthorized behavior from AI agents. The structural risks in Copilot’s design apply to other RAG-based assistants as well, underscoring the need for rigorous security controls and governance around AI deployments.


In conclusion, organizations must prioritize security controls and testing to prevent breaches like the ones Copilot experienced. By implementing the recommended measures and staying vigilant, organizations can mitigate the risks that come with AI assistants touching sensitive data.
