Unveiling the Vulnerabilities: Microsoft Copilot Fails to Enforce Sensitivity Labels for the Second Time, Evading DLP Detection
For roughly a month beginning on January 21, Microsoft Copilot read and summarized confidential emails that sensitivity labels and DLP policies should have kept off limits. The breach affected organizations including the U.K.’s National Health Service, was logged as incident INC46740412, and was tracked internally by Microsoft as CW1226324. BleepingComputer uncovered the flaw on February 18, making this the second time in less than a year that Copilot violated its own trust boundaries.
It was not the first such failure. In June 2025, Microsoft patched EchoLeak (CVE-2025-32711), a critical zero-click vulnerability that let a single malicious email bypass Copilot’s security measures and exfiltrate enterprise data without any user interaction. Taken together, the two incidents point to a fundamental flaw in Copilot’s design: the assistant can end up accessing data it is supposed to be barred from.
The root causes differed: the recent breach stemmed from a code error, while EchoLeak relied on a sophisticated exploit chain. In both cases, Copilot processed data it should never have had access to. Conventional security tooling, such as endpoint detection and response (EDR) and web application firewalls (WAFs), was blind to both breaches because it was never designed to detect policy violations inside the retrieval pipeline, where Copilot fetches content before generating an answer.
To close these security gaps, organizations handling sensitive or regulated data are advised to:

- regularly test that Copilot is actually honoring sensitivity labels, rather than assuming it does;
- block external content from reaching Copilot;
- audit logs for anomalous Copilot interactions (see the sketch after this list);
- enable Restricted Content Discovery for sensitive data;
- create incident response playbooks that cover vendor-hosted inference failures.
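For the log-auditing step, the sketch below shows one way to pull recent Copilot interaction events from the Microsoft 365 unified audit log via the Office 365 Management Activity API. It is a minimal sketch, not vendor guidance: it assumes an Azure AD app registration with ActivityFeed.Read permission and an active Audit.General subscription, the tenant and app credentials are placeholders, and the "CopilotInteraction" operation-name match is an assumption you should verify against your own tenant's audit schema.

```python
"""Minimal sketch: list recent Copilot interaction events from the
Microsoft 365 unified audit log via the Office 365 Management Activity API.
Assumes an Azure AD app with ActivityFeed.Read permission and an active
Audit.General subscription; TENANT_ID / CLIENT_ID / CLIENT_SECRET are
placeholders you must supply."""

from datetime import datetime, timedelta, timezone

import requests

TENANT_ID = "<tenant-guid>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-client-secret>"

API_ROOT = f"https://manage.office.com/api/v1.0/{TENANT_ID}/activity/feed"


def get_token() -> str:
    """Client-credentials flow against Azure AD for the Management API."""
    resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": "https://manage.office.com/.default",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def copilot_events(hours_back: int = 24):
    """Yield audit events from the last `hours_back` hours that look like
    Copilot interactions. The exact operation name is an assumption --
    confirm it against your tenant's audit schema."""
    headers = {"Authorization": f"Bearer {get_token()}"}
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours_back)
    listing = requests.get(
        f"{API_ROOT}/subscriptions/content",
        params={
            "contentType": "Audit.General",
            "startTime": start.strftime("%Y-%m-%dT%H:%M:%S"),
            "endTime": end.strftime("%Y-%m-%dT%H:%M:%S"),
        },
        headers=headers,
        timeout=30,
    )
    listing.raise_for_status()
    for blob in listing.json():
        events = requests.get(blob["contentUri"], headers=headers, timeout=30).json()
        for event in events:
            # Match loosely on "copilot" in case the schema names differ.
            if "copilot" in str(event.get("Operation", "")).lower():
                yield event


if __name__ == "__main__":
    for ev in copilot_events():
        print(ev.get("CreationTime"), ev.get("UserId"), ev.get("Operation"))
```

In practice, a script like this would run on a schedule and feed a SIEM, where baselines can flag anomalies such as a user whose Copilot queries suddenly return labeled content they have never touched before.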
The Copilot incidents are a cautionary tale for any organization deploying AI assistants: in one survey, 47% of CISOs reported observing unintended or unauthorized behavior from AI agents. Nor is the structural risk unique to Copilot. Any RAG-based assistant that retrieves content on a user’s behalf must enforce access controls and sensitivity labels at retrieval time, which demands deliberate security measures and governance around AI deployments.
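To make that structural point concrete, here is a brief hypothetical sketch of where label enforcement belongs in a RAG assistant: inside the retrieval step, before anything enters the model’s context window. This is not Copilot’s actual code; the `Doc` type, label hierarchy, and `retrieve` function are all illustrative.

```python
"""Illustrative sketch (not Copilot's real implementation): enforcing
sensitivity labels inside the retrieval step of a RAG pipeline, so the
model never sees documents the requesting user is not cleared for."""

from dataclasses import dataclass

# Hypothetical ordered label hierarchy, lowest to highest sensitivity.
LABELS = ["Public", "General", "Confidential", "Highly Confidential"]
RANK = {label: i for i, label in enumerate(LABELS)}


@dataclass
class Doc:
    doc_id: str
    text: str
    label: str  # sensitivity label stamped on the document


def retrieve(query: str, corpus: list[Doc], user_clearance: str) -> list[Doc]:
    """Return candidate documents, filtering by label *before* anything
    reaches the model's context. Filtering after generation, or relying
    on the UI layer, is exactly the gap the Copilot incidents exposed:
    once restricted text enters the prompt, downstream DLP tooling never
    sees the violation."""
    max_rank = RANK[user_clearance]
    candidates = [d for d in corpus if query.lower() in d.text.lower()]
    # Unknown labels rank above everything, so the filter fails closed.
    allowed = [d for d in candidates if RANK.get(d.label, len(LABELS)) <= max_rank]
    for d in candidates:
        if d not in allowed:
            # Log blocked retrievals -- silent drops hide policy failures.
            print(f"audit: blocked {d.doc_id} ({d.label}) for clearance {user_clearance}")
    return allowed


if __name__ == "__main__":
    corpus = [
        Doc("mail-1", "Q3 budget summary", "Confidential"),
        Doc("mail-2", "Team lunch budget poll", "General"),
    ]
    for doc in retrieve("budget", corpus, user_clearance="General"):
        print("context gets:", doc.doc_id)
```

The key design choice is failing closed: documents with unknown or higher-ranked labels are excluded and logged, so a labeling gap surfaces as an audit entry rather than a silent disclosure.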
In conclusion, organizations must treat AI assistants as part of their attack surface, prioritizing security controls and testing before breaches like those seen with Copilot occur rather than after. By implementing the measures above and staying vigilant, organizations can meaningfully reduce the risks that come with AI assistants touching sensitive data.