Securing AI Supply Chains: The Imperative for Visibility and Protection
Enhancing AI Security: The Urgent Need for Visibility and Governance
A significant shift is underway in enterprise applications: industry research projects that four out of every ten will incorporate task-specific AI agents this year. Yet despite this rapid adoption, a mere 6% of organizations have a robust AI security strategy in place, as highlighted in Stanford University’s 2025 AI Index Report.
The outlook for 2026 is no brighter: Palo Alto Networks foresees the first major lawsuits holding executives personally accountable for rogue AI actions. The escalating, unpredictable nature of AI threats is overwhelming many organizations, and the answer lies in effective governance, not simply bigger budgets or more headcount.
One of the key issues plaguing AI security is the visibility gap around how Large Language Models (LLMs) are used and modified within organizations. This lack of transparency is a significant threat in itself; one CISO likened the state of model Software Bills of Materials (SBOMs) to the “Wild West” of governance. Without clear visibility into which AI models are deployed and how they are used, incident response becomes a daunting task.
The Critical Role of SBOMs in AI Security
Recognizing these challenges, the U.S. government has moved to mandate SBOMs for all software acquisitions. The focus on AI models, however, remains inadequate, and the lack of progress in this area leaves organizations exposed, underscoring the urgent need for better governance and visibility.
A recent Harness survey of 500 security practitioners across multiple countries produced alarming findings: 62% of respondents could not track LLM usage within their organizations. Mitigating that risk demands greater transparency and rigor at the SBOM level to improve model traceability and data security, starting with something as simple as an audit trail around every LLM call, as sketched below.
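The Python sketch below shows one way such an audit trail might look. It is illustrative only: it assumes an in-house client object exposing a generate(prompt) method, and the model ID, caller name, and log path are placeholders rather than any specific vendor’s API.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class LLMCallRecord:
    timestamp: float
    model_id: str
    caller: str
    prompt_chars: int

class AuditedLLMClient:
    """Wraps any client exposing generate(prompt) and appends one JSON
    record per call, so usage can later be reconciled against an
    approved-model inventory."""

    def __init__(self, client, model_id: str, caller: str,
                 log_path: str = "llm_audit.jsonl"):
        self._client = client          # any object with a generate(prompt) method
        self._model_id = model_id
        self._caller = caller
        self._log_path = log_path

    def generate(self, prompt: str) -> str:
        # Record metadata only, never the prompt text itself.
        record = LLMCallRecord(time.time(), self._model_id,
                               self._caller, len(prompt))
        with open(self._log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(record)) + "\n")
        return self._client.generate(prompt)

# Usage (hypothetical names):
# client = AuditedLLMClient(raw_client, model_id="internal-summarizer",
#                           caller="data-team")
```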
Despite significant investment in cybersecurity software, organizations continue to face a litany of risks, including prompt injection, vulnerable LLM code, and jailbreaking. Adversaries use these techniques to exploit loopholes in AI models and exfiltrate sensitive data, and the difficulty of detecting and responding to such attacks exposes the shortcomings of legacy perimeter security.
IBM’s 2025 Cost of a Data Breach Report found that 97% of organizations that suffered breaches of AI models or applications lacked proper AI access controls. Shadow AI incidents, involving unauthorized AI usage, accounted for a significant share of breaches and cost organizations substantially more than traditional intrusions. Poor visibility into model deployment further complicates incident response, underscoring the need for comprehensive security measures.
Challenges in AI Model Governance
Existing standards and frameworks already provide guidance on AI security, but adoption continues to lag. CycloneDX 1.6 and SPDX 3.0 both support ML-BOMs focused on supply chain provenance, as sketched below; the slow uptake of these measures leaves AI models and LLMs at significant risk.
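For concreteness, here is roughly what a minimal CycloneDX 1.6 ML-BOM looks like, assembled in Python for illustration. The model name, version, and digest are placeholder assumptions; a production ML-BOM would carry much richer provenance, such as supplier, training data references, and model card details.

```python
import json

# Illustrative CycloneDX 1.6 ML-BOM with a single model component.
# Name, version, and digest are placeholders, not a real inventory entry.
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "name": "internal-summarizer",   # hypothetical model name
            "version": "2.3.0",
            "hashes": [
                {"alg": "SHA-256", "content": "0" * 64},  # placeholder digest
            ],
        }
    ],
}

print(json.dumps(ml_bom, indent=2))
```

Emitting the ML-BOM from the same pipeline that registers the model helps keep the inventory and the artifact from drifting apart.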
A June 2025 survey found that nearly half of security professionals admitted their organizations were falling behind on SBOM requirements, with ML-BOM adoption lower still. The tooling and frameworks provide a solid foundation for enhancing AI security; the critical missing ingredient is operational urgency.
Empowering AI Supply Chain Visibility
Addressing the evolving threat landscape requires a proactive approach to AI supply chain security. Organizations should build a comprehensive model inventory, define processes to keep it accurate, and actively manage shadow AI usage. Human approval for production models, adoption of the SafeTensors format (illustrated below), and piloting ML-BOMs for high-risk models are essential next steps.
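To illustrate the SafeTensors recommendation, the sketch below loads weights only from the SafeTensors format and refuses pickle-based checkpoints, which can execute arbitrary code during deserialization. It assumes the open-source safetensors and torch packages are installed; the file name is a hypothetical placeholder.

```python
from safetensors.torch import load_file

def load_weights(path: str) -> dict:
    """Load model weights, refusing pickle-based formats (.pt/.bin),
    which can execute arbitrary code when deserialized."""
    if not path.endswith(".safetensors"):
        raise ValueError(f"refusing non-SafeTensors checkpoint: {path}")
    return load_file(path)  # dict mapping tensor names to tensors

# weights = load_weights("model.safetensors")  # hypothetical local file
```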
Furthermore, organizations must treat every model deployment as a supply chain decision, with rigorous validation, such as verifying artifact digests pinned at approval time (sketched below), and compliance with vendor contracts. The coming year is expected to be a turning point for AI SBOMs, as regulatory requirements tighten and the consequences of inadequate security grow more severe.
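One way to make “every deployment is a supply chain decision” concrete is to pin a cryptographic digest for each approved model artifact, for example in its ML-BOM entry, and verify it at deploy time. A minimal sketch using only the Python standard library:

```python
import hashlib

def verify_model_artifact(path: str, expected_sha256: str) -> None:
    """Compare a model file's SHA-256 digest against the value pinned
    at approval time (e.g., in its ML-BOM entry) before deployment."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    if digest.hexdigest() != expected_sha256:
        raise RuntimeError(f"digest mismatch for {path}; refusing to deploy")

# Usage (placeholder values):
# verify_model_artifact("model.safetensors", "<pinned sha256 hex digest>")
```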
As the attack surface continues to expand, organizations must remain vigilant and proactive in safeguarding their AI models and LLMs. By embracing visibility and governance measures now, they can scale AI securely and navigate the evolving threat landscape.