Architectural Boundaries: Exploring the Blast Radius of Co-locating AI Agents and Untrusted Code

AI agent credentials live in the same box as untrusted code. Two new architectures show where the blast radius actually stops.

RSAC 2026 Keynotes Highlight Zero Trust for AI Agents

During RSAC 2026, keynotes from Microsoft, Cisco, CrowdStrike, and Splunk converged on a common theme: AI agents demand a zero trust approach. That four companies reached the same conclusion independently underscores how urgent extending zero trust principles to AI has become.

Vasu Jakkal of Microsoft emphasized that zero trust must extend to AI, while Jeetu Patel of Cisco called for a shift from access control to action control. CrowdStrike’s George Kurtz identified AI governance as a significant gap in enterprise technology, and Splunk’s John Morgan advocated an agentic trust and governance model. Different companies, different stages, same diagnosis.

Matt Caulfield, VP of Product for Identity and Duo at Cisco, echoed these sentiments in an interview at RSAC, emphasizing the importance of continuously verifying and scrutinizing every action taken by AI agents. Caulfield stressed that authenticating once is not enough, as agents must be monitored closely to prevent rogue behavior.

The Current State of AI Agent Security

According to PwC’s 2025 AI Agent Survey, 79% of organizations are already using AI agents, yet only 14.4% have full security approval for their agent fleet. The Gravitee State of AI Agent Security 2026 report revealed that AI governance policies are lacking in 74% of organizations surveyed.

A survey presented at RSAC by the Cloud Security Alliance found that only 26% of organizations have AI governance policies in place. The resulting gap between deployment velocity and security readiness has been described as a governance emergency by the CSA’s Agentic Trust Framework.


At RSAC, cybersecurity leaders and industry executives acknowledged the urgent need for better security measures for AI agents. Yet only two companies, Anthropic and Nvidia, have shipped architectures that address the problem, a measure of how exposed the current state of AI agent security leaves most deployments.

The Monolithic Agent Problem

The default enterprise agent pattern is a monolithic container in which every component trusts every other: the model loop, the tool runtime, and the credential store all share one trust boundary. A single prompt injection can therefore escalate into a complete compromise of the container and every service it connects to.
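To make the blast radius concrete, here is a minimal sketch of the monolithic pattern. The tool names and secrets are hypothetical, not taken from any real framework; the point is only that untrusted tool execution and credentials share one process.

```python
# Illustrative sketch (hypothetical names, not a real agent framework):
# in the monolithic pattern the agent loop, the tool runtime, and the
# credentials all live in one process, so they implicitly trust each other.
import os

# Secrets injected into the container environment at startup.
os.environ["DB_TOKEN"] = "secret-db-token"

def run_tool(tool_call: str) -> str:
    """Execute a model-requested tool inside the same process that
    holds every credential."""
    if tool_call.startswith("read_env:"):
        # Any tool path can reach the environment; nothing stops it.
        return os.environ.get(tool_call.split(":", 1)[1], "")
    return "unknown tool"

# An indirect prompt injection only has to steer the model into emitting
# a tool call like this one to exfiltrate the container's secrets:
leaked = run_tool("read_env:DB_TOKEN")
# The blast radius of one injected instruction is the whole container.
```

Nothing in this design distinguishes an attacker-steered tool call from a legitimate one, which is exactly the property the architectures below try to break.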

CrowdStrike’s Elia Zaitsev compared securing agents to securing highly privileged users, emphasizing the need for a defense-in-depth strategy. CrowdStrike CEO George Kurtz highlighted the ClawHavoc campaign targeting the OpenClaw agentic framework, underscoring the real-world security threats faced by AI agents.

Innovative Approaches to AI Agent Security

Anthropic’s Managed Agents architecture, now in public beta, splits the agent into three mutually distrusting components: a brain, hands, and a session. Because credentials never enter the execution container, a compromise there cannot expose them, and the separation also improves performance and session durability.
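The separation idea can be sketched as a credential broker that sits outside the execution sandbox. The class names and policy check below are hypothetical illustrations of the pattern, not Anthropic’s actual API.

```python
# Illustrative sketch of credential separation (hypothetical classes):
# the "hands" sandbox never holds secrets; it sends narrow requests to a
# broker that attaches credentials out-of-band and enforces policy.

class CredentialBroker:
    """Runs outside the sandbox; the only component that sees secrets."""
    def __init__(self, secrets: dict, allowed_actions: set):
        self._secrets = secrets
        self._allowed = allowed_actions

    def execute(self, action: str, target: str) -> str:
        if action not in self._allowed:
            raise PermissionError(f"action {action!r} denied by policy")
        _token = self._secrets["api_token"]  # attached here, never exported
        return f"{action} {target} (authed)"  # stand-in for a real API call

class Hands:
    """Execution sandbox: runs untrusted tool code, holds zero secrets."""
    def __init__(self, broker: CredentialBroker):
        self._broker = broker

    def run(self, action: str, target: str) -> str:
        # Even if prompt-injected code runs here, there is no credential
        # to steal, only a narrow, policy-checked channel to the broker.
        return self._broker.execute(action, target)

broker = CredentialBroker({"api_token": "secret"}, allowed_actions={"fetch"})
hands = Hands(broker)
result = hands.run("fetch", "report.csv")   # permitted by policy
# hands.run("exfiltrate", "secrets") would raise PermissionError
```

Under this pattern a compromised sandbox can still misuse the allowed actions, but it cannot read the token itself, which is the "credentials out of the blast radius" property discussed below.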

Nvidia’s NemoClaw takes a different approach, stacking multiple security layers around the agent inside the execution environment. This improves runtime visibility, but requiring manual approval of every action raises operational costs.

Addressing the Credential Proximity Gap

Anthropic’s architecture removes credentials from the blast radius entirely, significantly reducing the risk of credential exposure in case of a compromise. On the other hand, NemoClaw monitors every action within the shared sandbox, providing strong runtime visibility but at a higher operational cost.


Both architectures represent a step forward from the monolithic default, but they differ in how close credentials sit to the execution environment. This distinction is crucial for security teams, especially in mitigating the risk of indirect prompt injections that could compromise the agent’s actions.

Conclusion: The Path to Zero Trust Architecture for AI Agents

As organizations increasingly rely on AI agents, the need for a zero trust approach to their security becomes paramount. The audit grid presented in this article outlines key priorities for organizations looking to enhance the security of their AI agents.

By addressing issues such as credential isolation, session recovery, observability, and indirect prompt injection, organizations can strengthen the security posture of their AI agents. The gap between deployment velocity and security readiness must be closed to prevent future breaches in AI agent security.
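One way to operationalize the four dimensions named above is a simple audit checklist. The questions and scoring below are an illustrative encoding of those dimensions, not a published framework.

```python
# Illustrative audit checklist over the four dimensions named in the
# article (hypothetical wording; not the article's own grid).
AUDIT_DIMENSIONS = {
    "credential_isolation": "Are secrets held outside the execution sandbox?",
    "session_recovery": "Can a session resume without re-granting broad access?",
    "observability": "Is every agent action logged and attributable?",
    "indirect_prompt_injection": "Is untrusted content treated as hostile input?",
}

def audit(answers: dict) -> list:
    """Return the dimensions a deployment has not yet addressed."""
    return [dim for dim in AUDIT_DIMENSIONS if not answers.get(dim, False)]

# Example: a deployment with logging in place but nothing else.
gaps = audit({"observability": True})
```

Running the checklist against a deployment that only has logging flags the remaining three dimensions as open gaps.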
