The Lateral Movement Problem: When Every AI Agent Becomes a Pivot Point
In traditional network security, lateral movement is the hard part. An attacker compromises one machine, then spends hours or days finding a path to a higher-privilege system. They hunt for credentials, exploit trust relationships, and pivot between hosts - each step carrying the risk of detection.
In agentic AI, lateral movement is a feature.
Agents are designed to delegate tasks to other agents. They discover peers, share context, pass credentials, and invoke each other’s capabilities. The entire value proposition of multi-agent systems - orchestration, specialization, agent-to-agent collaboration - is also a textbook description of how an attacker would want to move through your infrastructure.
Three incidents in the past week prove this isn’t theoretical.
One Week, Three Wake-Up Calls
The viral “social network for AI agents,” Moltbook, exposed its entire production database - including API keys, private messages, and email addresses - through a misconfigured Supabase backend. Anyone could read and write to every table.
A critical vulnerability in ServiceNow’s Now Assist allowed unauthenticated attackers to impersonate any user - including admins - using only an email address, then invoke AI agents to create backdoor accounts with full privileges.
Microsoft Copilot Studio’s “Connected Agents” feature - enabled by default - allows agents to discover and invoke other agents, creating lateral movement paths that researchers exploited to gain backdoor access across agent boundaries.
Different platforms. Different architectures. Different researchers. Same fundamental problem: when agents can talk to other agents, every compromised agent becomes a pivot point to everything those agents can reach.
What Makes Agent Lateral Movement Different
Network lateral movement and agent lateral movement share the same name but operate on completely different principles.
In a network, an attacker who compromises a web server still needs to find a way to reach the database server. In a multi-agent system, a compromised “email summarizer” agent can simply ask the “database query” agent to run a query - because that’s exactly what it was designed to do.
Agent-to-agent communication doesn’t cross a firewall. It doesn’t trigger an IDS rule. It doesn’t require privilege escalation. The compromised agent already has delegated trust to invoke other agents - because that’s its job. The attack looks exactly like normal operations.
Anatomy of Agent Lateral Movement
The three incidents this week map to three distinct lateral movement patterns that security teams need to understand:
Pattern 1: Open Ecosystem Propagation (Moltbook)
Moltbook marketed itself as the “front page of the agent internet” - a Reddit-style platform where AI agents post, comment, vote, and build reputation. It attracted attention from AI leaders like Andrej Karpathy, who called it “the most incredible sci-fi takeoff-adjacent thing I have seen recently.”
What Wiz found underneath was something else entirely: a misconfigured Supabase backend that left every table open to anyone - readable and writable - exposing API keys, private messages, and email addresses.
This is what the OWASP Agentic Top 10 (2026) calls ASI08: Cascading Failures operating at internet scale. One poisoned post on a platform where millions of agents consume content becomes a mass-propagation vector. The agent doesn’t need to be individually targeted - it just needs to read the feed.
Karpathy himself, after initial excitement, reversed course: “It’s way too much of a Wild West. You are putting your computer and private data at a high risk.”
Pattern 2: Enterprise Agent Hijacking (BodySnatcher)
If Moltbook shows what happens on the open internet, BodySnatcher (CVE-2025-12420) shows what happens inside your enterprise.
ServiceNow’s Now Assist AI agents are designed to execute workflows - resetting passwords, filing tickets, managing records. These agents communicate through ServiceNow’s Virtual Agent API, which uses a provider-channel architecture where external integrations (Slack, Teams, custom bots) send messages that Virtual Agent routes to the appropriate agent.
The vulnerability chain was devastating: an unauthenticated attacker could impersonate any user - including administrators - using nothing but an email address, then invoke Now Assist agents under the victim’s identity to create backdoor accounts with full privileges.
The attacker didn’t hack the AI agent. They didn’t inject a prompt. They didn’t exploit a model vulnerability. They impersonated a user and asked the agent to do its job. The agent worked perfectly - it just worked for the wrong person. This is ASI03: Identity & Privilege Abuse combined with ASI07: Insecure Inter-Agent Communication.
The researcher who discovered it, Aaron Costello of AppOmni Labs, put it plainly: “Attackers could have effectively ‘remote controlled’ an organization’s AI, weaponizing the very tools meant to simplify the enterprise.”
What makes this worse: the default Record Management AI Agent had the same unique ID across all ServiceNow deployments. One attack template works everywhere.
Pattern 3: Feature-as-Attack-Surface (Copilot Connected Agents)
Microsoft’s Copilot Studio introduced “Connected Agents” - a feature that lets agents discover and invoke other agents within an organization’s Copilot ecosystem. It’s designed for exactly the kind of multi-agent orchestration that enterprises want: a customer service agent delegates a billing question to a finance agent, which checks an inventory agent for order status.
Zenity Labs discovered that this feature, enabled by default, creates lateral movement paths between agents that an attacker can exploit to gain access across agent boundaries.
This is the most philosophically interesting of the three cases, because Microsoft considers it a feature, not a bug. And they’re right - in isolation. Agent-to-agent discovery and delegation IS the feature. It’s also the attack surface.
As Jonathan Wall, founder of Runloop, told ZDNET: “If, through that first agent, a malicious agent is able to connect to another agent with a better set of privileges to that resource, then he will have escalated his privileges through lateral movement and potentially gained unauthorized access to sensitive information.”
The Five Pivot Patterns
Across these three incidents, we can identify five distinct pivot patterns that attackers use to move between agents:
The OWASP Map
These pivot patterns map directly to three OWASP Agentic Top 10 (2026) categories that, when combined, describe the lateral movement threat surface:
ASI03 covers the identity problem: agents inheriting credentials, delegation chains that escalate privilege, and the absence of per-agent, per-task identity governance. Every lateral movement starts with an identity failure.
ASI07 covers the communication problem: unsigned messages, unauthenticated channels, no schema validation, and trust-by-default between agents. Every lateral movement relies on insecure inter-agent communication.
ASI08 covers the cascade problem: one compromised agent propagating corruption through delegation chains, shared contexts, and multi-agent workflows. Every lateral movement ends with cascading impact.
The combination of all three is what makes agent lateral movement categorically worse than network lateral movement: the identity system doesn’t exist, the communication channel is trusted, and the blast radius is unbounded.
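To make the ASI07 gap concrete, here is a minimal sketch of what authenticated inter-agent messaging could look like: an HMAC over a canonical message body plus a freshness check, so the recipient rejects unsigned, tampered, or replayed messages instead of trusting by default. The shared key, agent names, and message schema are all hypothetical illustrations, not any platform’s actual API:

```python
import hashlib
import hmac
import json
import time

# Hypothetical: a per-agent-pair secret provisioned out of band.
SHARED_KEY = b"per-agent-pair-secret"

def sign_message(sender: str, recipient: str, payload: dict) -> dict:
    """Attach a timestamp and an HMAC so the recipient can verify origin and integrity."""
    body = {"sender": sender, "recipient": recipient,
            "ts": int(time.time()), "payload": payload}
    canonical = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(SHARED_KEY, canonical, hashlib.sha256).hexdigest()
    return body

def verify_message(msg: dict, max_age_s: int = 30) -> bool:
    """Reject unsigned, tampered, or stale messages - deny by default."""
    sig = msg.pop("sig", None)
    if sig is None:
        return False
    canonical = json.dumps(msg, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, canonical, hashlib.sha256).hexdigest()
    fresh = (time.time() - msg.get("ts", 0)) <= max_age_s
    return hmac.compare_digest(sig, expected) and fresh
```

None of this is exotic cryptography - it is the same integrity discipline APIs have used for years, simply applied to the agent-to-agent channel that most platforms leave unauthenticated.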
Why It’s Getting Worse, Not Better
Three trends are converging to make agent lateral movement the dominant attack pattern of 2026:
1. Agent ecosystems are growing faster than agent security. Gartner projects 40% of enterprise applications will embed task-specific AI agents by year-end. Each new agent is both a potential entry point and a potential pivot target. The attack surface grows quadratically with the number of agents - each new agent can potentially communicate with every existing agent.
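The arithmetic behind that claim: in the worst case every agent can invoke every other agent, so a fleet of n agents has n(n-1) directed pivot paths. A back-of-the-envelope sketch:

```python
def pivot_paths(n_agents: int) -> int:
    # Worst case: a fully connected agent graph, where every agent
    # can invoke every other agent -> n * (n - 1) directed edges.
    return n_agents * (n_agents - 1)

print(pivot_paths(10))   # 90 pivot paths
print(pivot_paths(100))  # 9,900 - a 10x larger fleet means ~100x the paths
```

Real deployments constrain some of those edges, but unless delegation is explicitly allowlisted, the fully connected worst case is the default.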
2. “Vibe coding” ships insecure agent infrastructure at scale. Moltbook’s founder explicitly stated he “didn’t write a single line of code” - he vibe-coded the entire platform. As Wiz noted, this practice “can lead to dangerous security oversights.” AI-generated code doesn’t reason about security posture or access controls. When that code builds agent infrastructure, you get internet-facing databases with no row-level security and API keys in client-side JavaScript.
3. Agent-to-agent protocols are designed for interoperability, not security. Google’s A2A (Agent2Agent) protocol, Anthropic’s MCP (Model Context Protocol, which wires agents to tools and data), and platform-specific inter-agent APIs all prioritize capability and interoperability. Authentication, integrity verification, and trust boundaries are afterthoughts - or optional configurations that most deployments skip.
Google’s Mandiant threat intelligence team predicts: “By 2026, we expect the proliferation of sophisticated AI agents will escalate the shadow AI problem into a critical ‘shadow agent’ challenge. Employees will independently deploy powerful, autonomous agents for work tasks, regardless of corporate approval. This will create invisible, uncontrolled pipelines for sensitive data.”
Every shadow agent is an unmonitored pivot point in your agent graph.
What Defense Looks Like
Defending against agent lateral movement requires rethinking security at the agent-to-agent boundary - the space between agents where trust is assumed but not verified.
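One concrete starting point is making trust at that boundary explicit rather than assumed. The sketch below shows deny-by-default delegation: a caller agent may invoke a capability on a target agent only if that exact (caller, target, capability) combination is allowlisted. All agent and capability names are hypothetical:

```python
# Deny-by-default capability map: which caller agents may invoke
# which capabilities on which target agents. (All names hypothetical.)
ALLOWED_DELEGATIONS = {
    ("email-summarizer", "db-query"): {"read_summary_stats"},
    ("customer-service", "billing"): {"lookup_invoice"},
}

def authorize(caller: str, target: str, capability: str) -> bool:
    """Permit a delegation only if explicitly allowlisted; everything else is denied."""
    return capability in ALLOWED_DELEGATIONS.get((caller, target), set())
```

The point is the shape of the check, not the data structure: a compromised email summarizer that asks the database agent to drop a table fails the allowlist, even though the request arrives over a perfectly legitimate channel.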
The Question Security Teams Should Be Asking
The question isn’t “are our AI agents secure?” - it’s “what can a compromised agent reach?”
Every agent in your environment is a node in a graph. Every delegation relationship is an edge. Every shared credential, every trust assumption, every unmonitored communication channel is a potential pivot path.
Draw your agent graph. Trace what a compromised email agent can reach through delegation chains. Trace what a compromised customer service agent can invoke. Trace what happens when an attacker impersonates an admin through a broken identity-linking mechanism.
If you can’t draw the graph, you can’t defend it.
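That graph exercise is automatable. A minimal sketch, assuming you can enumerate your delegation edges (the agent names below are hypothetical): a breadth-first search from any node gives the blast radius of compromising that agent.

```python
from collections import deque

# Directed delegation edges: each caller maps to the agents it can invoke.
# (Hypothetical graph for illustration.)
AGENT_GRAPH = {
    "email-summarizer": ["db-query"],
    "customer-service": ["billing", "email-summarizer"],
    "billing": ["db-query", "inventory"],
    "db-query": [],
    "inventory": [],
}

def blast_radius(compromised: str) -> set:
    """Every agent reachable through delegation chains from the compromised one."""
    reached, queue = set(), deque([compromised])
    while queue:
        agent = queue.popleft()
        for peer in AGENT_GRAPH.get(agent, []):
            if peer not in reached and peer != compromised:
                reached.add(peer)
                queue.append(peer)
    return reached
```

Even on this toy graph the asymmetry is visible: compromising the email summarizer reaches one agent, while compromising the customer service agent reaches everything downstream of billing. That asymmetry is where to spend your defensive budget.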
Moltbook showed us what happens when 1.5 million agents share an open platform with no identity verification and no access controls. BodySnatcher showed us what happens when a single email address unlocks admin-level agent execution in an enterprise. Connected Agents showed us that the lateral movement surface isn’t just a misconfiguration - it’s the default.
In 2025, we secured the model. In 2026, we need to secure the space between models. Agent lateral movement is the new network lateral movement - and the tools, protocols, and practices to defend it don’t exist yet at the scale the industry needs. The organizations that build agent identity governance, inter-agent authentication, and runtime behavioral monitoring now will be the ones that survive the multi-agent era. The rest will learn about lateral movement the hard way.
Rogue Security builds runtime behavioral security for agentic AI - detecting lateral movement, delegation abuse, and cascading compromise across multi-agent systems before they escalate. Learn more at rogue.security.