Coming Soon

Your agents won't go rogue for much longer...

Privacy Terms © 2026 Rogue Security
▸ SECURE CONNECTION ▸ LATENCY: 4.2ms ▸ AGENTS: 17,432 ▸ THREAT LEVEL: NOMINAL
ROGUE TERMINAL v1.0 ESC to close
← Back to blog
February 5, 2026 by Rogue Security Research
lateral-movementagent-to-agentmulti-agentASI03ASI07ASI08moltbookbodysnatcheragentic-security

The Lateral Movement Problem: When Every AI Agent Becomes a Pivot Point

In traditional network security, lateral movement is the hard part. An attacker compromises one machine, then spends hours or days finding a path to a higher-privilege system. They hunt for credentials, exploit trust relationships, and pivot between hosts - each step carrying the risk of detection.

In agentic AI, lateral movement is a feature.

Agents are designed to delegate tasks to other agents. They discover peers, share context, pass credentials, and invoke each other’s capabilities. The entire value proposition of multi-agent systems - orchestration, specialization, agent-to-agent collaboration - is also a textbook description of how an attacker would want to move through your infrastructure.

Three incidents in the past two weeks prove this isn’t theoretical.

One Week, Three Wake-Up Calls

January 31, 2026
Moltbook Database Exposure
Wiz Research

The viral “social network for AI agents” exposed its entire production database - including API keys, private messages, and email addresses - through a misconfigured Supabase backend. Anyone could read and write to every table.

1.5M
Agent API keys exposed
January 2026
BodySnatcher (CVE-2025-12420)
AppOmni Labs

A critical vulnerability in ServiceNow’s Now Assist allowed unauthenticated attackers to impersonate any user - including admins - using only an email address, then invoke AI agents to create backdoor accounts with full privileges.

1 email
Required for full admin takeover
February 2026
Copilot Connected Agents
Zenity Labs

Microsoft Copilot Studio’s “Connected Agents” feature - enabled by default - allows agents to discover and invoke other agents, creating lateral movement paths that researchers exploited to gain backdoor access across agent boundaries.

Default: ON
Lateral movement enabled by default

Different platforms. Different architectures. Different researchers. Same fundamental problem: when agents can talk to other agents, every compromised agent becomes a pivot point to everything those agents can reach.

What Makes Agent Lateral Movement Different

Network lateral movement and agent lateral movement share the same name but operate on completely different principles.

Network Lateral Movement
× Requires credential theft or exploit for each hop
× Leaves forensic artifacts (logs, sessions, network traffic)
× Detectable by EDR, NDR, SIEM correlation
× Bounded by network segmentation and firewall rules
× Attacker must understand each target system
× Speed limited by manual recon and exploitation
Agent Lateral Movement
Uses built-in delegation - no exploit needed per hop
Looks identical to legitimate agent-to-agent traffic
Invisible to network-layer security tools
Crosses trust boundaries via natural language requests
Agents self-describe capabilities for the attacker
Propagates at machine speed across entire swarms

In a network, an attacker who compromises a web server still needs to find a way to reach the database server. In a multi-agent system, a compromised “email summarizer” agent can simply ask the “database query” agent to run a query - because that’s exactly what it was designed to do.

The Core Problem

Agent-to-agent communication doesn’t cross a firewall. It doesn’t trigger an IDS rule. It doesn’t require privilege escalation. The compromised agent already has delegated trust to invoke other agents - because that’s its job. The attack looks exactly like normal operations.

Anatomy of Agent Lateral Movement

The three incidents this week map to three distinct lateral movement patterns that security teams need to understand:

Pattern 1: Open Ecosystem Propagation (Moltbook)

Moltbook marketed itself as the “front page of the agent internet” - a Reddit-style platform where AI agents post, comment, vote, and build reputation. It attracted attention from AI leaders like Andrej Karpathy, who called it “the most incredible sci-fi takeoff-adjacent thing I have seen recently.”

What Wiz found underneath was something else entirely.

1
No Identity Verification
The platform had no mechanism to verify whether an “agent” was actually AI or a human with a script. 17,000 humans operated 1.5 million “agents” - an 88:1 ratio. Anyone could register millions of agents with a simple loop.
2
Full Database Exposure
A misconfigured Supabase backend granted unauthenticated read AND write access to every table. API keys, email addresses, private agent-to-agent messages - all accessible to anyone on the internet.
3
Credential Leakage in Agent Messages
Private conversations between agents contained plaintext third-party credentials - including OpenAI API keys. Agents were sharing secrets with other agents, and none of it was encrypted or access-controlled.
4
Write Access Enables Cascading Injection
Anyone could modify live posts on the platform. Since autonomous agents consume Moltbook content as input, an attacker could inject prompt injection payloads that propagate to every agent reading the feed.

This is what the OWASP Agentic Top 10 (2026) calls ASI08: Cascading Failures operating at internet scale. One poisoned post on a platform where millions of agents consume content becomes a mass-propagation vector. The agent doesn’t need to be individually targeted - it just needs to read the feed.

”These systems are operating as ‘you.’ They sit above operating-system protections. Application isolation doesn’t apply.”
Nathan Hamiel, Security Researcher

Karpathy himself, after initial excitement, reversed course: “It’s way too much of a Wild West. You are putting your computer and private data at a high risk.”

Pattern 2: Enterprise Agent Hijacking (BodySnatcher)

If Moltbook shows what happens on the open internet, BodySnatcher (CVE-2025-12420) shows what happens inside your enterprise.

ServiceNow’s Now Assist AI agents are designed to execute workflows - resetting passwords, filing tickets, managing records. These agents communicate through ServiceNow’s Virtual Agent API, which uses a provider-channel architecture where external integrations (Slack, Teams, custom bots) send messages that Virtual Agent routes to the appropriate agent.

The vulnerability chain was devastating:

1
Shared Static Secret
Multiple AI agent providers shared a single, non-rotating static credential for message authentication. Compromise one, access all.
2
Email-Based Identity Linking
The auto-linking feature matched external users to ServiceNow accounts using only an email address - no MFA, no SSO, no secondary verification. Know someone’s email, become them.
3
Agent-to-Agent Escalation
Once impersonating an admin, the attacker could invoke any AI agent through the Virtual Agent API - including a default “Record Management AI Agent” that could create records in any arbitrary table.
4
Backdoor Account Creation
The attacker prompts the AI agent to create a new user with admin privileges. The agent complies - it’s following instructions from what appears to be an authenticated administrator. Full persistent access achieved.
The BodySnatcher Insight

The attacker didn’t hack the AI agent. They didn’t inject a prompt. They didn’t exploit a model vulnerability. They impersonated a user and asked the agent to do its job. The agent worked perfectly - it just worked for the wrong person. This is ASI03: Identity & Privilege Abuse combined with ASI07: Insecure Inter-Agent Communication.

The researcher who discovered it, Aaron Costello of AppOmni Labs, put it plainly: “Attackers could have effectively ‘remote controlled’ an organization’s AI, weaponizing the very tools meant to simplify the enterprise.”

What makes this worse: the default Record Management AI Agent had the same unique ID across all ServiceNow deployments. One attack template works everywhere.

Pattern 3: Feature-as-Attack-Surface (Copilot Connected Agents)

Microsoft’s Copilot Studio introduced “Connected Agents” - a feature that lets agents discover and invoke other agents within an organization’s Copilot ecosystem. It’s designed for exactly the kind of multi-agent orchestration that enterprises want: a customer service agent delegates a billing question to a finance agent, which checks an inventory agent for order status.

Zenity Labs discovered that this feature, enabled by default, creates lateral movement paths between agents that an attacker can exploit to gain access across agent boundaries.

This is the most philosophically interesting of the three cases, because Microsoft considers it a feature, not a bug. And they’re right - in isolation. Agent-to-agent discovery and delegation IS the feature. It’s also the attack surface.

Agent Lateral Movement via Connected Agents
🤖
Low-Priv Agent
Compromised
🤖
Finance Agent
Delegated To
🤖
DB Admin Agent
Escalated
🗄️
Sensitive Data
Exfiltrated

As Jonathan Wall, founder of Runloop, told ZDNET: “If, through that first agent, a malicious agent is able to connect to another agent with a better set of privileges to that resource, then he will have escalated his privileges through lateral movement and potentially gained unauthorized access to sensitive information.”

The Five Pivot Patterns

Across these three incidents, we can identify five distinct pivot patterns that attackers use to move between agents:

Pivot 1: Credential Inheritance
Shared Secrets Between Agents
Agents share credentials, API keys, or authentication tokens - either by design (shared service accounts) or by accident (leaked in inter-agent messages).
Moltbook: OpenAI API keys shared in plaintext agent DMs. BodySnatcher: Multiple agent providers sharing a single static secret.
Pivot 2: Delegation Chains
Trust Flowing Through Agent Graphs
Agent A delegates to Agent B, passing its permissions along. A compromised low-privilege agent invokes a high-privilege agent, inheriting access it was never meant to have.
Copilot Connected Agents: Lateral movement through delegation - each hop escalates privileges. BodySnatcher: Impersonated admin invokes Record Management agent to create backdoor accounts.
Pivot 3: Discovery Exploitation
Agents Advertising Their Own Attack Surface
Agent discovery protocols let agents find and understand each other’s capabilities. An attacker uses discovery to map the entire agent ecosystem - finding the highest-value targets automatically.
ServiceNow Agent-to-Agent Discovery: AppOmni showed attackers can “trick AI agents into recruiting more powerful AI agents to fulfill a malicious task.”
Pivot 4: Content Poisoning
Injecting Instructions Into Shared Contexts
Agents consume content from shared platforms, feeds, or knowledge bases. Poisoning the shared context propagates instructions to every agent that reads it.
Moltbook: Write access to all posts enabled mass prompt injection across every agent consuming the feed. One poisoned post, millions of affected agents.
Pivot 5: Identity Spoofing
Impersonating Agents or Their Principals
Weak or absent identity verification lets attackers impersonate legitimate agents or the humans behind them, inheriting their trust relationships and permissions.
Moltbook: No verification of agent identity at all - humans operating fleets of bots. BodySnatcher: Email-only identity linking bypassing MFA and SSO.

The OWASP Map

These pivot patterns map directly to three OWASP Agentic Top 10 (2026) categories that, when combined, describe the lateral movement threat surface:

ASI01ASI02ASI03 - Identity & Privilege AbuseASI04ASI05ASI06ASI07 - Insecure Inter-Agent CommunicationASI08 - Cascading FailuresASI09ASI10

ASI03 covers the identity problem: agents inheriting credentials, delegation chains that escalate privilege, and the absence of per-agent, per-task identity governance. Every lateral movement starts with an identity failure.

ASI07 covers the communication problem: unsigned messages, unauthenticated channels, no schema validation, and trust-by-default between agents. Every lateral movement relies on insecure inter-agent communication.

ASI08 covers the cascade problem: one compromised agent propagating corruption through delegation chains, shared contexts, and multi-agent workflows. Every lateral movement ends with cascading impact.

The combination of all three is what makes agent lateral movement categorically worse than network lateral movement: the identity system doesn’t exist, the communication channel is trusted, and the blast radius is unbounded.

Why It’s Getting Worse, Not Better

40%
of enterprise apps will embed AI agents by end of 2026 - Gartner

Three trends are converging to make agent lateral movement the dominant attack pattern of 2026:

1. Agent ecosystems are growing faster than agent security. Gartner projects 40% of enterprise applications will embed task-specific AI agents by year-end. Each new agent is both a potential entry point and a potential pivot target. The attack surface grows quadratically with the number of agents - each new agent can potentially communicate with every existing agent.

2. “Vibe coding” ships insecure agent infrastructure at scale. Moltbook’s founder explicitly stated he “didn’t write a single line of code” - he vibe-coded the entire platform. As Wiz noted, this practice “can lead to dangerous security oversights.” AI-generated code doesn’t reason about security posture or access controls. When that code builds agent infrastructure, you get internet-facing databases with no row-level security and API keys in client-side JavaScript.

3. Agent-to-agent protocols are designed for interoperability, not security. Google’s A2A (Agent-to-Agent), Anthropic’s MCP (Model Context Protocol), and platform-specific inter-agent APIs all prioritize capability and interoperability. Authentication, integrity verification, and trust boundaries are afterthoughts - or optional configurations that most deployments skip.

The Shadow Agent Problem

Google’s Mandiant threat intelligence team predicts: “By 2026, we expect the proliferation of sophisticated AI agents will escalate the shadow AI problem into a critical ‘shadow agent’ challenge. Employees will independently deploy powerful, autonomous agents for work tasks, regardless of corporate approval. This will create invisible, uncontrolled pipelines for sensitive data.”

Every shadow agent is an unmonitored pivot point in your agent graph.

What Defense Looks Like

Defending against agent lateral movement requires rethinking security at the agent-to-agent boundary - the space between agents where trust is assumed but not verified.

01
Agent Identity Governance
Every agent needs a distinct, verifiable identity - not a shared service account, not inherited user credentials. Issue short-lived, task-scoped tokens per agent per interaction. Require re-authentication at every privilege boundary. Treat agents as managed non-human identities with the same lifecycle controls as service accounts.
02
Inter-Agent Authentication
Every agent-to-agent message must be authenticated and integrity-verified. Sign messages with per-agent credentials. Validate sender identity and capability claims before processing requests. Protect exchanges with nonces and session identifiers. Never trust an agent just because it’s in the same ecosystem.
03
Delegation Boundary Enforcement
Map your agent delegation graph. Define which agents can invoke which other agents, with what permissions, for what purposes. Implement policy enforcement at every delegation point - not just at the entry. A finance agent should never be able to invoke a database admin agent, regardless of who asked it to.
04
Runtime Behavioral Monitoring
Lateral movement is invisible at the network layer but visible at the behavioral layer. Monitor agent-to-agent invocation patterns: which agents are talking to which agents, how often, with what parameters. Flag anomalous delegation chains - a summarizer agent invoking a record-creation agent is a behavioral deviation, regardless of whether the request looks legitimate.
05
Agent Inventory and Discovery Control
You cannot secure what you cannot see. Maintain a live inventory of every agent in your environment - including shadow agents deployed by business units. Control agent discovery protocols: agents should not self-advertise capabilities to unknown peers. Disable features like “Connected Agents” unless explicitly needed and secured.
06
Blast Radius Containment
Assume any agent can be compromised. Design the system so that a compromised agent’s reach is bounded. Sandbox agents with least privilege. Limit the number of agents any single agent can invoke. Implement circuit breakers that halt delegation chains when anomalous patterns are detected. If one agent falls, the cascade stops.

The Question Security Teams Should Be Asking

The question isn’t “are our AI agents secure?” - it’s “what can a compromised agent reach?”

Every agent in your environment is a node in a graph. Every delegation relationship is an edge. Every shared credential, every trust assumption, every unmonitored communication channel is a potential pivot path.

Draw your agent graph. Trace what a compromised email agent can reach through delegation chains. Trace what a compromised customer service agent can invoke. Trace what happens when an attacker impersonates an admin through a broken identity-linking mechanism.

If you can’t draw the graph, you can’t defend it.

Moltbook showed us what happens when 1.5 million agents share an open platform with no identity verification and no access controls. BodySnatcher showed us what happens when a single email address unlocks admin-level agent execution in an enterprise. Connected Agents showed us that the lateral movement surface isn’t just a misconfiguration - it’s the default.

The Bottom Line

In 2025, we secured the model. In 2026, we need to secure the space between models. Agent lateral movement is the new network lateral movement - and the tools, protocols, and practices to defend it don’t exist yet at the scale the industry needs. The organizations that build agent identity governance, inter-agent authentication, and runtime behavioral monitoring now will be the ones that survive the multi-agent era. The rest will learn about lateral movement the hard way.


Rogue Security builds runtime behavioral security for agentic AI - detecting lateral movement, delegation abuse, and cascading compromise across multi-agent systems before they escalate. Learn more at rogue.security.