Coming Soon

Your agents won't go rogue for much longer...

Privacy Terms © 2026 Rogue Security
▸ SECURE CONNECTION ▸ LATENCY: 4.2ms ▸ AGENTS: 17,432 ▸ THREAT LEVEL: NOMINAL
ROGUE TERMINAL v1.0 ESC to close
← Back to blog
February 18, 2026 by Rogue Security Research
echoleakzero-clickcopilot-securityprompt-injectionagentic-securitycve-2025-32711

EchoLeak: When AI Agents Become Double Agents

An employee asks their AI assistant to summarize yesterday’s emails. Standard request. Happens thousands of times per hour across enterprises worldwide.

What they don’t know: one of those emails contained hidden instructions. No attachment to click. No link to hover over. Just invisible text that their AI read, understood, and obeyed.

Within milliseconds, the agent silently searched their SharePoint, found confidential documents, and exfiltrated the contents through a tracking pixel - all while presenting a helpful email summary.

The employee saw nothing unusual. The security team saw nothing unusual. The data was already gone.

This is EchoLeak. The first documented zero-click attack against an enterprise AI agent.

CVSS 9.3 CriticalCVE-2025-32711
Zero-click data exfiltration via indirect prompt injection in Microsoft 365 Copilot RAG pipeline

The Numbers

0
clicks required
9.3
CVSS severity score
10K+
businesses potentially affected

How Your AI Becomes a Double Agent

EchoLeak exploits a fundamental architectural weakness: AI agents that retrieve and reason over untrusted data cannot reliably distinguish between legitimate context and adversarial instructions.

Here’s the attack chain:

EchoLeak Kill Chain

1Delivery: Attacker sends an email containing hidden instructions embedded in markdown formatting - invisible to the human recipient, perfectly readable by the AI
2Trigger: Victim asks Copilot any legitimate question. The RAG engine retrieves the malicious email as “relevant context”
3Execution: Hidden instructions command the agent to search for sensitive files using the victim’s access privileges
4Exfiltration: Data is smuggled out via image URLs to trusted Microsoft domains (Teams, SharePoint) - bypassing Content Security Policy entirely

The attacker never needed credentials. Never exploited a software bug. Never sent malware. They simply sent an email that the AI read and followed.

Why Traditional Security Missed It

EchoLeak bypassed multiple layers of defense by design:

Prompt Injection Classifiers - Bypassed
The malicious instructions were written to sound like they were addressing a human, not an AI. No mention of “Copilot” or “AI” anywhere. The ML-based classifiers saw natural-sounding email text and passed it through.

Link Redaction Filters - Bypassed
Researchers used reference-style markdown links instead of inline URLs. The filter looked for text patterns. It didn’t catch [text][ref] with the URL defined elsewhere.

Content Security Policy - Bypassed
Exfiltration was routed through legitimate Microsoft Teams and SharePoint URLs - domains explicitly trusted by the CSP. From the network’s perspective, it looked like normal internal traffic.

Egress Monitoring - Blind
Data was encoded in URL parameters of what appeared to be image requests to trusted Microsoft infrastructure. No unusual destinations. No large data transfers. Just a broken image icon the user probably ignored.

The core problem: Every defense was checking whether the input was malicious. None were checking whether the AI’s resulting actions were authorized.

What Could Be Stolen

The scope of vulnerable data was everything the AI agent could access with the victim’s credentials:

  • OneDrive documents and file contents
  • SharePoint sites and libraries
  • Teams chat histories and messages
  • Outlook emails and calendar entries
  • Organizational context and user directories
  • Any indexed enterprise data within Copilot’s retrieval scope

The attacker controlled what the agent searched for. The victim’s permissions determined what it could find.

The Confused Deputy at Scale

EchoLeak is a textbook “confused deputy” attack - but operating at a scale traditional security never anticipated.

The AI agent has:

  • The victim’s identity - full OAuth token, all access permissions
  • No concept of intent - it can’t distinguish “search for this because the user asked” from “search for this because a malicious email told me to”
  • Broad retrieval scope - designed to helpfully pull in relevant context from across the organization
Traditional Insider Threat

Requires a compromised or malicious employee. Takes time. Leaves behavioral signals. Limited by human speed.

AI Agent as Insider Threat

Requires only an email. Instant execution. Looks like normal AI assistance. Operates at machine speed across all accessible data.

The agent becomes an unwitting insider - with better access than most employees and no awareness that it’s being weaponized.

The Disclosure Timeline

June 2025
Security researchers disclose CVE-2025-32711 to Microsoft
June 2025 Patch Tuesday
Microsoft deploys fix, confirms no evidence of exploitation in the wild
Present
Underlying architectural pattern remains exploitable across other AI agents

Microsoft patched this specific vulnerability. But the attack pattern - using untrusted input to hijack an AI agent’s privileged actions - isn’t unique to Copilot.

Why Patching Isn’t Enough

EchoLeak was fixed. But the conditions that made it possible remain:

RAG systems retrieve untrusted data by design. That’s their value proposition - pulling in relevant context from emails, documents, and messages. The attack surface is the feature.

LLMs can’t reliably separate instructions from data. Every document, every email, every piece of retrieved context is a potential injection vector. This is a fundamental limitation of transformer architectures, not a bug to be fixed.

Agent permissions mirror user permissions. If you can read a file, your AI can read it too - regardless of whether the current task warrants that access. There’s no contextual authorization layer asking “should this agent access this data for this specific request?”

The uncomfortable truth: Every AI agent with RAG capabilities and broad data access is a potential EchoLeak waiting to happen. The specific bypass techniques will differ. The architectural vulnerability is the same.

What This Means for Enterprise AI

EchoLeak marks a turning point. It demonstrated that:

  1. Zero-click AI attacks are real. No phishing links. No malware downloads. No social engineering the human. Just poison the data the AI reads.

  2. Traditional security tools are blind. Perimeter security, endpoint protection, email filtering - none of them inspect what the AI does after it ingests content.

  3. The blast radius of AI vulnerabilities is organizational. One compromised agent interaction can exfiltrate data from across the entire enterprise - wherever the user has access.

  4. Detection is nearly impossible after the fact. From the logs, it looks like the AI did its job. Searched some files. Rendered some images. Presented a summary. Nothing anomalous.

The Security Model AI Agents Need

Defending against EchoLeak and its variants requires rethinking how we secure AI agents:

Runtime behavioral monitoring: Don’t just validate inputs - monitor what the agent actually does. Flag when retrieval patterns, data access, or output generation deviate from expected behavior for the task.

Contextual authorization: An agent helping summarize emails shouldn’t need access to SharePoint financial documents. Implement least-privilege at the task level, not just the user level.

Output inspection: Before the agent renders images, generates links, or produces responses - validate that the content matches the expected output for the requested task. Catch exfiltration attempts at the output layer.

Retrieval boundaries: Separate trusted (user-provided) context from untrusted (externally-sourced) context. Apply different trust levels to instructions found in each.

Continuous threat modeling: Assume every new AI capability creates new attack surface. Red team your agents before attackers do.

The Agentic Security Gap

We’re deploying AI agents with unprecedented access to enterprise data. We’re securing them with tools designed for a world where code executed instructions and humans made decisions.

EchoLeak is a preview of what happens when those assumptions break down. The agent followed instructions exactly as designed. The instructions just came from an attacker.

Your AI agents are the new insiders. They have broad access. They act autonomously. They can be manipulated through data alone.

The question isn’t whether your agents could be turned into double agents. It’s whether you’d know if they already had been.


EchoLeak (CVE-2025-32711) was responsibly disclosed by security researchers at Aim Security and patched by Microsoft in June 2025. This analysis is based on published research and is intended for educational purposes to help organizations understand the evolving threat landscape facing enterprise AI deployments.