Ambient Attack: When AI Assistants Process Content You Never Opened
You didn’t click. You didn’t open. You just browsed to the folder.
And your AI assistant just exfiltrated your data.
Microsoft’s March 2026 Patch Tuesday included an unusual disclosure: CVE-2026-26144, a high-severity information disclosure flaw that combines cross-site scripting with indirect prompt injection to weaponize Microsoft Copilot for zero-click data exfiltration.
The attack vector? The preview pane.
The Vulnerability
An improper neutralization of input vulnerability in Microsoft Excel allows unauthorized attackers to cause Copilot Agent mode to exfiltrate data via unintended network egress, enabling a zero-click information disclosure attack.
The attack works because Excel fails to properly sanitize malicious content embedded in spreadsheet files. Normally, when a threat actor sends an Excel file containing a malicious link or hidden instruction, the program should neutralize that input before processing it.
But here’s the twist: the malicious content executes even if the victim never opens the file - viewing it in the preview pane is enough.
The Attack Chain
arrives via email
to folder
renders content
hidden prompt
exfiltration
The attack combines two vulnerability classes:
- Cross-Site Scripting (XSS): Malicious input embedded in the Excel file executes when rendered in the preview pane
- Indirect Prompt Injection: The XSS payload contains instructions that Copilot interprets as legitimate commands
When Copilot is active, the hidden instruction tells the AI assistant to send sensitive data - financial records, intellectual property, operational data - to an attacker-controlled server. No clicks. No confirmations. No user awareness.
Welcome to Ambient Attacks
This vulnerability represents a broader pattern we’re calling ambient context attacks: exploits that target the passive data processing AI assistants perform in the background.
[!] Preview Pane
[*] Email Previews
[*] Notifications
[*] Calendar Events
[*] Search Results
[*] Browser Tabs
[*] Chat Previews
[*] File Thumbnails
Modern AI assistants don’t wait for explicit commands. They continuously process environmental context - email subjects, calendar titles, document previews, browser tabs - to provide proactive assistance.
This is a feature. It’s also an attack surface.
The Traditional Model Is Broken
- [+] User must open file
- [+] User must click link
- [+] Macros require enable
- [+] Attack is user-initiated
- [+] User can spot red flags
- [!] Preview pane is enough
- [!] No clicks required
- [!] AI processes silently
- [!] Attack is automatic
- [!] No visible indicators
For decades, security awareness training taught users: “Don’t open suspicious attachments.” This advice assumed a single, discrete moment when a user decides to engage with potentially malicious content.
AI assistants eliminate that moment.
When Copilot, Gemini, or Claude continuously analyze ambient context to “help,” they process content the user never explicitly chose to view. The preview pane becomes an execution surface. Email notifications become injection vectors. Calendar event titles become command interfaces.
Not Just Excel: The Preview Pane Epidemic
CVE-2026-26144 wasn’t alone in March’s Patch Tuesday. Microsoft also fixed two additional critical flaws exploitable via the preview pane:
A type confusion vulnerability allows remote code execution when viewing malicious files in the preview pane. The flaw occurs when Microsoft Office accesses a resource using an incompatible data type, causing incorrect memory handling.
An untrusted pointer dereference in Microsoft Office allows remote attackers to execute code locally. The issue occurs when Office improperly handles memory pointers during preview rendering.
These preview pane vulnerabilities have become increasingly common over the past year. It’s only a matter of time before they appear in active exploits at scale.
OWASP Agentic Top 10 Mapping
This attack pattern maps directly to two critical risks in the OWASP Top 10 for Agentic Applications:
The attacker embeds malicious instructions in external data (the Excel file) that the AI assistant processes. The injection doesn’t require direct interaction - the agent consumes the payload through its ambient context processing.
Copilot Agent has network egress capabilities - the ability to send data to external servers. This permission, combined with its access to sensitive document content, creates the exfiltration channel the attack exploits.
The attack succeeds because:
- The AI assistant has access to sensitive context (document content)
- The AI assistant has dangerous capabilities (network egress)
- No security boundary exists between ambient context and explicit commands
- No human-in-the-loop checkpoint validates the exfiltration request
Corporate Data at Risk
”Information disclosure vulnerabilities are especially dangerous in corporate environments where Excel files often contain financial data, intellectual property, or operational records. If exploited, attackers could silently extract confidential information from internal systems without triggering obvious alerts.”
- Alex Vovk, CEO, Action1
Consider the attack from an adversary’s perspective:
- Craft an Excel file with hidden prompt injection payload
- Send to target via email (or compromise a SharePoint folder)
- Wait for someone to browse to the folder
- Copilot processes the preview and exfiltrates data
- No malware signatures. No suspicious process behavior. Just a normal AI assistant “helping.”
Mitigations
The Broader Lesson
CVE-2026-26144 is a preview of the ambient attack era. As AI assistants become more integrated into productivity tools - processing emails, documents, calendars, and messages to provide proactive assistance - the attack surface expands to include every piece of content they might “notice.”
The security industry spent years teaching users not to click suspicious links. We now face a harder challenge: protecting users from attacks that require no action at all.
Traditional perimeter security assumes a boundary between trusted and untrusted content. AI assistants blur that boundary by design - their value comes from connecting context across applications and data sources.
Protecting these systems requires a new approach: runtime security that validates AI behavior at execution time, regardless of input source. When an AI assistant attempts to exfiltrate data, send a message, or execute a command, security controls must evaluate whether that action aligns with user intent - not just whether the input looked safe.
The preview pane attack worked because nothing checked whether the user actually wanted Copilot to send their financial data to an external server. The input sanitization failed, but even if it had succeeded, the fundamental problem would remain: AI assistants acting on ambient context without validation.
Rogue Security provides runtime security for AI agents - detecting and blocking malicious behavior before it executes, regardless of how the attack enters the system.
Learn more at rogue.security