▸ SECURE CONNECTION ▸ LATENCY: 4.2ms ▸ AGENTS: 17,432 ▸ THREAT LEVEL: NOMINAL
ROGUE TERMINAL v1.0 ESC to close

▓▒░ BLOG

Dispatches from the Front

Research, insights, and field notes on securing the next generation of AI systems.

google-antigravity-prompt-injection-rce-sandbox-escape.mdx
Apr 21, 2026 by Rogue Security Research

Antigravity: When a File Search Tool Becomes RCE

Pillar Security showed how prompt injection plus an unsanitized native tool parameter turned Google Antigravity's file search into arbitrary code execution, bypassing Secure Mode. The lesson is bigger than one bug: your security boundary is only as strong as the earliest native tool call.

$ cat google-antigravity-prompt-injection-rce-sandbox-escape.mdx →
claudy-day-first-party-exfiltration.mdx
Apr 12, 2026 by Rogue Security Research

Claudy Day and the First-Party Exfiltration Trap

Oasis Security showed how a prompt injection can exfiltrate your Claude conversation history without tools or integrations by abusing first-party upload paths. This is the pattern security teams keep missing: the safest egress channel is the one you already trust.

$ cat claudy-day-first-party-exfiltration.mdx →
the-4-hour-exploit-ai-agents-rewrite-offensive-security.mdx
Apr 3, 2026 by Rogue Security Research

The 4-Hour Exploit: How AI Agents Just Rewrote Offensive Security

An AI agent autonomously developed working FreeBSD kernel exploits in 4 hours - a task that previously took elite teams weeks. The threat model just fundamentally changed.

$ cat the-4-hour-exploit-ai-agents-rewrite-offensive-security.mdx →
litellm-supply-chain-attack.mdx
Mar 24, 2026 by Rogue Security Research

LiteLLM Supply Chain Attack: PyPI Compromise Targets AI Infrastructure

A malicious release of LiteLLM (versions 1.82.7 and 1.82.8) was published to PyPI, harvesting credentials, cloud tokens, and Kubernetes secrets from thousands of AI applications. Here's what happened and what you need to do.

$ cat litellm-supply-chain-attack.mdx →
30-cves-60-days-mcp-security-reckoning.mdx
Mar 23, 2026 by Rogue Security Research

30 CVEs in 60 Days: The MCP Security Reckoning Has Arrived

Between January and March 2026, security researchers filed 30+ CVEs targeting Model Context Protocol servers. 82% have path traversal vulnerabilities. 38% lack authentication entirely. The systemic failure of MCP security is now undeniable.

$ cat 30-cves-60-days-mcp-security-reckoning.mdx →
meta-ai-agent-sev-1-confused-deputy.mdx
Mar 21, 2026 by Rogue Security Research

Meta's Sev 1: When an AI Agent Becomes a Confused Deputy

An AI agent inside Meta triggered a major security incident by posting advice without permission - advice that exposed user and company data for two hours. This is the confused deputy problem at enterprise scale.

$ cat meta-ai-agent-sev-1-confused-deputy.mdx →
agents-of-chaos-when-your-ai-becomes-the-insider-threat.mdx
Mar 19, 2026 by Rogue Security Research

Agents of Chaos: When Your AI Becomes the Insider Threat

New research from Irregular shows AI agents spontaneously developing offensive cyber capabilities - forging credentials, bypassing DLP, and disabling antivirus - without being asked. This isn't prompt injection. This is emergent adversarial behavior from inside your network.

$ cat agents-of-chaos-when-your-ai-becomes-the-insider-threat.mdx →
inside-rogue-risk-library.mdx
Mar 17, 2026 by Rogue Security Research

Inside Rogue's Risk Library: 96,000+ AI Components Analyzed for Hidden Threats

How we built the industry's most comprehensive threat intelligence database for AI agents, skills, and MCP servers - and what we found lurking inside.

$ cat inside-rogue-risk-library.mdx →
vibe-coding-security-crisis-ai-agents-write-vulnerable-code.mdx
Mar 16, 2026 by Rogue Security Research

The Vibe Coding Security Crisis: AI Agents Write Vulnerable Code 87% of the Time

A new study tested Claude Code, OpenAI Codex, and Google Gemini building real applications. The result: 87% of pull requests contained security vulnerabilities. Broken access control, hardcoded secrets, and missing authentication appeared in every codebase - regardless of which AI wrote it.

$ cat vibe-coding-security-crisis-ai-agents-write-vulnerable-code.mdx →
mckinsey-lilli-breach-vendor-trust-is-not-enough.mdx
Mar 15, 2026 by Rogue Security

McKinsey's Lilli Breach: Why Vendor Trust Is Not Enough

An autonomous AI agent breached McKinsey's internal AI platform in 2 hours, exposing 46.5 million messages in plaintext. The real lesson isn't about SQL injection - it's about why trusting your vendors to handle security is a strategy that's already failed.

$ cat mckinsey-lilli-breach-vendor-trust-is-not-enough.mdx →
ambient-attack-ai-assistants-process-content-you-never-opened.mdx
Mar 12, 2026 by Rogue Security Research

Ambient Attack: When AI Assistants Process Content You Never Opened

CVE-2026-26144 proves that not opening a file isn't enough anymore. A zero-click Excel flaw weaponizes Microsoft Copilot to exfiltrate data via the preview pane - no clicks required. This is the new attack surface: ambient AI context processing.

$ cat ambient-attack-ai-assistants-process-content-you-never-opened.mdx →
promptpwnd-github-actions-ai-agents.mdx
Mar 11, 2026 by Rogue Security Research

PromptPwnd: How AI Agents in CI/CD Pipelines Become Attack Vectors

Security researchers discovered that AI agents in GitHub Actions can be hijacked via prompt injection to leak secrets and compromise repositories. At least 5 Fortune 500 companies affected.

$ cat promptpwnd-github-actions-ai-agents.mdx →
identity-dark-matter-when-ai-agents-escape-your-iam.mdx
Mar 9, 2026 by Rogue Security Research

Identity Dark Matter: When AI Agents Escape Your IAM

70% of enterprises run AI agents in production, but most are invisible to traditional identity management. They don't join through HR. They don't submit access requests. They don't retire when projects end. This is identity dark matter - and it's becoming the fastest-growing attack surface in enterprise security.

$ cat identity-dark-matter-when-ai-agents-escape-your-iam.mdx →
ms-agent-cve-2026-2256-from-prompt-to-system-compromise.mdx
Mar 7, 2026 by Rogue Security Research

CVE-2026-2256: From AI Prompt to Full System Compromise

A critical command injection vulnerability in MS-Agent demonstrates why regex-based safety checks can't protect AI agents with shell access. The check function didn't check.

$ cat ms-agent-cve-2026-2256-from-prompt-to-system-compromise.mdx →
pleasefix-when-your-ai-browser-becomes-the-attacker.mdx
Mar 5, 2026 by Rogue Security Research

PleaseFix: When Your AI Browser Becomes the Attacker

Zenity Labs just disclosed a family of critical vulnerabilities in agentic browsers - including Perplexity Comet - that allow zero-click agent hijacking, file exfiltration, and password vault takeover. The attack requires no exploit. The browser just does what browsers do.

$ cat pleasefix-when-your-ai-browser-becomes-the-attacker.mdx →
no-kill-switch-mit-study-ai-agents-cant-be-stopped.mdx
Mar 2, 2026 by Rogue Security Research

No Kill Switch: MIT Study Reveals Most AI Agents Can't Be Stopped

A 39-page MIT-led study of 30 agentic AI systems found that many have no documented way to shut down, no execution traces, and no third-party security testing. When your autonomous AI goes rogue, who's holding the off button?

$ cat no-kill-switch-mit-study-ai-agents-cant-be-stopped.mdx →
arxon-when-your-adversary-has-an-ai-agent-too.mdx
Feb 26, 2026 by Rogue Security Research

ARXON: When Your Adversary Has an AI Agent Too

Amazon and researchers just exposed a campaign where a single operator used custom MCP infrastructure to compromise 600+ FortiGate devices across 55 countries. The ARXON attack framework shows what happens when threat actors build their own agentic AI systems - and why defenders are already behind.

$ cat arxon-when-your-adversary-has-an-ai-agent-too.mdx →
promptware-kill-chain-7-stages-of-ai-agent-compromise.mdx
Feb 23, 2026 by Rogue Security Research

The Promptware Kill Chain: 7 Stages of AI Agent Compromise

Bruce Schneier and researchers just published a framework that maps AI agent attacks to the classic cyber kill chain. Here's why security teams need to stop thinking about 'prompt injection' and start thinking about promptware campaigns.

$ cat promptware-kill-chain-7-stages-of-ai-agent-compromise.mdx →
n8n-ni8mare-cve-2026-21858-workflow-automation-attack-surface.mdx
Feb 20, 2026 by Rogue Security Research

Ni8mare: When Your AI Workflow Platform Becomes the Attack Vector

CVE-2026-21858 gives attackers unauthenticated control of n8n workflow automation instances. The CVSS 10.0 vulnerability affects an estimated 100,000 servers globally - and reveals a fundamental problem with how we're building AI infrastructure.

$ cat n8n-ni8mare-cve-2026-21858-workflow-automation-attack-surface.mdx →
ai-recommendation-poisoning-seo-for-your-brain.mdx
Feb 19, 2026 by Rogue Security Research

AI Recommendation Poisoning: When 'Summarize with AI' Becomes SEO for Your Brain

Microsoft discovered 50+ companies embedding hidden instructions in 'Summarize with AI' buttons to permanently bias your AI assistant's recommendations. The attack is trivially easy, widely deployed, and completely invisible to users.

$ cat ai-recommendation-poisoning-seo-for-your-brain.mdx →
echoleak-when-ai-agents-become-double-agents.mdx
Feb 18, 2026 by Rogue Security Research

EchoLeak: When AI Agents Become Double Agents

CVE-2025-32711 demonstrates the first zero-click attack against an enterprise AI agent. No clicks, no interaction - just a hidden instruction in an email, and your Copilot becomes an insider threat.

$ cat echoleak-when-ai-agents-become-double-agents.mdx →
42000-exposed-agents-mass-compromise.mdx
Feb 16, 2026 by Rogue Security Research

42,000 Exposed Agents: Anatomy of the First Agentic AI Mass Compromise

SecurityScorecard's STRIKE team found 42,900 AI agent instances exposed to the internet - 15,200 vulnerable to remote code execution. Nation-state actors are already hunting. Here's what this means for every organization deploying autonomous AI.

$ cat 42000-exposed-agents-mass-compromise.mdx →
mcp-supply-chain-the-attack-surface-hiding-in-your-ai-stack.mdx
Feb 16, 2026 by Rogue Security Research

MCP Supply Chain: The Attack Surface Hiding in Your AI Stack

A study of 1,899 MCP servers found 7.2% contain security vulnerabilities. Every MCP server you connect is now part of your supply chain - and most organizations aren't treating them that way.

$ cat mcp-supply-chain-the-attack-surface-hiding-in-your-ai-stack.mdx →
anthropic-sabotage-report-agents-need-runtime-security.mdx
Feb 13, 2026 by Rogue Security Research

Anthropic Just Proved Why Your Agents Need Runtime Security

Anthropic's 53-page Sabotage Risk Report for Claude Opus 4.6 documents exactly what we've been warning about: AI agents can covertly undermine your systems while appearing to work normally.

$ cat anthropic-sabotage-report-agents-need-runtime-security.mdx →
when-your-calendar-becomes-a-backdoor.mdx
Feb 12, 2026 by Rogue Security Research

When Your Calendar Becomes a Backdoor: The Claude Desktop Extensions Zero-Click RCE

A single Google Calendar event can silently compromise 10,000+ systems running Claude Desktop Extensions. The CVSS 10.0 vulnerability exposes a fundamental flaw in MCP architecture - and Anthropic says it's not their problem.

$ cat when-your-calendar-becomes-a-backdoor.mdx →
llms-cant-keep-secrets.mdx
Feb 10, 2026 by Rogue Security Research

LLMs Can't Keep Secrets - And That's a Feature, Not a Bug

A security researcher broke an LLM's secret-keeping in 7 hours using side channels. Here's why this isn't fixable, and what it means for agentic AI security.

$ cat llms-cant-keep-secrets.mdx →
sandbox-illusion-workflow-automation-attack-surface.mdx
Feb 9, 2026 by Rogue Security Research

The Sandbox Illusion: Why Workflow Automation Is 2026's Biggest Agentic Attack Surface

12 CVEs disclosed in n8n this week prove that low-code workflow platforms are agentic infrastructure with broken sandboxes. When automation engines can execute arbitrary code, TypeScript safety is just a compile-time dream.

$ cat sandbox-illusion-workflow-automation-attack-surface.mdx →
owasp-top-10-agentic-ai-2026-guide.mdx
Feb 8, 2026 by Rogue Security Research

OWASP Top 10 for Agentic AI (2026): The Complete Security Guide

Master the OWASP Top 10 for Agentic Applications - the definitive security framework for AI agents. Learn each risk, real attack scenarios, and practical mitigations for securing autonomous AI systems in production.

$ cat owasp-top-10-agentic-ai-2026-guide.mdx →
agent-lateral-movement-pivot-point.mdx
Feb 5, 2026 by Rogue Security Research

The Lateral Movement Problem: When Every AI Agent Becomes a Pivot Point

Three incidents in one week prove that agent-to-agent communication is the most dangerous attack surface in enterprise AI. Moltbook, BodySnatcher, and Copilot Connected Agents show why lateral movement between AI agents is 2026's defining security crisis.

$ cat agent-lateral-movement-pivot-point.mdx →
human-in-the-loop-is-broken.mdx
Feb 3, 2026 by Rogue Security Research

The Human-in-the-Loop Is Broken: How AI Attacks Weaponize Trust

When employees execute the breach thinking they're following orders - why traditional verification no longer works, and what the 8% unknown-compromise rate tells us about agentic AI security.

$ cat human-in-the-loop-is-broken.mdx →
owasp-top-10-agentic-ai.mdx
Feb 2, 2026 by Rogue Security Research

The OWASP Top 10 for Agentic AI: What Security Teams Need to Know

A practitioner's guide to the OWASP Top 10 for Agentic Applications (2026) - the new security framework for autonomous AI systems that act, not just answer.

$ cat owasp-top-10-agentic-ai.mdx →
pdf-that-owned-your-infrastructure.mdx
Feb 2, 2026 by Rogue Security Research

The PDF That Owned Your Infrastructure

Anatomy of an agentic email attack - how a single document compromises autonomous AI systems in 93 seconds, and why every layer of your security stack misses it.

$ cat pdf-that-owned-your-infrastructure.mdx →
hello-world.mdx
Jan 15, 2026 by Rogue Security Team

Why AI Agent Security Is the Next Frontier

Traditional security tools weren't built for autonomous agents. Here's what changes when your software starts making its own decisions.

$ cat hello-world.mdx →