February 16, 2026 by Rogue Security Research
Tags: agentic-security · exposure · RCE · ASI02 · ASI04 · ASI05 · nation-state · shadow-ai · infrastructure

42,000 Exposed Agents: Anatomy of the First Agentic AI Mass Compromise

On Monday, February 9th, SecurityScorecard’s STRIKE Threat Intelligence team published findings that should alarm every organization deploying autonomous AI systems: 42,900 agentic AI instances are directly exposed to the internet across 82 countries - and 15,200 of them are vulnerable to remote code execution.

These aren’t chatbots. They’re autonomous agents with standing access to shell commands, file systems, API credentials, messaging platforms, and cloud infrastructure. When one gets compromised, the attacker doesn’t just steal data - they inherit everything the agent could do.

Nation-state actors have already noticed. STRIKE found exposed instance IPs correlating with infrastructure attributed to Kimsuky, APT28, Salt Typhoon, Sandworm, and APT41.

This is the first mass compromise event in the agentic AI era.

• 42,900 instances exposed
• 15,200 vulnerable to RCE
• 82 countries affected
• 78% running unpatched versions

Two Weeks That Changed Everything

The exposure crisis didn’t emerge slowly. It exploded over a two-week period as a viral open-source AI agent went from obscurity to 135,000 GitHub stars - faster than security practices could adapt.

• Jan 27-29 - ClawHavoc: 341 malicious skills distributed via the public marketplace - 12% of all packages. Keyloggers, credential stealers, backdoors disguised as productivity tools.
• Jan 30 - Silent patch: One-click RCE vulnerability (CVE-2026-25253) patched before disclosure. Most users don't update.
• Jan 31 - 21K exposed: Censys identifies 21,639 instances on the public internet. API keys, OAuth tokens, plaintext credentials leaking from misconfigured deployments.
• Jan 31 - Moltbook breach: Agent social network exposes 35,000 emails and 1.5M API tokens. 770,000 active agents affected.
• Feb 3 - Full disclosure: Three CVEs published. Attack chain completes in milliseconds after visiting a malicious webpage. CVSS 8.8.
• Feb 9 - STRIKE report: SecurityScorecard reveals 42,900 exposed instances and nation-state infrastructure correlation. The full scope becomes clear.

The Dangerous Default

The root cause is deceptively simple: insecure default configurations.

By default, agentic AI frameworks bind to 0.0.0.0:18789 - listening on all network interfaces, including the public internet. The secure configuration would be 127.0.0.1:18789, restricting access to localhost only.

Most users lack the technical knowledge to change this setting after installation. They follow the quickstart guide, see the agent working, and move on. The exposure is invisible until an attacker finds the IP address.
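The difference between the two bind addresses can be checked mechanically. A minimal sketch in Python, assuming the default port 18789 from the report and (host, port) pairs parsed from `ss -ltn` or `netstat` output:

```python
AGENT_PORT = 18789  # default port cited in the STRIKE findings

def is_loopback_only(bind_host: str) -> bool:
    """True if the bind address restricts the service to localhost.

    '0.0.0.0' (all IPv4 interfaces), '::' (all IPv6 interfaces), and an
    empty host all expose the port on every attached network.
    """
    return bind_host not in ("0.0.0.0", "::", "")

def audit_bindings(listeners):
    """Given (host, port) pairs parsed from listener output, return the
    entries that expose the agent control port beyond localhost."""
    return [(h, p) for h, p in listeners
            if p == AGENT_PORT and not is_loopback_only(h)]

# Example: a safe deployment next to the dangerous default
listeners = [("127.0.0.1", 18789), ("0.0.0.0", 18789)]
print(audit_bindings(listeners))  # → [('0.0.0.0', 18789)]
```

Anything this flags is reachable from every network the host touches, including the public internet if the machine has a routable address.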

“The result: Over 40,000 agentic AI instances exposed to the internet. The danger is not what the agent says, but what it is allowed to do once an attacker gains access.” - SecurityScorecard STRIKE Team

What Attackers Inherit

Unlike traditional application compromises, a compromised AI agent gives attackers something far more valuable than static data: standing permissions to act.

What Agents Typically Hold
• LLM API keys (OpenAI, Anthropic, etc.)
• Messaging tokens (Slack, Discord, WhatsApp, Telegram)
• Webhook secrets and integration credentials
• SSH keys and system access credentials
• Cloud provider access (AWS, GCP, Azure)
• Browser session data and cookies

What Attackers Can Do
• Execute arbitrary commands on the host system
• Read, write, and exfiltrate any accessible files
• Impersonate users on connected platforms
• Control authenticated browser sessions
• Maintain persistence via built-in cron jobs
• Pivot to other systems using cached credentials
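The credential surface in the first list can be approximated before an attacker does it for you. A sketch that classifies what an agent process could read; the path patterns are illustrative assumptions, not taken from any specific framework:

```python
import fnmatch

# Illustrative patterns for the credential classes listed above; the
# concrete paths are assumptions, not from any specific framework.
SENSITIVE_PATTERNS = {
    "ssh_key":      ["*/.ssh/id_*"],
    "cloud_creds":  ["*/.aws/credentials", "*/.config/gcloud/*"],
    "api_secrets":  ["*/.env", "*api_key*"],
    "browser_data": ["*/Cookies", "*/Login Data"],
}

def classify_exposure(readable_paths):
    """Map each path the agent process can read to the credential
    classes it matches - approximating what a compromised agent
    could harvest."""
    hits = {}
    for path in readable_paths:
        for label, patterns in SENSITIVE_PATTERNS.items():
            if any(fnmatch.fnmatch(path, pat) for pat in patterns):
                hits.setdefault(label, []).append(path)
    return hits
```

Running this over a listing of the agent user's home directory gives a rough blast-radius estimate for that deployment.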

The agent becomes what STRIKE calls a “digital agent” - an extension of the user’s identity with all associated permissions. Compromising the agent compromises the user. At scale, this means compromising entire organizations.

Three Critical Vulnerabilities

The exposure wouldn’t matter as much if the agents themselves were hardened. They weren’t.

CVE-2026-25253 (CVSS 8.8)
One-Click Remote Code Execution
The Control UI trusts URL parameters without validation. Cross-site WebSocket hijacking enables RCE via a single malicious link - even on instances bound to localhost. Attack completes in milliseconds.
CVE-2026-25157 (CVSS 7.8)
SSH Command Injection (macOS)
macOS companion app fails to sanitize SSH commands, allowing attackers to inject arbitrary shell commands through crafted connection strings.
CVE-2026-24763 (CVSS 8.8)
Docker Sandbox Escape
PATH manipulation allows escaping the containerized execution environment. Attackers gain access to the host system from within sandboxed code execution.

STRIKE’s scan revealed that 78% of exposed instances still display old branding - indicating they’re running versions released before these patches. Only 22% have updated.

The attack chain is simple: find an exposed instance, send a crafted WebSocket message or malicious link, achieve code execution, persist via the agent’s own scheduled task infrastructure. The entire process takes seconds.
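Defenders can run the first two steps of that chain against their own address space. A triage sketch, with a placeholder banner marker (the report does not publish the exact pre- and post-patch branding strings, so substitute your framework's real one):

```python
from dataclasses import dataclass

# Placeholder: the actual pre-patch branding string is not published
# in the report - substitute the real marker for your framework.
PRE_PATCH_MARKERS = ("legacy-branding",)
AGENT_PORT = 18789

@dataclass
class ScanResult:
    ip: str
    port: int
    banner: str

def triage(results):
    """Split scan hits on the agent port into likely-unpatched and
    updated instances, mirroring branding-based fingerprinting."""
    unpatched, updated = [], []
    for r in results:
        if r.port != AGENT_PORT:
            continue
        bucket = unpatched if any(m in r.banner for m in PRE_PATCH_MARKERS) else updated
        bucket.append(r)
    return unpatched, updated
```

Anything in the unpatched bucket should be treated as already compromised and handled per the credential-rotation guidance below.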

Nation-State Interest

The exposure data reveals something more concerning than opportunistic attacks: nation-state actors are actively hunting these systems.

STRIKE found correlations between exposed instance IPs and infrastructure previously attributed to five major threat actor groups:

• Kimsuky - North Korea
• APT28 - Russia / GRU
• Salt Typhoon - China / MSS
• Sandworm - Russia / GRU Unit 74455
• APT41 - China / state-affiliated

The geographic distribution reinforces this concern. Of the 42,900 exposed instances:

Exposed Instances by Region
• Alibaba Cloud: 45%
• China (total): 37%
• United States: 28%
• Europe: 18%
• Other (80 countries): 12%

Why would nation-state actors care about personal AI assistants? Because these aren’t just personal assistants anymore. Employees are connecting them to enterprise systems - email, calendars, documents, messaging platforms. An agent connected to corporate Slack or Google Workspace provides the same access as a compromised employee account, with better persistence and less visibility.

The Shadow AI Amplification

This is where the exposure crisis intersects with an existing enterprise security blind spot: shadow AI.

Reco Security’s analysis found that traditional security tools struggle to detect AI agent activity:

  • Endpoint security sees processes running but doesn’t understand agent behavior
  • Network tools see API calls but can’t distinguish legitimate automation from compromise
  • Identity systems see OAuth grants but don’t flag AI agent connections as unusual

When employees connect personal AI tools to corporate SaaS applications - Slack, Google Workspace, Microsoft 365 - the agent gains access to everything those integrations can reach. Messages, files, emails, calendar entries, OAuth tokens that enable lateral movement.
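Reviewing those OAuth grants can be partially automated. A sketch, assuming grant records exported from an identity provider; the field names, app-name markers, and scope strings are all illustrative assumptions:

```python
# All markers and scopes below are illustrative; adapt them to your
# identity provider's actual export format.
AGENT_APP_MARKERS = ("agent", "assistant", "autopilot")
HIGH_RISK_SCOPES = {"mail.read", "files.readwrite", "chat.write"}

def flag_agent_grants(grants):
    """Return OAuth grants that look like AI-agent connections holding
    high-risk scopes, queued for manual review or revocation."""
    flagged = []
    for g in grants:
        if not any(m in g["app_name"].lower() for m in AGENT_APP_MARKERS):
            continue
        risky = HIGH_RISK_SCOPES & set(g["scopes"])
        if risky:
            flagged.append({**g, "risky_scopes": sorted(risky)})
    return flagged
```

The marker-matching is deliberately crude; the point is to surface agent-shaped grants that identity tooling would otherwise treat as routine third-party apps.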

The Persistent Memory Problem

Agentic AI frameworks feature persistent memory - they remember context across sessions. Any data the agent accesses remains available indefinitely. If the agent is compromised through a malicious skill, prompt injection, or vulnerability exploit, attackers inherit everything the agent has ever seen.

The ClawHavoc incident demonstrated how quickly this can scale. Of 2,857 skills in the public marketplace, researchers confirmed 341 were malicious - roughly 12% of the entire registry. These skills used professional documentation and innocuous names to appear legitimate, then instructed users to run external code installing keyloggers or credential stealers.

The Attack Chain in Practice

Here’s how the exposure translates to compromise:

From Internet Scan to Full Compromise
1. Internet reconnaissance - attacker scans for port 18789, identifies exposed control interfaces
2. Version fingerprinting - branding string reveals version; 78% show pre-patch identifiers
3. Exploit delivery - WebSocket hijack or malicious link triggers CVE-2026-25253
4. Code execution - arbitrary commands run in agent context; milliseconds to completion
5. Credential harvesting - API keys, OAuth tokens, SSH keys extracted from agent configuration
6. Persistence establishment - attacker adds their own cron job using the agent's built-in scheduling
7. Lateral movement - harvested credentials enable pivot to connected systems, cloud accounts, messaging platforms

The entire attack executes without triggering traditional security alerts. No malware signatures. No network anomalies. Just an agent doing what agents do - executing commands and accessing data - with the attacker at the controls.

OWASP Agentic Top 10 Mapping

The exposure crisis touches multiple categories in the OWASP Top 10 for Agentic Applications (2026):

ASI02 - Tool Misuse · ASI04 - Supply Chain · ASI05 - Code Execution · ASI10 - Rogue Agents

ASI02 (Tool Misuse): Compromised agents use their legitimate tools - shell access, file operations, API calls - for malicious purposes. The tools work exactly as designed; the intent has changed.

ASI04 (Supply Chain): The ClawHavoc incident proved that agentic supply chains can be compromised at scale. 12% of a public skill registry was malicious.

ASI05 (Code Execution): Three CVEs leading to arbitrary code execution. When agents have shell access, every vulnerability is a potential RCE.

ASI10 (Rogue Agents): Once compromised, agents become persistent threat actors. They continue operating within their normal behavioral envelope while serving attacker objectives - making detection significantly harder than traditional intrusions.

Why This Matters Beyond This Incident

This isn’t a story about one vulnerable AI framework. It’s a preview of what happens when autonomous systems are deployed without operational security maturity.

The Architectural Pattern

Every agentic AI system shares the same fundamental architecture: standing permissions to tools, persistent state across sessions, autonomous execution without per-action approval. When one of these systems is exposed or compromised, the blast radius extends to everything it can reach.

The exposure problem will recur with every new agentic framework that prioritizes ease of deployment over secure defaults. The supply chain problem will recur with every marketplace that doesn’t verify package integrity. The nation-state interest will only increase as more valuable targets deploy AI agents connected to sensitive systems.

STRIKE emphasizes that agentic AI systems create persistent threats. Unlike one-time intrusions, compromised AI agents can be used as long-term attack nodes that repeatedly execute malicious tasks while appearing to be legitimate user activity. Traditional incident response - find the malware, remove it, rotate credentials - doesn’t account for an attacker embedded in an autonomous system that the user actively wants to keep running.

Immediate Actions

01. Audit Agent Deployments - Identify every agentic AI system in your environment, including shadow deployments by employees. Check binding configuration: anything on 0.0.0.0 is exposed. Run security audits where available.
02. Patch or Isolate - Update to the latest versions immediately. If you can't update, isolate the system behind a VPN or Tailscale. Block port 18789 at the perimeter. Don't expose agent control interfaces to the internet under any circumstances.
03. Rotate All Credentials - Treat any credentials configured in exposed agents as compromised. API keys, OAuth tokens, SSH keys - rotate everything. Assume attackers have harvested them.
04. Monitor for C2 Patterns - Compromised agents exhibit command-and-control behavior: unusual outbound connections, scheduled tasks created post-deployment, lateral API calls to connected services. Update threat models to include agent compromise.
05. Audit Connected Services - Review OAuth grants to AI agent applications in your SaaS environment. Identify which users have connected agents to corporate systems. Revoke permissions for unmanaged deployments.
06. Implement Runtime Monitoring - Traditional security tools don't understand agent behavior. Deploy behavioral monitoring that can detect anomalous tool usage, unexpected credential access, and deviation from baseline patterns.
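Steps 04 and 06 can start small: record an agent's destinations during known-good operation, then flag anything outside that baseline. A deliberately simple sketch - real monitoring needs richer features than destination hostnames:

```python
def detect_anomalies(baseline_hosts, events):
    """Flag outbound connections to destinations outside the agent's
    recorded baseline - a crude stand-in for behavioral monitoring.

    baseline_hosts: hostnames observed during known-good operation.
    events: iterable of (timestamp, destination_host) tuples.
    """
    baseline = set(baseline_hosts)
    return [(ts, host) for ts, host in events if host not in baseline]

# Example: the agent normally talks only to its LLM provider and Slack
baseline = {"api.openai.com", "slack.com"}
events = [("09:00", "slack.com"), ("09:02", "203.0.113.99")]
print(detect_anomalies(baseline, events))  # → [('09:02', '203.0.113.99')]
```

The same baseline-deviation pattern applies to scheduled tasks and tool invocations: anything the agent did not do before deployment day is worth a look.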

The Larger Lesson

Security researchers have been warning about agentic AI risks for over a year. The OWASP Agentic Top 10 codified the threat model. Academic papers documented the attack vectors. Red teams demonstrated the exploits.

None of it mattered until 42,900 agents showed up on Shodan.

The exposure crisis is a forcing function. It proves that agentic AI security is not theoretical. It proves that default configurations matter. It proves that supply chain integrity matters. It proves that nation-states are paying attention.

The Bottom Line

42,900 exposed instances. 15,200 vulnerable to remote code execution. 341 malicious packages in the supply chain. Five nation-state actors correlating with exposed infrastructure. This is what happens when autonomous AI systems are deployed without operational security maturity. The question for every organization is not whether you have agents in your environment - it’s whether you know where they are, what they can access, and who else might be controlling them.

STRIKE maintains a live dashboard at declawed.io tracking exposures globally, updated every 15 minutes. The number keeps climbing.


Rogue Security builds runtime behavioral security for agentic AI - detecting compromised agents, anomalous tool usage, and C2 patterns before they escalate to full infrastructure compromise. Learn more at rogue.security.