42,000 Exposed Agents: Anatomy of the First Agentic AI Mass Compromise
On Monday, February 9th, SecurityScorecard’s STRIKE Threat Intelligence team published findings that should alarm every organization deploying autonomous AI systems: 42,900 agentic AI instances are directly exposed to the internet across 82 countries - and 15,200 of them are vulnerable to remote code execution.
These aren’t chatbots. They’re autonomous agents with standing access to shell commands, file systems, API credentials, messaging platforms, and cloud infrastructure. When one gets compromised, the attacker doesn’t just steal data - they inherit everything the agent could do.
Nation-state actors have already noticed. STRIKE found exposed instance IPs correlating with infrastructure attributed to Kimsuky, APT28, Salt Typhoon, Sandworm, and APT41.
This is the first mass compromise event in the agentic AI era.
Two Weeks That Changed Everything
The exposure crisis didn’t emerge slowly. It exploded over a two-week period as a viral open-source AI agent went from obscurity to 135,000 GitHub stars - faster than security practices could adapt.
The Dangerous Default
The root cause is deceptively simple: insecure default configurations.
By default, agentic AI frameworks bind to 0.0.0.0:18789 - listening on all network interfaces, including the public internet. The secure configuration would be 127.0.0.1:18789, restricting access to localhost only.
Most users lack the technical knowledge to change this setting after installation. They follow the quickstart guide, see the agent working, and move on. The exposure is invisible until an attacker finds the IP address.
What Attackers Inherit
Unlike traditional application compromises, a compromised AI agent gives attackers something far more valuable than static data: standing permissions to act.
The agent becomes what STRIKE calls a “digital agent” - an extension of the user’s identity with all associated permissions. Compromising the agent compromises the user. At scale, this means compromising entire organizations.
Three Critical Vulnerabilities
The exposure wouldn’t matter as much if the agents themselves were hardened. They weren’t.
STRIKE’s scan revealed that 78% of exposed instances still display old branding - indicating they’re running versions released before these patches. Only 22% have updated.
The attack chain is simple: find an exposed instance, send a crafted WebSocket message or malicious link, achieve code execution, persist via the agent’s own scheduled task infrastructure. The entire process takes seconds.
Nation-State Interest
The exposure data reveals something more concerning than opportunistic attacks: nation-state actors are actively hunting these systems.
STRIKE found correlations between exposed instance IPs and infrastructure previously attributed to five major threat actor groups:
The geographic distribution reinforces this concern. Of the 42,900 exposed instances:
Why would nation-state actors care about personal AI assistants? Because these aren’t just personal assistants anymore. Employees are connecting them to enterprise systems - email, calendars, documents, messaging platforms. An agent connected to corporate Slack or Google Workspace provides the same access as a compromised employee account, with better persistence and less visibility.
The Shadow AI Amplification
This is where the exposure crisis intersects with an existing enterprise security blind spot: shadow AI.
Reco Security’s analysis found that traditional security tools struggle to detect AI agent activity:
- Endpoint security sees processes running but doesn’t understand agent behavior
- Network tools see API calls but can’t distinguish legitimate automation from compromise
- Identity systems see OAuth grants but don’t flag AI agent connections as unusual
When employees connect personal AI tools to corporate SaaS applications - Slack, Google Workspace, Microsoft 365 - the agent gains access to everything those integrations can reach. Messages, files, emails, calendar entries, OAuth tokens that enable lateral movement.
Agentic AI frameworks feature persistent memory - they remember context across sessions. Any data the agent accesses remains available indefinitely. If the agent is compromised through a malicious skill, prompt injection, or vulnerability exploit, attackers inherit everything the agent has ever seen.
The ClawHavoc incident demonstrated how quickly this can scale. Of 2,857 skills in the public marketplace, researchers confirmed 341 were malicious - roughly 12% of the entire registry. These skills used professional documentation and innocuous names to appear legitimate, then instructed users to run external code installing keyloggers or credential stealers.
The Attack Chain in Practice
Here’s how the exposure translates to compromise:
The entire attack executes without triggering traditional security alerts. No malware signatures. No network anomalies. Just an agent doing what agents do - executing commands and accessing data - with the attacker at the controls.
OWASP Agentic Top 10 Mapping
The exposure crisis touches multiple categories in the OWASP Top 10 for Agentic Applications (2026):
ASI02 (Tool Misuse): Compromised agents use their legitimate tools - shell access, file operations, API calls - for malicious purposes. The tools work exactly as designed; the intent has changed.
ASI04 (Supply Chain): The ClawHavoc incident proved that agentic supply chains can be compromised at scale. 12% of a public skill registry was malicious.
ASI05 (Code Execution): Three CVEs leading to arbitrary code execution. When agents have shell access, every vulnerability is a potential RCE.
ASI10 (Rogue Agents): Once compromised, agents become persistent threat actors. They continue operating within their normal behavioral envelope while serving attacker objectives - making detection significantly harder than traditional intrusions.
Why This Matters Beyond This Incident
This isn’t a story about one vulnerable AI framework. It’s a preview of what happens when autonomous systems are deployed without operational security maturity.
Every agentic AI system shares the same fundamental architecture: standing permissions to tools, persistent state across sessions, autonomous execution without per-action approval. When one of these systems is exposed or compromised, the blast radius extends to everything it can reach.
The exposure problem will recur with every new agentic framework that prioritizes ease of deployment over secure defaults. The supply chain problem will recur with every marketplace that doesn’t verify package integrity. The nation-state interest will only increase as more valuable targets deploy AI agents connected to sensitive systems.
STRIKE emphasizes that agentic AI systems create persistent threats. Unlike one-time intrusions, compromised AI agents can be used as long-term attack nodes that repeatedly execute malicious tasks while appearing to be legitimate user activity. Traditional incident response - find the malware, remove it, rotate credentials - doesn’t account for an attacker embedded in an autonomous system that the user actively wants to keep running.
Immediate Actions
The Larger Lesson
Security researchers have been warning about agentic AI risks for over a year. The OWASP Agentic Top 10 codified the threat model. Academic papers documented the attack vectors. Red teams demonstrated the exploits.
None of it mattered until 42,900 agents showed up on Shodan.
The exposure crisis is a forcing function. It proves that agentic AI security is not theoretical. It proves that default configurations matter. It proves that supply chain integrity matters. It proves that nation-states are paying attention.
42,900 exposed instances. 15,200 vulnerable to remote code execution. 341 malicious packages in the supply chain. Five nation-state actors correlating with exposed infrastructure. This is what happens when autonomous AI systems are deployed without operational security maturity. The question for every organization is not whether you have agents in your environment - it’s whether you know where they are, what they can access, and who else might be controlling them.
STRIKE maintains a live dashboard at declawed.io tracking exposures globally, updated every 15 minutes. The number keeps climbing.
Rogue Security builds runtime behavioral security for agentic AI - detecting compromised agents, anomalous tool usage, and C2 patterns before they escalate to full infrastructure compromise. Learn more at rogue.security.