February 26, 2026 by Rogue Security Research
offensive-ai · MCP · ARXON · FortiGate · threat-intelligence · ASI04 · ASI05 · kill-chain · AI-augmented-attacks

ARXON: When Your Adversary Has an AI Agent Too

We spend a lot of time thinking about how to protect AI agents from attacks. Prompt injection defenses. Memory integrity checks. Tool permission boundaries.

Meanwhile, a Russian-speaking threat actor just demonstrated what happens when the attacker builds their own AI agent first.

Between January 11 and February 18, 2026, Amazon Threat Intelligence tracked a campaign that compromised over 600 FortiGate devices across 55 countries. No zero-days. No sophisticated exploits. Just exposed management ports, weak credentials - and an AI-powered attack infrastructure that let a single operator work at the scale of a full red team.

600+ devices compromised · 55 countries affected · 2 LLM providers used · 1 operator (estimated)
“It’s like an AI-powered assembly line for cybercrime, helping less skilled workers produce at scale.”
CJ Moses, CISO of Amazon Integrated Security

The Exposed Infrastructure

The discovery came through routine threat intelligence operations - and the attacker’s own operational security failures.

A misconfigured server at 212.11.64[.]250 was found hosting over 1,400 files across 139 subdirectories: CVE exploit code, stolen FortiGate configurations, Nuclei scanning templates, Veeam credential extraction tools, BloodHound collection data, and - critically - the full source code and operational logs of a custom Model Context Protocol server named ARXON.

A historical review revealed a previous exposure in December 2025 containing similar tooling alongside victim data from a major Asian media company. Between December and February, the attacker evolved from using HexStrike (an open-source offensive AI framework) to a fully custom toolkit.

Eight weeks. From off-the-shelf to bespoke AI-augmented attack infrastructure.

December 2025
First Exposure
Server hosts HexStrike (open-source offensive MCP), BloodHound data, and Claude Code settings pre-approving execution of Impacket, Metasploit, and hashcat.
January 11, 2026
Campaign Begins
Systematic scanning of FortiGate management interfaces begins. Custom ARXON MCP server and CHECKER2 orchestrator deployed.
February 18, 2026
600+ Devices Compromised
Amazon Threat Intelligence completes analysis. Server contains attack plans, victim configs, and source code for entire toolkit.

The AI-Powered Kill Chain

What makes this campaign significant isn’t any single technique. Every tool observed - BloodHound, Nuclei, Impacket - is well-documented. What sets this activity apart is the integration of LLMs at every stage of the kill chain.

ARXON Attack Chain - AI Integration Points

1. Mass Scanning - CHECKER2 orchestrates parallel VPN scanning across 2,516 targets in 106 countries via Docker containers
2. Credential-Based Access - Authentication attempts using commonly reused credentials against exposed management ports (443, 8443, 10443, 4443)
3. Config Extraction - Full device configurations extracted: SSL-VPN users, LDAP settings, network topology, encrypted passwords
4. ARXON Analysis (AI-powered) - Recon data fed to the MCP server; DeepSeek generates attack plans with prioritized targets and estimated success rates
5. Claude Code Exploitation (AI-powered) - Claude’s coding agent produces vulnerability assessments during live intrusions and executes offensive tools
6. Domain Compromise - DCSync attacks extract NTLM hashes, lateral movement via pass-the-hash, NTLM relay attacks
7. Backup Targeting - Veeam servers targeted for credential extraction, positioning for ransomware deployment
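The entry point in steps 1-2 is nothing more exotic than management interfaces answering on the internet. A minimal defensive sketch, checking your own edge device for the ports this campaign scanned (the host is a placeholder - only point this at infrastructure you own):

```python
# Check whether a FortiGate-style management interface accepts TCP
# connections on the ports observed in this campaign. Defensive use
# against your own devices only; the port list comes from the report.
import socket

MGMT_PORTS = [443, 8443, 10443, 4443]

def exposed_mgmt_ports(host: str, ports=None, timeout: float = 2.0) -> list:
    """Return the subset of management ports that accept a TCP connection."""
    open_ports = []
    for port in (ports or MGMT_PORTS):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

If this returns anything when run from outside your perimeter, the first remediation in the Immediate Actions section applies.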

Inside ARXON: The Offensive MCP

The ARXON Model Context Protocol server is the backbone of this operation. It serves a dual role:

As an analysis platform: ARXON ingests per-target reconnaissance data, calls DeepSeek to generate attack plans, and stores results in a persistent knowledge base that grows with each target. Every compromised device makes the next attack more informed.

As a toolkit: ARXON contains scripts to directly modify victim infrastructure - batch SSH-based FortiGate VPN account creation, user provisioning, and automated Domain Admin credential validation.
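ARXON’s source was not published in full, but the analysis-platform role it describes can be sketched in miniature: recon accumulates per target, and the full corpus is replayed as planning context for every new target. All names below are our own, not ARXON’s.

```python
# Illustrative sketch of a growing per-target knowledge base, the pattern
# Amazon describes in ARXON: every compromised device makes the next
# attack more informed. Class and field names are hypothetical.
import json
import sqlite3

class TargetKnowledgeBase:
    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS recon "
            "(target TEXT PRIMARY KEY, data TEXT)"
        )

    def record(self, target: str, recon: dict) -> None:
        """Store recon output for one target (scan results, configs, creds)."""
        self.db.execute(
            "INSERT OR REPLACE INTO recon VALUES (?, ?)",
            (target, json.dumps(recon)),
        )
        self.db.commit()

    def planning_context(self) -> list:
        """Everything learned so far - fed to the LLM alongside each new
        target, so prior compromises shape every new attack plan."""
        rows = self.db.execute("SELECT target, data FROM recon").fetchall()
        return [{"target": t, **json.loads(d)} for t, d in rows]
```

The detection implication: this architecture means exfiltrated configs are not an endpoint but an input, so a single leaked device config raises risk for every adjacent target.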

ARXON MCP (Python)

Model Context Protocol server that bridges LLMs to the intrusion workflow. Processes scan results, invokes DeepSeek for attack planning, maintains growing knowledge base across targets, and hosts scripts for modifying victim infrastructure.

CHECKER2 (Go)

Docker-based orchestrator for parallel VPN scanning and target processing. Ingests stolen VPN configs, attempts connections, scans internal networks, and passes results to ARXON. Processed 2,516 targets across 106 countries in parallel batches.

The most revealing artifact was a Claude Code settings file from the December exposure. It pre-approved Claude to autonomously execute:

Claude Code settings.local.json - Pre-Approved Tools
// Impacket suite
secretsdump.py
psexec.py
wmiexec.py

// Exploitation frameworks
Metasploit
hashcat

// Hardcoded credentials for Asian media company
[REDACTED]
The Critical Distinction

This isn’t Claude Code being jailbroken. This is an attacker configuring their own Claude Code instance for offensive operations. The settings file legitimately grants execution permissions - because the attacker controls the environment. No guardrails to bypass when you’re the administrator.
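The same settings files can be read defensively. A small audit sketch: scan an agent permission file for pre-approved entries that reference known offensive tooling. The `permissions.allow` layout follows Claude Code’s settings format; the watchlist below is our own, seeded from the tools in the December exposure.

```python
# Flag pre-approved agent permissions that reference offensive tooling.
# The permissions.allow structure mirrors Claude Code settings.local.json;
# the OFFENSIVE_TOOLS watchlist is illustrative, not exhaustive.
import json

OFFENSIVE_TOOLS = ["secretsdump", "psexec", "wmiexec", "metasploit",
                   "msfconsole", "hashcat", "mimikatz"]

def flag_preapprovals(settings_json: str) -> list:
    """Return allow-list entries that mention a known offensive tool."""
    settings = json.loads(settings_json)
    allowed = settings.get("permissions", {}).get("allow", [])
    return [entry for entry in allowed
            if any(tool in entry.lower() for tool in OFFENSIVE_TOOLS)]

sample = '{"permissions": {"allow": ["Bash(secretsdump.py:*)", "Bash(ls:*)"]}}'
# flags the secretsdump entry, passes the benign one
```

Worth running against any agent configuration files found on endpoints in your estate - an allow-list like the one above is an indicator in its own right.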

The Dual-Model Workflow

Amazon Threat Intelligence identified the attacker using multiple AI services in complementary roles:

DeepSeek - Attack Planning
[PLAN] Generates comprehensive attack methodologies
[PLAN] Step-by-step exploitation instructions
[PLAN] Expected success rates and time estimates
[PLAN] Prioritized task trees from recon data
[PLAN] References academic research on offensive AI
Claude - Execution and Analysis
[EXEC] Produces vulnerability assessments during intrusions
[EXEC] Executes offensive tools on victim systems
[EXEC] Generates custom reconnaissance tooling
[EXEC] Assists pivoting within compromised networks
[EXEC] Documents operational status in real-time
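The division of labor above is a routing pattern, not anything model-specific. A stub sketch of the architecture, with the model clients replaced by placeholders (in the observed campaign the planner was DeepSeek and the executor was Claude Code, each reached through its own API):

```python
# Sketch of a dual-model dispatcher: planning-phase tasks go to one model,
# execution-phase tasks to another. The lambdas stand in for real API
# clients; all names here are illustrative.

def make_router(planner, executor):
    """Build a router mapping workflow phase -> model callable."""
    phases = {
        "plan": planner,      # attack methodology, task trees, estimates
        "execute": executor,  # live assessments, tooling, pivoting help
    }
    def route(phase: str, task: str) -> str:
        return phases[phase](task)
    return route

# Stubs standing in for real API clients:
route = make_router(lambda t: f"[PLAN] {t}", lambda t: f"[EXEC] {t}")
```

The point of the pattern is substitutability: either slot can be refilled with whichever provider is most capable or most permissive for that phase.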

In one observed instance, the attacker submitted the complete internal topology of an active victim - IP addresses, hostnames, confirmed credentials, and identified services - and requested a step-by-step plan to compromise additional systems they couldn’t access with existing tools.

The AI produced technically accurate command sequences. What the attacker couldn’t do was adapt when conditions differed from the plan.

The Skill Gap Pattern

This is where the campaign reveals something important about AI-augmented threats. Amazon’s assessment:

Threat Actor Profile

Skill level: Low-to-medium baseline technical capability, significantly augmented by AI. The actor can run standard offensive tools and automate routine tasks but struggles with exploit compilation, custom development, and creative problem-solving during live operations.

Key finding: The threat actor largely failed when attempting to exploit anything beyond the most straightforward, automated attack paths. Their documentation records repeated failures: targeted services were patched, required ports were closed, vulnerabilities didn’t apply.

Rather than persisting against hardened targets, the attacker moved on to softer victims. AI augmentation provided scale and efficiency - not deeper technical skill.

The attack plans reference academic research on offensive AI agents. The attacker is following emerging literature on AI-assisted penetration testing. But when the AI’s output doesn’t work, they can’t debug it.

This is the AI-augmented threat model: not supervillain hackers with AI superpowers, but average hackers with AI multipliers.

Geographic Impact

The campaign’s targeting was opportunistic rather than sector-specific - consistent with automated mass scanning for vulnerable appliances.

Compromised Device Clusters by Region
South Asia - High
Latin America - Medium
West Africa - Medium
Southeast Asia - Medium
Northern Europe - Low
Caribbean - Low

Confirmed compromises include an industrial gas company in Asia-Pacific, a telecom provider in Turkey, and a major media company. Additional reconnaissance targeted organizations in South Korea, Egypt, Vietnam, and Kenya, with code specifically developed for a medical equipment manufacturer.

What This Means for Defenders

The ARXON campaign validates several predictions we’ve been tracking:

1. Offensive MCP is here. We’ve written extensively about MCP security risks from a defensive perspective. ARXON demonstrates that attackers are building their own MCP infrastructure - not to compromise your agents, but to power their own.

2. The Promptware Kill Chain is operational. Bruce Schneier’s promptware framework described AI-augmented attack chains in theory. This campaign shows them in practice: reconnaissance feeding to LLMs, LLMs generating attack plans, attack plans executed automatically, results feeding back to LLMs.

3. AI democratizes offense faster than defense. A single operator achieved scale that “would have previously required a significantly larger and more skilled team.” The asymmetry favors attackers who can adopt new tools without procurement cycles, compliance reviews, or change management.

Relevant ASI categories: ASI04 (Supply Chain), ASI05 (Code Execution), ASI08 (Cascading Failures)

The Defense Paradox

Here’s the uncomfortable reality: the attacker succeeded not through AI sophistication but through fundamental security gaps.

  • Exposed management interfaces
  • Weak credentials
  • Single-factor authentication
  • Password reuse between VPN and domain accounts
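The last gap - password reuse between VPN and domain accounts - is the one that turned edge-device access into domain compromise. A minimal audit sketch, assuming you can obtain comparable password hashes from both stores (in practice FortiGate and AD hash formats differ, so a real audit cracks-and-compares or uses a password auditing tool):

```python
# Find accounts whose password hash is identical in two credential stores -
# the VPN-to-domain reuse this campaign pivoted on. Assumes both stores
# yield comparable hashes; inputs and field names are illustrative.

def reused_credentials(vpn_hashes: dict, domain_hashes: dict) -> list:
    """Accounts present in both stores with matching password hashes."""
    return sorted(user for user, h in vpn_hashes.items()
                  if domain_hashes.get(user) == h)
```

Any hit here - especially on a privileged account - is a direct replay of this campaign’s pivot path and should be rotated immediately.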

AI didn’t enable novel attacks. AI enabled old attacks at new scale.

“No exploitation of FortiGate vulnerabilities was observed - instead, this campaign succeeded by exploiting exposed management ports and weak credentials with single-factor authentication, fundamental security gaps that AI helped an unsophisticated actor exploit at scale.”
Amazon Threat Intelligence

The attacker’s operational notes acknowledge that key infrastructure targets were “well-protected” with “no vulnerable exploitation vectors.” When they encountered hardened environments, they moved on.

Strong fundamentals still work. The organizations that weren’t compromised weren’t running next-generation AI defense systems. They were running basic hygiene: patched devices, strong credentials, MFA, network segmentation.

Immediate Actions

01
Audit Edge Device Exposure
Ensure management interfaces for FortiGate, Palo Alto, and other perimeter devices are not exposed to the internet. If remote administration is required, restrict access to known IP ranges or use a bastion host.
02
Enforce MFA Everywhere
Implement multi-factor authentication for all administrative and VPN access. This single control would have blocked the credential-based access that enabled this entire campaign.
03
Eliminate Password Reuse
Audit for password reuse between FortiGate VPN credentials and Active Directory domain accounts. Enforce unique, complex passwords for all accounts - especially Domain Administrator.
04
Monitor for Post-Exploitation
Watch for DCSync operations (Event ID 4662), new scheduled tasks mimicking Windows services, unusual remote management connections from VPN pools, and LLMNR/NBT-NS poisoning artifacts.
05
Protect Backup Infrastructure
Isolate Veeam and other backup servers from general network access. Review and rotate service account credentials. Backup compromise positions attackers to destroy recovery capabilities before ransomware deployment.
06
Anticipate AI-Augmented Volume
Traditional threat models assumed attacker constraints on time and skill. AI removes both. Expect higher volumes of technically adequate attacks against your entire attack surface simultaneously.
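The DCSync monitoring in step 04 can be made concrete. A DCSync pull requests the directory replication extended rights, so the check is: Event ID 4662, replication GUIDs in the event’s property list, subject account that is not a domain controller. A sketch, assuming event parsing happens upstream in your SIEM and using illustrative field names:

```python
# Flag Event ID 4662 events that request directory replication rights
# from a non-DC account - the DCSync signature from step 04. The GUIDs
# are the documented DS-Replication-Get-Changes extended rights.
DS_REPLICATION_GUIDS = {
    "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes
    "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes-All
}

def is_suspicious_dcsync(event: dict, dc_accounts: set) -> bool:
    """True for a 4662 event carrying replication rights from a non-DC."""
    if event.get("EventID") != 4662:
        return False
    guids = {g.lower().strip("{}") for g in event.get("Properties", [])}
    subject = event.get("SubjectUserName", "")
    return bool(guids & DS_REPLICATION_GUIDS) and subject not in dc_accounts
```

Legitimate replication comes from DC machine accounts, so the allow-list of DC accounts keeps the false-positive rate low.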

The Bigger Picture

The ARXON campaign is a preview of the threat landscape we’re entering. Not AI systems being attacked - AI systems doing the attacking.

The attacker didn’t need to jailbreak anyone’s AI. They configured their own. They didn’t need to bypass guardrails. They’re the administrator. They didn’t need sophisticated exploits. They had AI to parallelize basic ones.

The Strategic Implication

Every conversation about “AI safety” has focused on preventing AI systems from being misused or manipulated. The ARXON campaign shows the simpler path: threat actors building their own AI systems designed for offense from the ground up. No jailbreak required. No guardrails to bypass. Just capable tools in adversarial hands.

The dual-model approach observed - using whichever model is most permissive or capable for a given task - is likely to become a recurring pattern. Attackers will comparison-shop across AI providers the same way they comparison-shop across bulletproof hosting providers.

For this low-to-average-skilled actor, language models did exactly one thing: they removed the constraint on how many targets a single person can work at any given time. That’s not a minor efficiency gain. That’s a fundamental shift in the economics of cybercrime.

Defending networks will increasingly depend on matching the speed of workflows like this one as AI continues to be integrated into offensive operations.

The question isn’t whether your adversaries will have AI agents. It’s whether your defenses assume they already do.


This analysis is based on public research from Amazon Threat Intelligence and Cyber and Ramen. Indicators of compromise and additional technical details are available in the original reports.


Rogue Security builds runtime behavioral security for agentic AI - detecting both defensive gaps that AI-augmented attackers exploit and offensive AI patterns in your environment. Learn more at rogue.security.