
A single low-skill operator compromised 600+ FortiGate devices across 55 countries in 38 days using DeepSeek and Claude. Simultaneously, the Cline supply chain attack and Claude Code vulnerabilities prove that AI amplifies offense by automating technical bottlenecks while defensive bottlenecks remain human-dependent. The asymmetry is temporal: offensive AI is deployed and maturing now; defensive AI is 12-24 months from enterprise deployment.

Tags: offensive-ai, security, threat-intelligence, asymmetry, fortigate | 6 min read | Feb 26, 2026

# AI Democratized Offense Before Defense: An Amateur Achieved State-APT Scale

## Key Takeaway

AI augmentation provides a larger multiplier for offensive operations than defensive ones. A single low-skill operator with API access to frontier LLMs achieved operational scale (600+ devices, 55 countries, 38 days) that would have "previously required a significantly larger and more skilled team" according to AWS. The offense-defense asymmetry is not about novel capabilities—it is about scale multipliers. Offensive AI automates the technical bottleneck; defensive AI must change organizational behavior, which AI cannot yet do. This asymmetry is temporary but severe: the transition period (now through 2028) favors attackers.

## The FortiGate Case: One Amateur, 600 Devices, 55 Countries

[AWS Threat Intelligence documented the operational scale achieved by a single financially motivated operator](https://aws.amazon.com/blogs/security/ai-augmented-threat-actor-accesses-fortigate-devices-at-scale/) who was explicitly assessed as having "low-to-average" technical skill. This was not a state-sponsored APT team. This was one person or a small group using AI to multiply their capability.

### The Attack Timeline and Scale

  • 38 days of operations
  • 600+ FortiGate devices compromised
  • 55 countries targeted
  • 2,516 targets processed in a single containerized batch run across 106 countries
  • Post-exploitation achieved: DCSync attacks extracting complete NTLM credential databases from multiple Active Directory environments

The attacker's tooling matured within two months:

  • December 2025: Semi-manual operations using HexStrike
  • February 2026: Fully automated ARXON/CHECKER2 pipeline with minimal human approval

This two-month progression from semi-manual to fully automated mirrors legitimate DevOps automation, applied to offensive operations.

### How AI Amplified Offense

[DeepSeek's specific role in the pipeline](https://gbhackers.com/deepseek-and-claude-ai/):

  1. Ingested FortiGate backup configurations → structured attack plans
  2. Identified high-value targets → Oracle databases, biometric devices, Domain Admin paths
  3. Prioritized exploitation paths → optimal sequences for privilege escalation

[Claude's role as coding agent](https://aws.amazon.com/blogs/security/ai-augmented-threat-actor-accesses-fortigate-devices-at-scale/):

  1. Generated vulnerability assessment reports during live intrusions
  2. Automated exploitation scripts (Impacket, Metasploit modules)
  3. Generated hash-cracking utilities with "minimal human approval"
  4. Enabled parallel VPN scanning across containerized batches

The custom ARXON MCP server orchestrated the entire pipeline. The attacker achieved what AWS assessed would have "previously required a significantly larger and more skilled team" through three specific mechanisms:

  1. Automation of reconnaissance (DeepSeek analyzing configs)
  2. Automation of exploit generation (Claude coding agent)
  3. Parallel execution at scale (containerized batch processing)

## The Cline Supply Chain Attack: Automated Attack, Manual Defense

[The Cline CLI compromise](https://snyk.io/blog/cline-supply-chain-attack-prompt-injection-github-actions/) reveals a deeper asymmetry: offensive operations can be fully automated while defensive responses remain fundamentally manual.

### Offensive Side (Automated)

  1. Prompt injection against AI triage bot in CI/CD
  2. GitHub Actions cache poisoned with stolen credentials
  3. Malicious package published to npm registry
  4. Auto-update mechanism silently deployed compromise to ~4,000 machines in 8 hours

Total offensive effort: Understanding one novel technique (prompt injection against AI bot) and automating known attack methods.

### Defensive Side (Manual)

  1. Snyk discovers the compromise
  2. Cline team rotates credentials (initial rotation incomplete)
  3. Cline publishes remediation version
  4. npm revokes the malicious package
  5. Users must manually update or are at risk
  6. Organizations must identify affected developers and rotate their credentials

Each defensive step required human judgment, coordination, and error-correction (the initial credential rotation failed). The offensive operation required no human interaction after the initial deployment.
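At least one of the manual defensive steps, checking whether a machine still runs the compromised version, can be scripted. A minimal sketch, assuming the npm `package-lock.json` v2/v3 layout; the package name and version string below are placeholders, not the real indicators of compromise from the advisory:

```python
import json

# Placeholder indicator list: a real advisory publishes the exact
# compromised package names and versions. These are NOT the real IOCs.
COMPROMISED = {"cline": {"9.9.9"}}

def audit_lockfile(lockfile_path: str) -> list[str]:
    """Return findings for known-bad pinned versions in an npm
    package-lock.json (v2/v3 'packages' layout)."""
    with open(lockfile_path) as f:
        lock = json.load(f)
    findings = []
    for path, meta in lock.get("packages", {}).items():
        # Keys look like "node_modules/<name>"; the root package is "".
        name = path.rsplit("node_modules/", 1)[-1] if path else lock.get("name", "")
        version = meta.get("version")
        if version and version in COMPROMISED.get(name, set()):
            findings.append(f"{name}@{version} matches a published compromise indicator")
    return findings
```

Run across a fleet of developer checkouts, a check like this moves the "users must manually update" step closer to the attacker's level of automation.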

## Claude Code Vulnerabilities: Zero-Interaction Attack Vector

[Check Point Research disclosed CVE-2026-21852](https://research.checkpoint.com/2026/rce-and-api-token-exfiltration-through-claude-code-project-files-cve-2025-59536/): API key exfiltration before any user trust dialog. The attack requires:

Attacker effort: craft a malicious `.claude/settings.json` file in a repository.

Defender effort:

  1. Patch Claude Code (requires a release cycle)
  2. Audit all repositories for malicious configurations
  3. Rotate all potentially compromised API keys
  4. Identify all processes that might have used compromised keys
  5. Update organizational policies for configuration file reviews

The attack is instant; the defense requires weeks of organizational effort.
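Of the defender steps, the repository audit is the one that automates cleanly. A hedged sketch that walks checked-out repositories and flags `.claude/settings.json` files containing keys that can influence command execution; the indicator set below is an assumption and should be replaced with the list from the vendor advisory:

```python
import json
from pathlib import Path

# Assumed indicator set: top-level settings keys that can trigger command
# execution or leak environment state. Tune against the vendor advisory.
SUSPICIOUS_KEYS = {"hooks", "apiKeyHelper", "env"}

def scan_repos(root: str) -> list[tuple[str, set[str]]]:
    """Walk a directory of checked-out repositories and flag any
    .claude/settings.json whose top-level keys hit the indicator set."""
    hits = []
    for settings in Path(root).rglob(".claude/settings.json"):
        try:
            data = json.loads(settings.read_text())
        except (json.JSONDecodeError, OSError):
            # An unreadable settings file is itself worth a human look.
            hits.append((str(settings), {"<unparseable>"}))
            continue
        flagged = set(data) & SUSPICIOUS_KEYS
        if flagged:
            hits.append((str(settings), flagged))
    return hits
```

This does not replace patching or key rotation, but it compresses the "audit all repositories" step from weeks of manual review to a scheduled job.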

## The Asymmetry: What AI Automates vs What It Cannot

Offensive operations automated by AI:

  • Reconnaissance and target prioritization
  • Exploit code generation
  • Lateral movement planning
  • Post-exploitation automation
  • Credential management at scale

Defensive operations AI cannot automate:

  • Organizational change (implementing MFA, disabling exposed ports)
  • Human judgment about risk tolerance
  • Credential rotation procedures (high-touch, requires verification)
  • Policy creation and enforcement
  • Incident response coordination

The FortiGate attacker "consistently failed against hardened environments and moved on." This proves that the defensive bottleneck is organizational implementation (MFA, port isolation, credential hygiene), not technical capability. AI cannot solve organizational change problems.

## The Evidence: Scale Multiplier for Single Operators

| Metric | Pre-AI Baseline | AI-Augmented | Multiplier |
|--------|-----------------|--------------|------------|
| Devices compromised | ~10-50 | 600+ | 12-60x |
| Countries targeted | 1-5 | 55 | 11-55x |
| Targets processed per batch | 10-100 | 2,516 | 25-250x |
| Days to achieve scale | 180-365 | 38 | 5-10x faster |
| Required team size | 10+ | 1-3 | 3-10x smaller |

The per-target cost of attack approaches zero when automated. This fundamentally changes the target selection calculus: previously unprofitable targets (mid-market organizations, specific verticals) become economically viable.
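The cost calculus can be made concrete with toy numbers (every dollar figure below is an illustrative assumption, not sourced data):

```python
# Illustrative only: the figures are assumptions, not sourced data.
def cost_per_target(fixed_tooling_cost: float, marginal_cost: float, n_targets: int) -> float:
    """Average cost per target: fixed tooling cost amortized over n targets,
    plus the marginal (per-target) cost."""
    return fixed_tooling_cost / n_targets + marginal_cost

# Manual operator: high per-target labor, little amortizable tooling.
manual = cost_per_target(fixed_tooling_cost=0.0, marginal_cost=500.0, n_targets=25)

# AI-augmented: build the pipeline once, then near-zero marginal cost
# per item in a containerized batch (2,516 targets, per the AWS report).
automated = cost_per_target(fixed_tooling_cost=2_000.0, marginal_cost=0.50, n_targets=2_516)

assert automated < manual / 100  # the target-selection calculus flips
```

Whatever the true figures, the structure holds: once the fixed cost is amortized over thousands of targets, previously unprofitable victims become economically viable.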

## Defensive AI: Still in Development Phase

[Goodfire's 50% hallucination reduction](https://www.goodfire.ai/research/interpretability-for-alzheimers-detection) and interpretability tools represent early-stage defensive AI capabilities. Theoretical potential:

  • Detect anomalous model behavior (feature-level intrusion detection)
  • Identify adversarial prompts before execution
  • Reduce false positives in security alerts
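As a sense of how far current tooling is from that potential, the second capability today often amounts to little more than pattern matching. A deliberately crude sketch; the patterns are illustrative assumptions, not a production indicator set:

```python
import re

# Toy heuristic only: production systems use trained classifiers and
# context, not a keyword list. Patterns here are illustrative.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"exfiltrate",
]

def flag_prompt(prompt: str) -> bool:
    """Return True when a prompt matches any crude injection indicator."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

A keyword list like this is trivially evaded by paraphrase, which is precisely why feature-level interpretability tooling is the interesting defensive direction, and precisely why it being 12-24 months out matters.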

Current reality: These tools are 12-24 months from enterprise deployment. [Interpretability research is shifting from ambitious mechanistic understanding to pragmatic enterprise tools](https://www.lesswrong.com/posts/jGuayXZo2sDnzwvRR/interpretability-research-update), meaning the defensive tools being developed now are optimized for compliance, not for detecting adversarial AI use.

The temporal gap between offensive AI deployment (now) and defensive AI maturity (2027-2028) is the strategic risk window.

## The Organizational Implementation Problem

The FortiGate attacker's failure against hardened environments shows that the defensive bottleneck is organizational implementation, not technical capability. Hardened environments implement:

  1. Multi-factor authentication (MFA)
  2. Management port isolation (not exposed to internet)
  3. Principle of least privilege (restricted credentials)
  4. Network segmentation (limited lateral movement)

These defenses are not novel. They are well understood. Yet 71% of organizations are unprepared for agentic AI security (Dark Reading 2026 survey). The gap is not capability; it is organizational will. No AI tool can, on its own:

  • Mandate MFA adoption across 1,000 employees
  • Change legacy systems to isolate management interfaces
  • Enforce credential hygiene policies
  • Coordinate incident response across business units

## Security Team Implications: Raising the Floor vs Detecting Attacks

The asymmetry creates a strategic shift in defensive priorities:

Don't: Invest primarily in AI-powered detection systems that try to identify sophisticated attacks. The FortiGate case proves attackers will just move on to easier targets.

Do: Invest in raising the security baseline across the entire organization:

  1. MFA is mandatory, not optional
  2. Management ports are never internet-exposed
  3. Credential management is automated and audited
  4. Network segmentation makes lateral movement expensive

AI-augmented attackers target the easiest vulnerabilities at massive scale. The defense is not smarter detection—it is reducing the number of easy targets available.
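Baseline rules like these can be expressed as policy-as-code, so "raising the floor" becomes continuously auditable rather than a one-time project. A minimal sketch, assuming a hypothetical asset-inventory schema (the field names are made up for illustration):

```python
# Hypothetical asset schema: field names are assumptions, not a real
# inventory format. Missing fields fail closed.
BASELINE = {
    "mfa_enforced": lambda a: a.get("mfa_enforced") is True,
    "mgmt_port_private": lambda a: not a.get("mgmt_port_internet_exposed", True),
    "creds_rotated": lambda a: a.get("days_since_credential_rotation", 999) <= 90,
    "segmented": lambda a: a.get("network_segment") not in (None, "flat"),
}

def baseline_failures(asset: dict) -> list[str]:
    """Return the names of baseline rules this asset fails."""
    return [name for name, check in BASELINE.items() if not check(asset)]

# An internet-exposed device with no MFA on a flat network fails every rule:
fortigate = {"mfa_enforced": False, "mgmt_port_internet_exposed": True,
             "network_segment": "flat"}
assert baseline_failures(fortigate) == [
    "mfa_enforced", "mgmt_port_private", "creds_rotated", "segmented"
]
```

Failing closed on missing fields mirrors the article's point: an asset nobody bothered to inventory correctly is exactly the easy target an automated attacker finds first.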

## What This Means for Practitioners

For security teams:

  1. Assume AI-augmented attackers operate at 10-50x the scale of traditional operators
  2. Focus on floor-raising, not threat detection: The basic hygiene (MFA, credential isolation, port isolation) provides asymmetric defensive value
  3. Prepare for volume: A single attacker can now compromise hundreds of devices. Incident response must be automated and scalable
  4. Timeline is urgent: The window until defensive AI reaches parity (2027-2028) is when AI-augmented attacks will be most effective

For ML engineers building security tools:

  1. Offensive AI is automated; defensive responses should be too
  2. Focus on automating organizational change, not just detection
  3. Integration into human workflows is critical: Defensive AI that generates alerts humans must interpret will lose to automated attacks

For IT leadership:

  1. Budget for MFA, port isolation, and credential management NOW
  2. Assume legacy exceptions ("we need port 8443 open for this legacy app") create liability in an AI-augmented threat landscape
  3. Incident response playbooks must assume 10x the alert volume and response windows compressed by half

The FortiGate case proves that the age of the lone genius attacker is not coming back. The age of the AI-augmented amateur with state-APT-level scale is already here.


Cross-Referenced Sources

7 sources were cross-referenced to produce this analysis.