
Russian Hacker Uses AI Chatbots to Breach 600+ Enterprise Firewalls in 55 Countries

A Russian-speaking threat actor used commercial generative AI tools to compromise more than 600 FortiGate firewall devices across 55 countries, demonstrating how artificial intelligence is dramatically lowering the barrier to entry for large-scale cyberattacks. The campaign, which ran from January 11 to February 18, 2026, exploited basic security weaknesses rather than sophisticated technical vulnerabilities—allowing a “low-to-medium-skilled” attacker to operate at a scale previously requiring nation-state resources.

What Happened: AI-Powered Firewall Compromises at Scale

Amazon Threat Intelligence revealed the extensive hacking campaign, which targeted Fortinet’s FortiGate security appliances—devices used by enterprises worldwide to manage network traffic and secure remote access. The attacker didn’t exploit unknown vulnerabilities or zero-day flaws. Instead, they leveraged AI tools to automate attacks against systems with exposed management ports and weak, single-factor authentication.

The threat actor systematically scanned for FortiGate management interfaces exposed to the internet across ports 443, 8443, 10443, and 4443, then attempted authentication using commonly reused credentials. Once inside, they extracted complete device configurations containing passwords, network topology information, and security settings—data that enabled deeper penetration into corporate networks.
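The exposure described above is easy to audit from the defender's side. As an illustration only, the following sketch checks whether a device answers on any of the four management ports named in the report; the function name and target address are hypothetical placeholders:

```python
import socket

# Management ports the campaign scanned for, per the Amazon report
MGMT_PORTS = (443, 8443, 10443, 4443)

def exposed_mgmt_ports(host: str, timeout: float = 2.0) -> list[int]:
    """Return the management ports on `host` that accept a TCP connection."""
    open_ports = []
    for port in MGMT_PORTS:
        try:
            # A completed TCP handshake means the port is reachable from here
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            continue  # closed, filtered, or unreachable
    return open_ports
```

Run against a firewall's public address (e.g. `exposed_mgmt_ports("192.0.2.1")`), a non-empty result means the management interface is internet-reachable and should be locked down. This is exactly the check the attacker automated at scale, which is why running it on your own perimeter first matters.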

CJ Moses, Chief Information Security Officer of Amazon Integrated Security, stated: “No exploitation of FortiGate vulnerabilities was observed—instead, this campaign succeeded by exploiting exposed management ports and weak credentials with single-factor authentication, fundamental security gaps that AI helped an unsophisticated actor exploit at scale.”

The AI Arsenal: How Generative AI Became a Hacking Force Multiplier

What distinguishes this attack is the threat actor’s heavy reliance on commercial generative AI services. The hacker used multiple AI tools as the backbone of their operation—one as the primary engine and a second as a fallback for pivoting within compromised networks. While Amazon didn’t disclose which specific AI platforms were used, researchers noted the attack methodology aligns with similar campaigns leveraging ChatGPT, Google Gemini, DeepSeek, and Microsoft Copilot.

The AI tools served multiple functions throughout the attack chain:

  • Attack Planning: Comprehensive AI-generated attack plans and operational checklists, recovered from the attacker’s infrastructure
  • Tool Development: Custom scripts for credential extraction, VPN automation, and mass scanning written with AI assistance
  • Code Generation: Reconnaissance tools written in both Go and Python showing “clear indicators of AI-assisted development,” including redundant comments that merely restate function logic
  • Documentation: Extensive Russian-language documentation with AI-generated operational procedures

Amazon described the entire operation as an “AI-powered assembly line for cybercrime.”

Geographic Impact: 55 Countries Affected

The campaign was truly global in scope. Compromised FortiGate clusters were identified across:

  • South Asia
  • Latin America and the Caribbean
  • West Africa
  • Northern Europe
  • Southeast Asia

The attacks were sector-agnostic, indicating automated mass scanning rather than targeted strikes against specific industries. This opportunistic approach allowed the threat actor to maximize their reach, compromising multiple devices within individual organizations.

Post-Exploitation: The Ransomware Preparations

Beyond initial firewall compromises, the attacker conducted extensive post-exploitation activities that suggest preparation for ransomware deployment:

  • Compromised multiple organizations’ Active Directory environments
  • Extracted complete credential databases
  • Targeted backup infrastructure—a hallmark of ransomware operations
  • Conducted reconnaissance using Nuclei vulnerability scanner
  • Maintained persistent access through stolen VPN configurations

The stolen network topology data provided roadmaps for lateral movement within victim environments, enabling the attacker to identify critical systems and stage follow-on attacks.

The “Zero-Knowledge” Attacker Phenomenon

Amazon’s analysis reveals a concerning trend: the emergence of the “zero-knowledge” attacker—individuals who lack traditional technical expertise but use AI tools to bridge their knowledge gaps. Researchers described the threat actor as an individual with “limited technical capabilities” who overcame those constraints by relying on AI to implement each phase of the attack.

“They are likely a financially motivated individual or small group who, through AI augmentation, achieved an operational scale that would have previously required a significantly larger and more skilled team,” Moses explained.

Significantly, the attacker often abandoned hardened targets. When encountering patched systems or basic defensive controls, they would drop the target and move to softer victims—behavior indicating AI enabled them to pursue quantity over sophistication.

AI Safety Bypass: The “Immersive World” Jailbreak

The FortiGate campaign may be connected to a related threat actor dubbed “C.R.A.B.,” documented by Cato Networks. This Russian-speaking hacker used a jailbreaking technique called “Immersive World” to bypass AI safety guardrails across ChatGPT, Gemini, DeepSeek, and Copilot.

Instead of requesting exploit code directly—which AI systems typically refuse—the attacker constructed elaborate fictional scenarios where AI models played “security expert” roles within fake companies. This role-playing approach tricked AI systems into providing functional attack code by framing queries as legitimate tasks within the fictional narrative.

Vitaly Simonovich, threat intelligence researcher at Cato Networks, warned this exemplifies how generative AI has lowered barriers for sophisticated cyberattacks that previously required extensive expertise.

Industry Response and Implications

This campaign represents a watershed moment for AI-augmented cybercrime. The Record noted that researchers have previously warned AI is reshaping cyberattack methodologies, with Google reporting in November 2025 that state-backed hackers were experimenting with malware capable of using large language models during execution.

Amazon’s findings suggest this threat is expanding beyond nation-state actors to financially motivated cybercriminals. The report warned: “Organizations should anticipate that AI-augmented threat activity will continue to grow in volume from both skilled and unskilled adversaries.”

What Organizations Must Do Now

The FortiGate compromises highlight fundamental security failures that AI has made easier to exploit at scale:

  • Disable exposed management interfaces—FortiGate management ports should never be internet-accessible
  • Enforce multi-factor authentication (MFA)—Single-factor authentication was a primary enabler of this campaign
  • Apply security patches promptly—Many exploited weaknesses had available fixes
  • Segment critical infrastructure—Prevent lateral movement from compromised edge devices
  • Monitor for AI-generated attack patterns—Security tools must evolve to detect AI-assisted reconnaissance
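
For FortiGate specifically, the first two checklist items map onto FortiOS CLI settings along these lines. This is a sketch only: interface names, admin account names, and the management subnet below are placeholders, and exact syntax varies by FortiOS version.

```
# Remove HTTPS/SSH/other management access from the WAN-facing interface
config system interface
    edit "wan1"
        unset allowaccess
    next
end

# Restrict admin logins to a known management subnet (addresses are placeholders)
config system admin
    edit "admin"
        set trusthost1 192.0.2.0 255.255.255.0
    next
end
```

Combined with MFA on administrator accounts, these two changes would have removed both preconditions the campaign relied on: an internet-reachable management interface and single-factor logins.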

The attack also raises urgent questions about AI safety mechanisms. If a single unskilled actor can jailbreak multiple major AI platforms to generate attack code, the cybersecurity community must reconsider how AI guardrails are implemented and tested.

Conclusion: The Democratization of Cyberwarfare

This campaign demonstrates that AI is democratizing cyberattack capabilities previously limited to advanced persistent threat groups. A single hacker—or small group—leveraging readily available AI tools compromised 600+ enterprise security devices across 55 countries in just five weeks.

The implications extend beyond this specific incident. As generative AI becomes more sophisticated and accessible, the “zero-knowledge attacker” phenomenon will likely accelerate—enabling a new generation of cybercriminals to achieve outcomes that previously required nation-state resources.

For defenders, the message is clear: the threat landscape has fundamentally changed. Security strategies must now account for AI-augmented adversaries operating at unprecedented scale and speed.
