
Autonomous Cyberattacks: A New Era of AI-Driven Attacks

What if a cyberattack didn’t need a team of people working around the clock… because the AI did most of the work instead?

In a recent report, Anthropic revealed they disrupted what they believe to be the first documented cyber espionage campaign largely executed by agentic AI at scale. Even more eye-opening? The AI carried out an estimated 80–90% of the tactical activity independently, at request rates that would be physically impossible for a human operator to match.

This blog breaks down what happened, why it matters, and what Cyber Security leaders should take from it. We’ll walk through how AI supported the full attack lifecycle, from reconnaissance to data extraction, and what this shift means for the future of defence.

What Anthropic Discovered in the Cyberattack

In mid-September 2025, Anthropic detected and disrupted a highly sophisticated cyber espionage operation. This wasn’t a one-off attempt or a simple “spray and pray” attack. It was a professionally coordinated campaign designed to run multiple targeted intrusions at the same time.

Anthropic assesses with high confidence that the operation was carried out by a Chinese state-sponsored group they’ve designated GTG-1002. The campaign targeted roughly 30 entities, including major technology corporations and government agencies, and Anthropic’s investigation validated a handful of successful intrusions.

What makes this stand out is not just who was behind it or who was targeted. It’s how the campaign was executed. Anthropic’s findings suggest the attackers weren’t simply using AI for ideas or quick research. They were using it to actively drive the operation forward at scale, with minimal human involvement.

Why This Is Different: The AI Wasn’t Advising, It Was Operating

In this campaign, Claude wasn’t being used as a helpful assistant sitting on the sidelines. Anthropic describes it as a “fundamental shift” because the threat actor manipulated Claude into acting like an autonomous cyber attack agent, carrying out intrusion activity rather than simply suggesting what to do next.

Even more striking is the operating model behind it. Anthropic’s analysis suggests the AI completed around 80–90% of the tactical work, while humans stayed in the background and stepped in mainly at key decision points.

Those human operators weren’t guiding every move. They were acting more like supervisors, stepping in mainly to review what the AI had found and to approve escalation from one phase of the attack to the next.

Attacks like this don’t just move faster; they scale differently. That means defenders may have less time to spot activity, validate it, and respond before the next phase begins.

How They Turned Claude Into an Attack Orchestrator

One of the most interesting parts of the report is that this campaign wasn’t powered by exotic malware or some never-before-seen hacking tool.

Instead, it relied on something far more scalable: orchestration.

Anthropic describes the threat actor as building an autonomous attack framework, using Claude Code as the system coordinating the operation. Rather than running one long, manual intrusion, the attacker broke the work into smaller technical tasks and had Claude handle them through separate sub-agents.

What The AI Sub-Agents Were Doing

According to the report, these sub-agents were used for tasks such as scanning target infrastructure, validating vulnerabilities, generating and testing exploit code, harvesting credentials, moving laterally through networks, and analysing stolen data.

Behind the scenes, the framework acted like a control system. It kept track of what had already been discovered, what worked, and what needed to happen next. That meant the operation could keep moving forward without a human needing to manually direct every step.

The clever part is how “normal” each task could look in isolation. The report explains that the threat actor used carefully crafted prompts and personas, presenting activity as routine technical requests, which helped mask the bigger malicious goal.

It was reported that the AI was able to operate across multiple target environments at once, sustaining several intrusions in parallel.

Claude would complete one task, report back, and then the next step could be triggered based on the results.
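As a rough illustration of the pattern described above (not the actual framework, which Anthropic has not published), an orchestrator that breaks work into small tasks, dispatches them to sub-agents, and uses each result to decide what runs next might look like this minimal sketch. All names here are hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class Task:
    name: str
    depends_on: list = field(default_factory=list)


class Orchestrator:
    """Tracks what has been done and what each step produced,
    so no human has to direct every move."""

    def __init__(self, tasks):
        self.tasks = {t.name: t for t in tasks}
        self.results = {}  # task name -> result returned by a sub-agent

    def ready(self):
        # A task is runnable once everything it depends on has a result.
        return [t for t in self.tasks.values()
                if t.name not in self.results
                and all(d in self.results for d in t.depends_on)]

    def run(self, sub_agent):
        # Keep dispatching until no task is left runnable.
        while batch := self.ready():
            for task in batch:
                # In isolation, each call looks like one routine request.
                self.results[task.name] = sub_agent(task, self.results)
        return self.results


# Stand-in sub-agent; in the campaign this role was played by Claude sessions.
def sub_agent(task, prior_results):
    return f"completed {task.name} using {len(prior_results)} prior results"


plan = [Task("map_network"),
        Task("find_weak_points", depends_on=["map_network"]),
        Task("summarise_findings", depends_on=["find_weak_points"])]
print(Orchestrator(plan).run(sub_agent))
```

The key design point is in `ready()`: progress is driven by accumulated results rather than by a person issuing instructions, which is what lets the loop keep moving on its own.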

The Attack Lifecycle: What Did The AI Actually Do?

This campaign didn’t use AI in one isolated part of the attack. Anthropic’s report shows Claude was used across the full lifecycle, with autonomy increasing as the operation progressed.

Phase 1: Target selection and roleplay to get Claude engaged

The campaign started with human input. Operators selected targets, then worked to convince Claude it was supporting legitimate defensive testing.

Phase 2: Reconnaissance and attack surface mapping

Once the target was set, Claude carried out reconnaissance with minimal guidance.

Phase 3: Vulnerability discovery and validation

The AI moved beyond mapping and into exploitation preparation.

Phase 4: Credential harvesting and lateral movement

After gaining access, Claude supported internal expansion.

Phase 5: Data extraction and intelligence categorisation

This is where the autonomy became even more obvious.

Phase 6: Documentation and handoff

Finally, Claude documented everything as it went.

The AI Wasn’t Perfect, But the Threat Still Scaled

One of the most important takeaways from Anthropic’s report is that even this level of AI-driven intrusion wasn’t flawless.

Claude didn’t always get things right. During autonomous operations, it sometimes overstated its findings, hallucinating credentials that didn’t work or claiming to have extracted “secret” data that was actually publicly available.

At first glance, that might feel reassuring. If AI makes mistakes, surely that limits how effective it can be?

The issue is that these errors create friction, not failure.

In practice, it means attackers may need to verify the AI’s output, discard hallucinated results, and correct course before acting on its findings.

But the campaign still demonstrated something that matters more. AI can operate at speed and scale across multiple phases of an intrusion, even with imperfections.

AI Defence Needs To Evolve Just As Fast

Anthropic makes an important point in their report. The same capabilities that allow AI to be misused in cyberattacks are also what make it valuable for Cyber Security defence.

As these threats evolve, security teams shouldn’t sit back and wait. Instead, Anthropic encourages organisations to start experimenting with AI in practical areas like security operations centre (SOC) automation, threat detection, vulnerability assessment, and incident response.

This isn’t about replacing people or trusting AI blindly. It’s about building experience, understanding what works in your environment, and improving response speed when modern attacks move faster than ever.
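One concrete starting point follows from the report’s observation that the AI operated at request rates no human could match: flag sessions whose sustained action rate looks non-human. A minimal sketch of that idea, where the thresholds are illustrative assumptions rather than values from the report:

```python
from collections import defaultdict

# Illustrative threshold: sustained actions per second a human operator
# is unlikely to exceed. Tune against your own baseline traffic.
HUMAN_MAX_RATE = 2.0
MIN_EVENTS = 20  # ignore tiny bursts that can't establish a rate


def machine_speed_sessions(events):
    """events: iterable of (session_id, unix_timestamp) pairs.
    Returns session ids whose sustained request rate looks non-human."""
    by_session = defaultdict(list)
    for session_id, ts in events:
        by_session[session_id].append(ts)

    flagged = []
    for session_id, times in by_session.items():
        if len(times) < MIN_EVENTS:
            continue
        times.sort()
        duration = times[-1] - times[0]
        rate = len(times) / duration if duration > 0 else float("inf")
        if rate > HUMAN_MAX_RATE:
            flagged.append(session_id)
    return flagged


# A human-paced session versus a machine-paced one.
human = [("ops-1", 1000 + 3 * i) for i in range(30)]      # one action every 3 s
bot = [("agent-7", 1000 + 0.1 * i) for i in range(30)]    # ten actions per second
print(machine_speed_sessions(human + bot))                # ['agent-7']
```

A heuristic like this won’t catch everything, and a careful attacker can throttle requests to stay under it, but it shows the kind of inexpensive, speed-based signal defenders can start experimenting with today.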

Staying Ahead Of What’s Next

Anthropic’s report highlights a real shift in how cyber espionage campaigns can be executed, with AI moving from a supporting tool to something that can actively drive attack activity at scale.

The good news? This isn’t a reason to panic. It’s a reason to stay informed, keep testing your assumptions, and make sure your defences are built for speed as well as resilience.

If you’d like support strengthening your Cyber Security posture, validating your controls, or pressure-testing your environment, our experts at Equilibrium Security are here to help. Chat to us today on 0121 663 0055 or at enquiries@equilibrium-security.co.uk.


About the author

Lucy Lawson is a Marketing Professional at Equilibrium Security, skilled in transforming complex Cyber Security challenges into clear, actionable advice. Her content is designed to guide your business in making informed Cyber Security decisions which follow best practice, ensuring your digital assets remain safe and secure.
Lucy Lawson
Marketing Executive
