
When AI hacks AI, the victims are still human

By Jorge Monteiro, CEO of Ethiack.

Tuesday, 3rd March 2026

For decades, enterprise cybersecurity was built around an axiom: users are human.

That assumption has been smashed as individuals and organisations introduce a new type of user - agentic AI - into their IT systems.

We are quickly moving beyond chatbots to autonomous “action agents”, AI systems that can log into platforms, manage subscriptions, process invoices, interact with SaaS tools and run operational workflows on behalf of employees.

Their purpose is efficiency. They remove manual work and automate routine processes. But from a security perspective, they fundamentally change what a user is.

The scale and speed of this change recently came to worldwide attention with the OpenClaw saga. Previously called Clawdbot, and then Moltbot, OpenClaw is an open source AI assistant that integrates with more than 50 popular apps. Once installed, it can access and operate its owner’s email, social media and messaging apps, all with a breezily simple UX - you just “DM it like a friend.”

Sure enough, it quickly became hugely popular. But that power soon revealed itself as a serious weakness. When Ethiack’s AI pentesting agent, Hackian, tested OpenClaw’s open-source code, it struck gold in under two hours: a 0-day, a previously unknown critical vulnerability that a cybercriminal could exploit to take over the account of anyone using OpenClaw - and with it all their connected apps. The flaw was serious enough to be assigned a CVE with a ‘high severity’ CVSS score of 8.8.

What if the weak link in your IT is an AI rather than a human?

Previously cybercriminals often targeted people - whether through phishing emails, malware or just plain human error - as a way to steal credentials and gain access.

But a hostile hacker who seizes control of an AI system like OpenClaw doesn’t need to break in, as the agent comes with a full set of keys.

More worrying still, an AI agent that has been captured by a threat actor is likely to go unnoticed. This is because the conventional wisdom - that users are human - has led many authentication and fraud detection systems to be predicated on how people usually behave. When activity deviates from the ‘normal’ pattern, red flags are raised and security teams investigate.

Security monitoring relies heavily on behavioural analysis, for instance, flagging unusual login locations, strange working hours or activity inconsistent with a user’s history.
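
To make this concrete, here is a minimal sketch of the kind of rule such systems apply - the names, baseline and thresholds are purely illustrative, not any vendor’s implementation:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UserBaseline:
    usual_countries: set[str]   # countries seen in this user's login history
    usual_hours: range          # e.g. range(8, 19) for a nine-to-five employee

def is_suspicious(event_country: str, event_time: datetime,
                  baseline: UserBaseline) -> bool:
    """Classic human-centric check: flag logins that deviate from
    where and when this user normally works."""
    if event_country not in baseline.usual_countries:
        return True
    if event_time.hour not in baseline.usual_hours:
        return True
    return False
```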

But AI agents undermine these assumptions. Identity is no longer tied to a person, behaviour is no longer human, and when an AI agent is compromised nothing needs to be stolen - the attacker inherits the credentials the agent already holds.

A captured AI system may operate continuously and at machine speed, processing thousands of actions per hour. Yet malicious prompts, poisoned data sources or compromised third-party platforms could lead the agent to cause huge damage while still operating within its authorised parameters.
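
For non-human identities, sheer action velocity is one complementary signal worth tracking. A rough sketch - the hourly threshold is an arbitrary assumption for illustration:

```python
from collections import deque
from time import time

class VelocityMonitor:
    """Flags identities that act at machine speed. The threshold is
    illustrative; real limits would be tuned per agent and workflow."""

    def __init__(self, max_per_hour: int = 500):
        self.max_per_hour = max_per_hour
        self.events: deque[float] = deque()

    def record(self, now: float | None = None) -> bool:
        """Record one action; return True if the hourly rate is exceeded."""
        now = time() if now is None else now
        self.events.append(now)
        # Slide the window: drop events older than one hour.
        while self.events and self.events[0] < now - 3600:
            self.events.popleft()
        return len(self.events) > self.max_per_hour
```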

From the perspective of identity verification and audit logs, everything will appear routine; if no employee account has been compromised, the activity may not be technically unauthorised.

While OpenClaw was created for individuals, an enterprise-level AI agent might have access to finance systems so it can pay invoices, reconcile accounts or manage subscriptions. In doing so, the company effectively creates a privileged operator - one equipped with extensive delegated decision-making power - inside its environment.

The scale of this threat is set to grow rapidly - not just because of the increasing adoption of AI across business settings, but also because AI systems continuously interact with other automated services.

In the near future, cyberattacks made via a vulnerable AI may target workflows rather than people, with hackers who seize control of one AI agent able to influence multiple systems - all without any action that appears unauthorised. At that stage, the challenge for cybersecurity teams will be identifying rogue intent rather than detecting intrusion.

Threat actors are using AI to hack AI systems. The best defence? AI

The cybersecurity front line is seeing an unprecedented AI arms race. A 2025 report by the UK’s National Cyber Security Centre concluded that all types of cyber threat actor - state and non-state, skilled and less skilled - are routinely using AI tools to penetrate IT systems.

With AI agents now a key part of many organisations’ ‘attack surface’, these systems have become a crucial focus for cyber defence.

However, existing security models do not fully address this threat. Zero Trust architectures verify identity and device integrity, but a compromised AI agent will sail through those checks if it authenticates correctly and uses approved accounts.

A better telltale to watch for is not authentication, but authority - and whether an action should have been performed by a non-human actor at all.

Organisations need to adapt their security controls for an AI native future. AI agents should be treated as privileged identities, but with minimal permissions. Critical actions such as payments, supplier changes or access provisioning should still require a human validation failsafe.
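
In code terms, that amounts to authorising on authority rather than authentication: deny by default, and route critical action types to a human approver. A minimal sketch, with hypothetical agent and action names:

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    DENY = "deny"
    NEEDS_HUMAN = "needs_human_approval"

# Per-agent allowlists: anything not listed is denied outright.
AGENT_PERMISSIONS = {
    "invoice-agent": {"read_invoice", "pay_invoice", "reconcile_account"},
}

# Action types that always trip the human validation failsafe,
# regardless of the agent's standing permissions.
CRITICAL_ACTIONS = {"pay_invoice", "change_supplier", "provision_access"}

def authorise(agent_id: str, action: str) -> Verdict:
    """Decide on authority, not just identity: should this
    non-human actor be performing this action at all?"""
    if action not in AGENT_PERMISSIONS.get(agent_id, set()):
        return Verdict.DENY
    if action in CRITICAL_ACTIONS:
        return Verdict.NEEDS_HUMAN
    return Verdict.ALLOW
```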

Security monitoring should focus on what the agent does, not simply whether it is logged in correctly. Integrations between systems should be isolated, and organisations must be able to audit why an automated decision occurred, not just record that it did.
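
Auditing the ‘why’ means each log entry carries the trigger and inputs behind a decision, not just its outcome. A minimal, hypothetical record might look like this:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AgentAuditEvent:
    agent_id: str
    action: str
    verdict: str
    trigger: str             # the prompt, message or schedule that set the agent off
    data_sources: list[str]  # inputs the agent consulted before acting
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_json(self) -> str:
        return json.dumps(asdict(self))

# Enough context to reconstruct *why* the agent tried to pay an invoice,
# not merely that a payment was attempted.
event = AgentAuditEvent(
    agent_id="invoice-agent",
    action="pay_invoice",
    verdict="needs_human_approval",
    trigger="email:supplier-reminder",
    data_sources=["erp:invoice/4412", "email:inbox/msg-9981"],
)
print(event.to_json())
```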

Cybersecurity teams should make use of AI tools as well. For example, Ethiack’s Hackian is a hackbot that continuously scans vast attack surfaces, including AI agents, learning as it locates potential weaknesses.

Used ethically, transparently and under human control, next-generation penetration testing can help organisations stay ahead of AI-enabled attackers, by finding vulnerabilities in their AI systems early and closing them fast.

The next wave of cyber incidents may not involve breached networks or stolen passwords. Instead, they could see trusted AI systems doing exactly what they were allowed to do, but not what the organisation intended.

Security has always been about trust. AI agents don’t remove that problem; instead they are moving the cybersecurity front line from the organisation’s outer wall to its operational core. Businesses are not just deploying software tools anymore; they are introducing non-human operators into the heart of their systems, and security strategies must evolve accordingly.
