The rising threat of AI weaponisation in cybersecurity

AI's accelerated role in creating cyber threats necessitates new security measures.


This week, Anthropic revealed a concerning development: hackers have weaponised its technology to mount a series of sophisticated cyber-attacks. With artificial intelligence (AI) now playing a critical role in coding, the time required to exploit cybersecurity vulnerabilities is shrinking at an alarming pace.

Kevin Curran, IEEE senior member and cybersecurity professor at Ulster University, outlines the methods attackers employ when using large language models (LLMs) to uncover flaws and accelerate attacks. He emphasises the need for organisations to pair robust security practices with AI-specific policies amid this changing landscape.

Curran explains, "This shows just how quickly AI is changing the threat landscape. It is already speeding up the process of turning proof-of-concepts – often shared for research or testing – into weaponised tools, shrinking the gap between disclosure and attack. An attacker could take a PoC exploit from GitHub, feed it into a large language model and quickly get suggestions on how to improve it, adapt it to avoid detection or customise it for a specific environment. That becomes particularly dangerous when the flaw is in widely used software, where PoCs are public but many systems are still unpatched."

"We're already seeing hackers use LLMs to identify weaknesses and refine exploits by automating tasks like code completion, bug hunting or even generating malicious payloads designed for particular systems. They can describe malicious behaviour in plain language and receive working scripts in return. While this activity is monitored and blocked on many legitimate platforms, determined attackers can bypass safeguards, for example by running local models without restrictions."

Curran concludes, "The bigger issue is accessibility. Innovation has made it easier than ever to create and adapt software, which means even relatively low-skilled actors can now launch sophisticated attacks. At the same time, we might see nation-states using generative AI for disinformation, information warfare and advanced persistent threats. That’s why security strategies can’t just rely on traditional controls. Organisations need AI-specific defences, clear policy frameworks and strong human oversight to avoid becoming dependent on the same technology that adversaries are learning to weaponise."
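
Curran's point about human oversight can be made concrete with a simple guardrail: flagging AI-assisted changes so they cannot reach production without additional named approvals. The sketch below is purely illustrative; the `ChangeRequest` structure, the `require_human_review` function and the two-approval threshold are hypothetical assumptions, not drawn from any specific product, framework or policy.

```python
# Minimal, illustrative sketch of a "human oversight" gate for AI-assisted
# changes. All names here are hypothetical, not taken from a real tool.

from dataclasses import dataclass, field


@dataclass
class ChangeRequest:
    """A proposed change to production code or configuration."""
    author: str
    description: str
    ai_assisted: bool                 # flagged by the submitting tool or author
    approvals: list[str] = field(default_factory=list)


def require_human_review(change: ChangeRequest, min_approvals: int = 2) -> bool:
    """Block AI-assisted changes until enough distinct humans have signed off."""
    if not change.ai_assisted:
        return True  # the organisation's normal review policy applies
    # AI-assisted changes need extra, explicit human approvals.
    return len(set(change.approvals)) >= min_approvals


if __name__ == "__main__":
    cr = ChangeRequest(
        author="dev1",
        description="Patch drafted with LLM assistance for a disclosed vulnerability",
        ai_assisted=True,
        approvals=["secops-lead"],
    )
    # Prints False: one approval is not enough under this illustrative policy.
    print("Approved to merge:", require_human_review(cr))
```

The design choice being illustrated is simply that AI-generated output is treated as untrusted by default and routed through people, which is one way an organisation might avoid, in Curran's words, "becoming dependent on the same technology that adversaries are learning to weaponise".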

As AI continues to evolve, so does its potential for misuse in cyber-attacks. Countering that potential will take innovative defences and strategic thinking if digital systems are to remain secure.
