AI and the rise of the autonomous SOC

By Martin Jakobsen, Managing Director, Cybanetix.


AI has transformed the Security Operations Centre (SOC) from rules-based to AI-assisted, and with it the role of the SOC analyst. The technology is now used for a variety of tasks, from analysing and enriching alerts to providing contextual explanations for cases and performing automated containment and remediation. As a result, it’s well on the way to banishing the problem of alert fatigue, according to a recent report, while almost 70% of SOC analysts who use AI on a daily basis say it has enhanced the accuracy of their investigations.

However, more disruption is on the horizon in the form of agentic AI. It will see the technology become autonomous for the first time, acting without the need for human prompting, and that could pave the way for the SOC to become much more efficient. AI will not just augment analysts but directly assist with key tasks such as threat detection, investigation, response and remediation. This in turn will drive down response times and minimise the impact of attacks, but getting to this point is likely to be disruptive.

The pain before the gain

AI will rapidly increase the speed at which the level one SOC analyst operates, helping them to sift through higher alert volumes and flag those that might require escalation. It will also help to upskill these analysts, easing the current skills shortage, which is driven by a lack of experience rather than a lack of personnel.

However, bringing about those changes will require the transformation of the SOC itself. Teams will need to grapple with a number of challenges, ranging from how swiftly they can implement AI and where it can be used to optimise SOC operations, to how many additional system integrations are needed to achieve these outcomes and what guardrails need to be put up around the technology.

Yet there’s no denying that the autonomous SOC is necessary. Security must keep pace with evolving threats, many of which are AI-driven, as recent surveys illustrate. Over the past year, one in four CISOs said their business suffered an AI-generated attack, while other reports put the figure closer to 90% (the truth is difficult to establish because these attacks are so well crafted).

Countering these attacks will require the SOC to fight fire with fire, using the speed and agility of AI to detect and respond. For example, agentic AI could be used to detect, investigate and remediate threats: rolling back systems to a state before a malware infection, revoking compromised credentials and updating firewall rules.
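To make that concrete, below is a minimal sketch in Python of what such an automated remediation sequence might look like. It is illustrative only: the Incident fields and the snapshot_restore, revoke_credentials and block_ip helpers are hypothetical stand-ins for whatever EDR, identity and firewall APIs a given SOC actually exposes.

```python
# Hypothetical sketch of agentic remediation; every helper below is a
# placeholder for a real EDR, IAM or firewall API call.
from dataclasses import dataclass

@dataclass
class Incident:
    host: str
    user: str
    attacker_ip: str
    infection_time: str  # ISO 8601 timestamp of the first malicious activity

def snapshot_restore(host: str, before: str) -> None:
    """Hypothetical: roll the host back to its last clean snapshot."""
    print(f"Rolling {host} back to its state before {before}")

def revoke_credentials(user: str) -> None:
    """Hypothetical: invalidate sessions and force a credential reset."""
    print(f"Revoking credentials and sessions for {user}")

def block_ip(ip: str) -> None:
    """Hypothetical: push a deny rule to the perimeter firewall."""
    print(f"Adding firewall rule to block {ip}")

def remediate(incident: Incident) -> None:
    # The three containment steps described above, run in sequence.
    snapshot_restore(incident.host, before=incident.infection_time)
    revoke_credentials(incident.user)
    block_ip(incident.attacker_ip)

remediate(Incident("ws-042", "j.doe", "203.0.113.7", "2025-06-01T09:14:00Z"))
```

In a production SOC each step would be an authenticated API call into the relevant platform, wrapped in logging and approval safeguards rather than run unconditionally.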

Phasing in agentic AI

What this all points to is a need to adopt the technology at a measured pace, so that the SOC can transition in stages from being automated today to being partially assisted and then fully assisted by AI.

Partially assisted SOCs will see Large Language Models (LLMs) used to analyse detections and alerts in order to identify new attacks, and then devise detection logic to address them proactively. Agentic AI will be used to tackle specific use cases, such as investigation and lower-risk response actions, and to suggest remediation strategies for high-risk situations. However, human analysts will still handle the bulk of the workload, make the important decisions and determine how investigations are resolved.
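As a rough illustration of that division of labour, the Python sketch below has an LLM propose a verdict and an action for an alert, but only lower-risk actions are handled automatically; everything else is routed to a human analyst. The ask_llm stub, the prompt format and the risk categories are hypothetical assumptions, not any particular vendor’s API.

```python
# Hypothetical sketch of a partially assisted triage flow.
import json

LOW_RISK_ACTIONS = {"close_as_benign", "add_to_watchlist"}

def ask_llm(prompt: str) -> str:
    """Stand-in for a real model API; returns a canned answer so the
    sketch runs end to end."""
    return json.dumps({
        "verdict": "suspicious",
        "proposed_action": "revoke_credentials",
        "rationale": "Impossible-travel login followed by mass file access.",
    })

def triage(alert: dict) -> dict:
    analysis = json.loads(ask_llm(
        "Analyse this alert and reply with JSON "
        "{verdict, proposed_action, rationale}:\n" + json.dumps(alert)
    ))
    if analysis["proposed_action"] in LOW_RISK_ACTIONS:
        analysis["route"] = "auto"          # the agent acts on its own
    else:
        analysis["route"] = "human_review"  # suggestion only; the analyst decides
    return analysis

print(triage({"rule": "impossible_travel", "user": "j.doe", "src": "203.0.113.7"}))
```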

Partial assistance will be superseded by full assistance, which will see AI take on a role comparable to that of a human assistant. This will enable the SOC to dynamically build and run playbooks on the fly, reach malicious or suspicious verdicts for investigations, and remediate and contain threats in concert with analysts. Humans will also guide the system and feed improvements back into the AI so that it continues to adapt to emerging threats.
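One way to picture full assistance is an agent that assembles a playbook on the fly from a catalogue of vetted actions, pausing for analyst approval on anything high risk. The sketch below assumes exactly that; the action names, risk labels and approval callback are invented for illustration.

```python
# Hypothetical sketch of dynamic playbook construction with a human in the loop.
ACTION_CATALOGUE = {
    "isolate_host":       {"risk": "high"},
    "revoke_credentials": {"risk": "high"},
    "block_ip":           {"risk": "low"},
    "collect_forensics":  {"risk": "low"},
}

def build_playbook(verdict: str) -> list[str]:
    # In practice the agent would choose steps based on the verdict and its
    # context; the choice is hard-coded here to keep the sketch runnable.
    if verdict == "malicious":
        return ["collect_forensics", "block_ip", "isolate_host", "revoke_credentials"]
    return ["collect_forensics"]

def run_playbook(steps: list[str], approve) -> None:
    for step in steps:
        if ACTION_CATALOGUE[step]["risk"] == "high" and not approve(step):
            print(f"Skipped {step}: analyst declined")
            continue
        print(f"Executing {step}")

# The approval callback stands in for the analyst working in concert with the AI.
run_playbook(build_playbook("malicious"), approve=lambda step: True)
```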

Taking a phased approach to implementing the autonomous SOC is the logical option because it allows the team to adapt to the technology and buys the time needed for the technology to prove itself. Agentic AI is still in its infancy and there’s plenty of AI washing among security vendors, so SOC teams need to be able to verify that claimed outcomes are realistic. Our own explorations have revealed variances in how the same alert is investigated, for example, and attempts to tune or adjust the technology have met with limited success, which is why it makes sense to look critically at alleged productivity gains.

Cost considerations

There’s also the matter of cost to consider. The cost of processing via AI remains prohibitively high, curtailing experimental use, and licensing terms may make it unrealistic to use the technology extensively across SOC functions. In some cases it will make much more sense to use the automation technologies we already have today. For example, it’s at least twice as costly to use AI for threat detection and incident response as it is to use managed detection and response (MDR) built on a 24x7 SOC with SOAR and detection engineering capabilities. And because that cost rises in line with alert volumes, scaling could become prohibitively expensive. In the short term, then, AI will need to be applied strategically to deliver optimum results; but, citing Moore’s Law, our expectation is that falling costs and maturing AI will eventually intersect, ushering in a new era of security.

It's perhaps for these reasons that Gartner recently positioned AI SOC Agents on the cusp of entering the peak of inflated expectations in its Hype Cycle for Security Operations 2025. This point on the adoption curve is where product usage increases but there’s still more hype than proof that the innovation can deliver what is needed. We can expect the autonomous SOC to ride the same wave, slipping into the trough of disillusionment before becoming an accepted and established mode of defence.

However, most organisations will have neither the appetite nor the deep pockets required to experiment with the technology as it travels along that trajectory. Nor will they want to limp along with a rules-based SOC that leaves them exposed to AI attacks. So it’s fair to assume the disruption of agentic AI will spur demand for managed security services, which can effectively shield these businesses from the risk and uncertainty involved. Managed Security Service Providers (MSSPs) are already experienced at assessing the performance of security solutions to optimise the SOC, making them ideally placed to act as the proving ground for agentic AI. But when choosing an MSSP, organisations should still do their due diligence by asking for a roadmap that details how the provider plans to adopt the technology.
