AI in the SOC

by Mark Rowe

Dan Petrillo, VP of Product at the cyber firm BlueVoyant, discusses why complete autonomy is the wrong goal.

As artificial intelligence (AI) becomes more deeply embedded in security operations, a divide has emerged in how its role is defined. Some argue the security operations centre (SOC) should be fully autonomous, with AI replacing human analysts. Others believe that augmentation is the right path, using AI to support and extend existing teams.

Augmentation more closely reflects how SOCs operate in practice. It helps analysts triage alerts, investigate incidents faster, and bring better context into their work, while ensuring humans remain accountable for decisions.

Complete autonomy assumes a level of reliable, end-to-end decision-making that can operate without continuous human oversight. That’s a high bar. In real SOC environments, the technology, data quality, and operational constraints rarely support that assumption. Detection pipelines are noisy, context is fragmented across tools, and threat signals often require human judgment to interpret correctly. Even the most advanced automation struggles with edge cases, ambiguous alerts, and the dynamic nature of attacker behaviour.

Why autonomous SOC falls short

Why can AI not fully replace SOC analysts? In short, because full autonomy oversimplifies what security operations involve. Investigation is only one part of a functioning SOC. Organisations also depend on experienced practitioners to interpret ambiguous signals, manage escalation, and communicate risk to senior leadership. When incidents become business issues, that same expertise is required to apply judgement, coordinate stakeholders, and produce reporting that stands up to scrutiny.

When something goes wrong, such as a logging failure, a broken parser following a third-party firewall update, or months of missing telemetry, automated systems cannot resolve the issue alone. Human expertise is needed to understand context, reconstruct events, and guide remediation.

Governance is another constraint. The cost of false negatives remains unacceptably high, and security leaders are unlikely to deploy solutions that act without clear oversight. Even where AI can execute parts of a workflow, organisations still require process controls, quality checks, and human validation for complex or unfamiliar scenarios. A fully autonomous model cannot reliably make the right judgement call in every situation, particularly when decisions carry real business impact.

Accuracy risks also remain. AI systems can make mistakes, draw incorrect conclusions, or miss important signals if left unchecked. Human oversight therefore remains essential to spot errors early and prevent them from turning into operational problems. Ultimately, fully autonomous SOC models ask organisations to trade human judgement and accountability for AI that is still maturing. That trade-off is impractical in an environment where consequences are measured in real-world disruption.

Why AI in the SOC is still essential

None of this means AI has no place in the SOC. When implemented with purpose, it delivers measurable improvements in the areas where teams are under the most pressure.

AI can take on repetitive, high-volume tasks such as alert triage and enrichment, allowing analysts to focus on more complex investigations, decision-making, and response. Deployed effectively, AI in the SOC is essential to reclaiming human time from low-value activity, enabling teams to apply expertise where it has the greatest operational payoff. Some of the most significant benefits of integrating AI agents into human-led SOC teams include the following (a brief triage sketch follows the list):

  • Workload reduction: AI can handle repetitive, high-volume tasks such as alert triage, dynamic enrichment, and report generation, reducing analyst fatigue and operational backlog.
  • Process consistency: AI helps standardise workflows across varying skill levels, smoothing differences in tool syntax and operating procedures so teams perform more consistently.
  • Improved alert quality: By incorporating external threat intelligence, control telemetry, and asset context, AI can reduce false positives and support more accurate prioritisation.
  • Faster decision-making: Attack timelines, path mapping, and context-rich summaries enable analysts to assess scope, impact, and containment options more quickly.
  • Knowledge retention: AI working alongside human analysts captures operational insights over time, mitigating the impact of staff churn and preserving institutional knowledge. It can also identify patterns that may be missed by individuals and recommend rules or remediations accordingly.
  • Always on: AI doesn’t need breaks, get tired, fall ill, take holidays, or turn up late. It becomes a consistently reliable coworker for stretched teams working under pressure.
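
To make the triage point concrete, here is a minimal sketch of the human-in-the-loop pattern, written in Python and assuming a simple score that blends detection severity, asset criticality, and a threat-intelligence match. Every field name, threshold, and weighting here is an illustrative assumption, not any vendor's product or API.

    from dataclasses import dataclass

    # Hypothetical alert record; the schema is invented for illustration.
    @dataclass
    class Alert:
        source: str             # detection tool that raised the alert
        severity: int           # 1 (low) to 5 (critical), from the detection rule
        asset_criticality: int  # 1 to 5, from an asset inventory
        intel_match: bool       # indicator matched external threat intelligence

    def triage_score(alert: Alert) -> float:
        """Blend detection severity with business and intel context."""
        score = alert.severity * alert.asset_criticality  # 1 to 25
        if alert.intel_match:
            score *= 1.5  # weighting chosen purely for illustration
        return score

    def route(alert: Alert) -> str:
        """AI handles the obvious ends of the spectrum; humans own the judgement calls."""
        score = triage_score(alert)
        if score <= 4:
            return "auto-close with audit log"         # reversible and fully logged
        if score >= 15:
            return "page on-call analyst"              # a human decides the response
        return "enrich and queue for analyst review"   # augmentation, not autonomy

    print(route(Alert("EDR", severity=4, asset_criticality=5, intel_match=True)))
    # -> page on-call analyst

In practice the thresholds would be tuned against historical outcomes, and anything auto-closed stays auditable so analysts can catch model mistakes early.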

Where augmentation delivers  

AI delivers the greatest value when applied to SOC activities that are slow, manual, or prone to inconsistency, while keeping humans accountable for decisions and execution. Augmentation should be introduced first in areas where AI can speed up analysis, surface insight, and support judgement without removing human oversight. Below are a few areas where you might consider using AI to augment your team:

  • Alert triage: False-positive reduction, dynamic enrichment, and contextual prioritisation using threat intelligence, asset criticality, and exposure data.
  • Augmented investigations: Natural language querying, attack path and timeline visualisation, and suggested queries that speed root-cause analysis.
  • Incident and case summarisation: Automated executive- and GRC-ready reporting that consolidates findings with clear, decision-ready context.
  • Hypothesis generation: Continuous pattern and behaviour analysis to surface new detections, investigative approaches, and remediation opportunities for human approval.
  • Operational oversight: AI that learns expected procedures and flags process deviations, bottlenecks, or underperformance for leadership attention.
  • Response recommendations: Context-aware guidance and playbook generation, with optional integration-driven execution remaining under human control, as sketched below.
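
A minimal sketch of that last pattern, assuming a hypothetical recommend_playbook() that stands in for a model call; the action names and approval rules are invented for illustration and do not reflect any real product:

    # Pre-approved, low-blast-radius actions the system may run unattended.
    APPROVED_AUTO_ACTIONS = {"quarantine_file", "block_hash"}

    def recommend_playbook(incident: dict) -> list[str]:
        # Stand-in for a model call; a real system would derive steps from context.
        return ["block_hash", "isolate_host", "reset_credentials"]

    def execute_with_oversight(incident: dict) -> None:
        for step in recommend_playbook(incident):
            if step in APPROVED_AUTO_ACTIONS:
                print(f"executing {step} (pre-approved, logged)")
            # Everything with real business impact waits for an analyst.
            elif input(f"AI recommends '{step}'. Approve? [y/N] ").lower() == "y":
                print(f"executing {step} under analyst approval")
            else:
                print(f"skipped {step}: analyst overrode the recommendation")

    execute_with_oversight({"id": "INC-1", "host": "srv-01"})

The design choice is the same throughout: AI proposes, pre-vetted reversible steps may execute automatically, and anything with real business impact routes through a human.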

For security teams

Security teams manage millions of investigations every year, even after automating many routine cases. Automation streamlines that routine work, but full autonomy remains unrealistic: the most critical stages of an investigation still rely on human judgement, context and accountability.

AI will continue to enhance the speed, scale and consistency of security operations, but the SOC of the future will remain human-led, with AI augmenting, not replacing, analysts. Organisations that adopt AI in targeted, outcome-driven ways will scale more effectively, reduce risk and preserve institutional knowledge. As threats evolve, AI-augmented SOC teams will not only keep pace but stay ahead of adversaries.
