
AI in the SOC

by Mark Rowe

Dan Petrillo, VP of Product at the cyber firm BlueVoyant, discusses why complete autonomy is the wrong goal.

As artificial intelligence (AI) becomes more deeply embedded in security operations, a divide has emerged in how its role is defined. Some argue the security operations centre (SOC) should be fully autonomous, with AI replacing human analysts. Others believe that augmentation is the right path, using AI to support and extend existing teams.

Augmentation better reflects how SOCs operate in practice: AI helps analysts triage alerts, investigate incidents faster, and bring better context into their work, while keeping humans accountable for decisions.

Complete autonomy assumes a level of reliable, end-to-end decision-making that can operate without continuous human oversight. That's a high bar. In real SOC environments, the technology, data quality, and operational constraints rarely support that assumption. Detection pipelines are noisy, context is fragmented across tools, and threat signals often require human judgment to interpret correctly. Even the most advanced automation struggles with edge cases, ambiguous alerts, and the dynamic nature of attacker behaviour.

Why autonomous SOC falls short

Why can't AI fully replace SOC analysts? In short, the fully autonomous model oversimplifies what security operations involve. Investigation is only one part of a functioning SOC. Organisations also depend on experienced practitioners to interpret ambiguous signals, manage escalation, and communicate risk to senior leadership. When incidents become business issues, that same expertise is required to apply judgement, coordinate stakeholders, and produce reporting that stands up to scrutiny.

When something goes wrong, such as a logging failure, a broken parser following a third-party firewall update, or months of missing telemetry, automated systems cannot resolve the issue alone. Human expertise is needed to understand context, reconstruct events, and guide remediation.

Governance is another constraint. The cost of false negatives remains unacceptably high, and security leaders are unlikely to deploy solutions that act without clear oversight. Even where AI can execute parts of a workflow, organisations still require process controls, quality checks, and human validation for complex or unfamiliar scenarios. A fully autonomous model cannot reliably make the right judgement call in every situation, particularly when decisions carry real business impact.

Accuracy risks also remain. AI systems can make mistakes, draw incorrect conclusions, or miss important signals if left unchecked. Human oversight is therefore essential to spot errors early and prevent them from turning into operational problems. Ultimately, fully autonomous SOC models ask organisations to trade human judgement and accountability for AI that is still maturing. That trade-off is impractical in an environment where consequences are measured in real-world disruption.

Why AI in the SOC is still essential

However, none of the above suggests that AI has no place in the SOC. Implemented with purpose, it delivers measurable improvements in the areas where teams are under the most pressure.

AI can take on repetitive, high-volume tasks such as alert triage and enrichment, allowing analysts to focus on more complex investigations, decision-making, and response. Deployed effectively, AI in the SOC is essential to reclaiming human time from low-value activity, enabling teams to apply expertise where it has the greatest operational payoff. Some of the most significant benefits of integrating AI agents into human-led SOC teams include:

  • Workload reduction: AI can handle repetitive, high-volume tasks such as alert triage, dynamic enrichment, and report generation, reducing analyst fatigue and operational backlog.
  • Process consistency: AI helps standardise workflows across varying skill levels, smoothing differences in tool syntax and operating procedures so teams perform more consistently.
  • Improved alert quality: By incorporating external threat intelligence, control telemetry, and asset context, AI can reduce false positives and support more accurate prioritisation.
  • Faster decision-making: Attack timelines, path mapping, and context-rich summaries enable analysts to assess scope, impact, and containment options more quickly.
  • Knowledge retention: AI working alongside human analysts captures operational insights over time, mitigating the impact of staff churn and preserving institutional knowledge. It can also identify patterns that may be missed by individuals and recommend rules or remediations accordingly.
  • Always on: AI doesn't need breaks, get tired, fall ill, take holidays, or turn up late. It becomes a consistently reliable coworker for stretched teams working under pressure.

Where augmentation delivers

AI delivers the greatest value when applied to SOC activities that are slow, manual, or prone to inconsistency, while keeping humans accountable for decisions and execution. Augmentation should be introduced first in areas where AI can speed up analysis, surface insight, and support judgement, without removing human oversight. Below are a few areas where you might consider using AI to augment your team:

  • Alert triage: False-positive reduction, dynamic enrichment, and contextual prioritisation using threat intelligence, asset criticality, and exposure data.
  • Augmented investigations: Natural language querying, attack path and timeline visualisation, and suggested queries that speed root-cause analysis.
  • Incident and case summarisation: Automated executive- and GRC-ready reporting that consolidates findings with clear, decision-ready context.
  • Hypothesis generation: Continuous pattern and behaviour analysis to surface new detections, investigative approaches, and remediation opportunities for human approval.
  • Operational oversight: AI that learns expected procedures and flags process deviations, bottlenecks, or underperformance for leadership attention.
  • Response recommendations: Context-aware guidance and playbook generation, with optional integration-driven execution remaining under human control.
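As a toy illustration of what contextual prioritisation could look like in practice, the sketch below combines asset criticality, threat-intelligence confidence, and internet exposure into a single triage score. Every field name and weight here is an illustrative assumption, not a description of any vendor's actual scoring model.

```python
# Toy sketch of contextual alert prioritisation: a triage score built from
# asset criticality, threat-intel confidence, and exposure. Weights and
# fields are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    asset_criticality: float   # 0.0 (lab box) .. 1.0 (crown-jewel system)
    intel_confidence: float    # 0.0 .. 1.0, from threat-intel enrichment
    internet_exposed: bool

def triage_score(a: Alert) -> float:
    """Weighted score; higher means investigate sooner."""
    score = 0.5 * a.asset_criticality + 0.4 * a.intel_confidence
    if a.internet_exposed:
        score += 0.1
    return round(score, 2)

alerts = [
    Alert("suspicious login", 0.9, 0.3, False),
    Alert("known C2 beacon", 0.4, 0.95, True),
]

# Work the queue highest-score first, leaving the decision to the analyst.
queue = sorted(alerts, key=triage_score, reverse=True)
```

In an AI-augmented workflow, a model would supply the enrichment inputs (for example, the intel-confidence value) rather than the final verdict, keeping the analyst accountable for what gets actioned.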

For security teams

Security teams manage millions of investigations every year, even after automating many routine cases. Automation can streamline that routine work, but full autonomy remains unrealistic: the most critical stages of an investigation still rely on human judgement, context and accountability.

AI will continue to enhance the speed, scale and consistency of security operations, but the SOC of the future will remain human-led, with AI augmenting, not replacing, analysts. Organisations that adopt AI in targeted, outcome-driven ways will scale more effectively, reduce risk and preserve institutional knowledge. As threats evolve, AI-augmented SOC teams will not only keep pace but stay ahead of adversaries.
