Every year, the cybersecurity industry rolls out predictions of increasingly “sophisticated” threats. But during my 20 years in the industry, I’ve learned that attackers rarely need sophistication; they just need us to keep making the same mistakes while technology accelerates around us, says Beth Miller, Field CISO at Mimecast.
This year, those mistakes are exponentially more costly. The convergence of global sporting events, AI infrastructure concentration, and workforce burnout creates perfect conditions for attacks that exploit human behaviour and organisational blind spots rather than technical vulnerabilities. Here’s what’s actually changing, and what you can do about it before the damage is done.
AI Infrastructure: The New Financial Sector
The AI gold rush is in full swing. Companies building the digital picks and shovels, from chip providers to hyperscale datacentres, are now crown jewels for cyber adversaries. Proprietary AI models, training data and compute-heavy systems concentrate sensitive IP and privileged access in one place, making them high-value targets.
The human attack surface is expanding just as fast. Employees at AI infrastructure companies face LinkedIn impersonation schemes: fake recruiters, impostors posing as venture capitalists, or “colleagues” requesting urgent access. These aren’t random, uncoordinated phishing attempts. Attackers can research org charts, mimic communication styles, and exploit the high-trust cultures that define AI companies.
Supply chain complexity dramatically amplifies these risks. A single compromised vendor can facilitate espionage, IP theft, or ransomware. Nation-state actors, including China, are likely to infiltrate these organisations, while e-crime groups will look to exploit even brief infrastructure outages for premium ransoms.
Strengthening AI infrastructure security goes beyond basic controls. Organisations need to raise the bar on third-party monitoring, tighten employee verification and establish clear AI governance. By applying zero-trust principles and running regular attack simulations, teams build resilience, ensuring they’re prepared to respond effectively without sacrificing agility.
Cybercriminals Go for Gold
When millions of fans across continents attempt to buy tickets, stream events, and engage with official apps during the 2026 Winter Olympics and FIFA World Cup, they’ll create an attack surface that’s genuinely unprecedented in scale and complexity.
Cybercriminals and nation-state actors are preparing sophisticated, AI-powered phishing campaigns targeting fans, volunteers and staff. These attacks exploit trust and excitement around major events, delivering messages nearly indistinguishable from legitimate communications.
The threats won’t stop at phishing. Ransomware could disrupt ticketing and broadcasts, while DDoS attacks interrupt live streams and event apps at critical moments. Deepfakes may be used to manipulate public perception, as e-crime groups and nation-state actors, particularly Russia and China, exploit these events for influence operations and large-scale data collection targeting fans, teams and sponsors.
Protecting these events requires layered, proactive security, including event-specific social engineering defences, stronger identity controls and AI-driven threat intelligence. Organisations must monitor for impersonation, fraudulent domains and fake ticketing sites, while ensuring backup and recovery plans are tested to withstand live attacks. Anticipating threats across people, technology and operations is essential to protecting fan trust and business continuity.
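One layer of that monitoring can be made concrete. The sketch below flags lookalike (“typosquat”) domains by edit distance against a list of official event domains; the domain names, the two-edit threshold and the helper names are illustrative assumptions, not real monitoring targets or a production tool.

```python
# Minimal typosquat screen: flag observed domains that sit a small but
# nonzero edit distance from an official domain (likely lookalikes).
# All domain names here are illustrative.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def flag_lookalikes(observed, official, max_dist=2):
    """Return (domain, legit, distance) for near-matches to official domains."""
    hits = []
    for dom in observed:
        for legit in official:
            d = edit_distance(dom, legit)
            if 0 < d <= max_dist:   # 0 would be the official domain itself
                hits.append((dom, legit, d))
    return hits

official = ["fifatickets.com"]
observed = ["fifatickets.com", "fifa-tickets.com",
            "fifatlckets.com", "example.org"]
print(flag_lookalikes(observed, official))
```

In practice this check would run against newly registered domains and certificate-transparency feeds, but the core similarity test is the same.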
Managing Alert Overload
Security operations centres (SOCs) remain buried under relentless alert volumes, most of them noise. Analysts spend hours triaging false positives, only to see queues refill faster than they can clear them. The result is burnout, missed signals, and a backlog that never disappears. In 2026, AI will become a structural part of security operations.
AI-powered systems can pull in context, correlate data, group related alerts, and resolve routine incidents autonomously. They learn from each alert, continuously adjusting to emerging threats. What once required days can now be resolved in minutes, freeing teams for higher-value work.
As AI absorbs labour-intensive triage, human analysts shift from constant reaction to orchestration and strategic risk management. Success depends on pairing automation with oversight, delivering real-time risk feedback, and auditing for blind spots. Training analysts as AI orchestrators ensures that automation enhances resilience rather than replacing human judgment, ultimately turning alert fatigue into a strategic advantage and accelerating response times without compromising control.
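The grouping step described above can be sketched in a few lines. Assuming alerts arrive as simple records with a source host, a triggering rule and a timestamp (illustrative field names, not a real SIEM schema), related alerts can be collapsed into buckets before an analyst ever sees them:

```python
# Toy alert correlator: collapse alerts sharing a source host and rule
# within a short time window, so analysts triage groups rather than
# individual alerts. The fields and 5-minute window are assumptions.
from collections import defaultdict

WINDOW = 300  # seconds per correlation bucket

def group_alerts(alerts):
    """alerts: list of dicts with 'host', 'rule', 'ts' (epoch seconds).
    Returns buckets keyed by (host, rule, time-window slot)."""
    buckets = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["ts"]):
        key = (a["host"], a["rule"], a["ts"] // WINDOW)
        buckets[key].append(a)
    return buckets

alerts = [
    {"host": "web-01", "rule": "brute-force", "ts": 10},
    {"host": "web-01", "rule": "brute-force", "ts": 95},
    {"host": "db-02",  "rule": "port-scan",   "ts": 40},
]
groups = group_alerts(alerts)
print(len(groups))  # repeated brute-force alerts collapse into one group
```

Real platforms correlate on far richer context, but even this crude bucketing shows how a queue of raw alerts shrinks into a handful of triageable incidents.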
Email Remains Ground Zero
Phishing is evolving, not fading. Email remains the primary entry point for cyberattacks, driving up to 90 per cent of breaches as AI makes lures more personalised and convincing. Incidents have climbed from 60 per cent to 77 per cent over the past year, as collaboration tools push more work into email. Attackers are shifting to highly targeted strikes, impersonating executives and employees and layering in deepfake audio and video to increase pressure and urgency.
Defence requires an adaptive security posture: AI-driven filtering must detect subtle, contextual threats, while continuous, scenario-based training keeps employees alert to evolving techniques. High-value targets such as executives and finance teams require deeper safeguards, while the broader workforce should have fast, frictionless ways to flag suspicious messages. Incident simulations, collaboration tool monitoring and prompt escalation protocols collectively create a culture of vigilance, ensuring that one email does not become the gateway to a breach.
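A crude, rule-based stand-in for the filtering described above can make the idea concrete. The signals, weights, and the `@example.com` corporate domain below are illustrative assumptions, not a production model:

```python
# Rule-of-thumb email risk scorer: sums a few classic impersonation
# signals. Weights and keyword list are illustrative, not tuned.
URGENCY_WORDS = {"urgent", "immediately", "wire", "gift card", "overdue"}

def risk_score(msg: dict) -> int:
    """msg keys: 'from', 'reply_to', 'display_name', 'body' (assumed schema)."""
    score = 0
    body = msg["body"].lower()
    # Pressure and urgency language, common in executive impersonation
    score += 2 * sum(w in body for w in URGENCY_WORDS)
    # Reply-to diverging from the visible sender is a classic BEC tell
    if msg["reply_to"] and msg["reply_to"] != msg["from"]:
        score += 3
    # Display name claims an executive while the address is external
    if "ceo" in msg["display_name"].lower() and not msg["from"].endswith("@example.com"):
        score += 5
    return score

suspicious = {
    "from": "ceo@gmail.example",
    "reply_to": "pay@evil.example",
    "display_name": "CEO Jane Doe",
    "body": "Please wire the funds immediately.",
}
print(risk_score(suspicious))
```

Modern AI filters weigh hundreds of contextual features rather than three hand-written rules, but the underlying question is the same: does this message look like who it claims to be from?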
Shadow AI, the New Shadow IT
The human attack surface is also expanding. As organisations cut headcount and raise productivity, expectations are stretched to the breaking point. Insider risk is increasingly becoming a reflection of stress and mental fatigue. Such intense pressures create conditions in which errors, mishandled data, and sometimes deliberate exfiltration can compromise security.
Employees are turning to unsanctioned ‘shadow AI’ tools – pasting proprietary data into external systems, training personal models on company information, or forwarding work emails to personal Gmail accounts to cope with workload. This latter practice is particularly dangerous, as personal email accounts may have AI features enabled that scan and analyse content, potentially exposing confidential business information to third-party AI training or analysis. By mid-2026, many enterprises may face ten times as many rogue AI agents or orphaned bots as unauthorised cloud apps, each a potential insider threat.
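The email-forwarding risk above is one of the easier signals to detect. A minimal sketch, assuming outbound mail-flow events expose sender and recipient addresses (the event format and the list of personal webmail domains are assumptions for illustration):

```python
# Sketch: flag outbound mail events whose recipient domain is a
# personal webmail provider, a simple data-exfiltration signal.
PERSONAL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}

def flag_personal_forwards(events):
    """events: iterable of dicts with 'sender' and 'recipient' addresses."""
    flagged = []
    for e in events:
        domain = e["recipient"].rsplit("@", 1)[-1].lower()
        if domain in PERSONAL_DOMAINS:
            flagged.append(e)
    return flagged

events = [
    {"sender": "alice@corp.example", "recipient": "alice.work@gmail.com"},
    {"sender": "bob@corp.example",   "recipient": "partner@vendor.example"},
]
print(flag_personal_forwards(events))
```

A real deployment would also inspect auto-forwarding rules and mailbox configuration rather than individual messages, but even this simple filter surfaces the pattern worth investigating.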
Attackers are also courting insiders and probing outsourced operations in regions with weaker controls. Managing this convergence of human and AI risk requires treating people, AI agents, and access decisions as a connected risk surface. Organisations must monitor behaviour intelligently, govern AI tools as carefully as employees, and recognise that workforce well-being is inseparable from security.
Defences in a Connected Threat Landscape
The challenge this year isn’t new attack techniques; it’s that long-standing gaps are widening simultaneously. Concentrated AI infrastructure, persistent email vulnerabilities, unmanageable alert volumes, and shadow AI proliferation are expanding the attack surface faster than most organisations can track.
The organisations that will thrive are the ones ready to embrace the human-AI partnership: protect their employees, govern AI agents, automate routine workloads, and build agile programmes that can respond in real time. The challenges are real, but they’re not insurmountable if organisations stop chasing sophistication and start addressing reality.