Cyber

Better prepare for a cyber breach

by Mark Rowe

Despite headline-grabbing breaches at global enterprises such as Jaguar Land Rover and M&S making daily news towards the end of last year, Mid-Market firms remain dangerously under-prepared for the domino effect of cyberattacks, says Jason Revill, Global Security Practice Technology Lead at Avanade.

Blue Yonder is one such example, where a breach in the supply chain software exposed vast amounts of data, including sensitive information. The outage led to a major UK disruption across regional supermarket chains, impacting fresh food deliveries, stock availability and produce flow. Peter Green Chilled, the UK food supplier, suffered a similar blow, resulting in thousands of cancelled orders and wasted produce, underscoring how operational fragility can ripple across the entire ecosystem. These examples highlight just how at-risk Mid-Market organisations are, making strengthened resilience in this sector essential.

In response, AI-powered security tools are emerging as a critical solution. Now commonly referred to as 'AI for Security', these capabilities help organisations detect, analyse and respond to threats much faster than traditional methods. But as businesses adopt generative AI (Gen-AI) and agentic AI across various functions and departments, often through third-party providers, they now face a second challenge: 'Security for AI'. With 77 per cent of organisations lacking the foundational data and AI security needed to protect models, data pipelines and cloud environments, there are four core AI risk areas they need to address:

  1. Data leakage from oversharing

Leadership teams need full visibility into how AI tools are deployed, where they interact with sensitive data and how they influence decisions, internally and across suppliers. Because so many Mid-Market firms outsource parts of their cybersecurity, visibility often becomes fragmented, especially when partners use AI tools that the organisation doesn't have complete oversight of.

To overcome this, businesses can create an internal map of AI data flows, define clear "no-go" categories of data that cannot be used in public AI tools, and enable automated discovery alerts that notify teams when data is shared or uploaded inappropriately. Establishing a named executive owner for AI data oversight will also ensure accountability.
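The "no-go" screening described above could, in its simplest form, be a pattern check applied to outbound prompts before they reach a public AI tool. The categories and regular expressions below are purely illustrative assumptions, not a real data-classification scheme:

```python
import re

# Hypothetical "no-go" categories: patterns the business has decided
# must never be sent to public AI tools. These regexes are illustrative
# examples only, not an exhaustive classification scheme.
NO_GO_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_national_insurance": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "internal_label": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def screen_prompt(text: str) -> list[str]:
    """Return the no-go categories matched by an outbound prompt.

    An empty list means the prompt is clear to send; a non-empty list
    should trigger a block and an automated discovery alert.
    """
    return [name for name, pattern in NO_GO_PATTERNS.items()
            if pattern.search(text)]

# Example: this draft prompt would be flagged before upload.
hits = screen_prompt("Summarise this CONFIDENTIAL supplier contract")
print(hits)  # ['internal_label']
```

In practice this sits behind a proxy or browser plug-in rather than in user code, and the match results feed the automated discovery alerts mentioned above.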

  2. AI vulnerabilities and malicious use

Shadow AI, the unsanctioned or unmonitored use of AI tools, spreads quickly, especially in fast-moving teams trying to boost productivity. Even well-intentioned use can create vulnerabilities.

A clear example of this came from the acting director of the US Cybersecurity and Infrastructure Security Agency, who uploaded sensitive government documents to ChatGPT's public chatbot, triggering security alerts.

Mid-Market leaders must develop clear, simple acceptable-use cases and rules for AI. They must pre-approve safe tools, provide template prompts and testing workflows, and require suppliers to follow the same monitoring tools and approved uses of AI, so that the technology cannot be abused.

  3. Lack of AI access control and lifecycle management

Many firms have mature identity and access management controls for people, yet they rarely apply the same rigour to AI systems. Agent identities, service accounts, connectors and orchestration tools can sprawl without ownership, creating silent pathways to sensitive data long after the pilot testing period ends. The answer is for business leaders to treat AI assets like privileged technology.

Assign identities to agents and tools, apply least-privilege roles, log all actions, and retire unused or expired identities in good time. Lifecycle management should span model selection, dataset curation, deployment approvals, version control and retirement so that, after an incident, leaders can answer who (or which agent) accessed what, when, and under whose request or authority. These disciplines also help contain the supply-chain blast radius when third-party services integrate deeply into daily operations.
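Treating agents like privileged accounts can be sketched very simply: each agent gets an identity, a named human owner, an explicit least-privilege scope, an expiry date that forces review after pilots end, and a log of every action. The class and field names below are assumptions for illustration, not any real product's API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                 # named human accountable for the agent
    scopes: frozenset          # least-privilege permissions
    expires: datetime          # forces review/retirement after pilots
    actions: list = field(default_factory=list)

    def authorise(self, scope: str, resource: str) -> bool:
        """Log the attempt and allow it only if in scope and unexpired."""
        now = datetime.now(timezone.utc)
        allowed = scope in self.scopes and now < self.expires
        self.actions.append((now, scope, resource, allowed))
        return allowed

# Hypothetical pilot agent with a single scope and a 90-day expiry.
agent = AgentIdentity(
    agent_id="invoice-summariser-01",
    owner="finance-it@example.com",
    scopes=frozenset({"read:invoices"}),
    expires=datetime.now(timezone.utc) + timedelta(days=90),
)

print(agent.authorise("read:invoices", "inv-2024-118"))  # True
print(agent.authorise("read:payroll", "pay-2024-07"))    # False: out of scope
```

The action log is what lets leaders answer "who accessed what, when, and under whose authority" after an incident; the expiry date is what stops pilot agents lingering as silent pathways to data.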

  4. Governance, risk and compliance (GRC)

Governance ties the previous three risks together. Without a simple, shared framework, rapid AI adoption becomes fragmented and risky, especially for Mid-Market firms trying to scale at pace to capture productivity gains.

The appropriate GRC approach defines acceptable AI use, data boundaries, testing standards, incident response processes, vendor obligations and employee responsibilities. Crucially, it extends to the supply chain, where outsourcing can dilute visibility and increase the risk of misaligned security practices.

Businesses should publish an AI acceptable-use standard or AI governance framework. They should also strengthen supplier contracts with right-to-audit clauses and shared control-evidence requirements, track AI compliance in one central view, and review and refresh it together as regulations evolve.
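The "one central view" of supplier compliance could be as modest as a single table mapping each supplier to the control evidence it has provided and the date each item was last verified. The supplier names and evidence fields below are illustrative assumptions:

```python
from datetime import date

# Evidence items the hypothetical governance framework requires from
# every supplier; field names are illustrative, not a standard.
REQUIRED_EVIDENCE = {"ai_acceptable_use_policy", "right_to_audit_clause",
                     "incident_response_contact"}

# Central view: supplier -> evidence held, with last-verified dates.
suppliers = {
    "LogisticsCo": {"ai_acceptable_use_policy": date(2025, 3, 1),
                    "right_to_audit_clause": date(2025, 3, 1)},
    "ChilledFoods Ltd": {"ai_acceptable_use_policy": date(2025, 1, 15),
                         "right_to_audit_clause": date(2025, 1, 15),
                         "incident_response_contact": date(2025, 1, 15)},
}

def compliance_gaps(evidence: dict) -> dict:
    """Return missing evidence per supplier for the periodic review."""
    return {name: sorted(REQUIRED_EVIDENCE - held.keys())
            for name, held in evidence.items()
            if REQUIRED_EVIDENCE - held.keys()}

print(compliance_gaps(suppliers))
# {'LogisticsCo': ['incident_response_contact']}
```

Running the gap report on a schedule, and refreshing `REQUIRED_EVIDENCE` as regulations evolve, is the "review and refresh it together" step in miniature.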

Proactive AI

AI success hinges on how well people understand it, use it and challenge it. Training gives employees the foundation to use AI-powered security tools responsibly and to avoid risky behaviours when interacting with general-purpose AI applications. But with the rise of agentic AI, systems that can take action, reason across data sources and assist autonomously, skills alone are not enough. Employees need to understand how these tools work with them, not instead of them.

Agentic AI security tools, such as Microsoft's Security Copilot, can augment and empower security teams by detecting and responding at machine speed, correlating signals that humans would never have time to uncover manually. They automate repetitive investigations, summarise complex incidents instantly and guide analysts through recommended actions.

Prevention

Mid-Market organisations are naturally agile and can make decisions faster than large enterprises. While this agility is a strength, without structured governance, rapid adoption of general-purpose AI tools can introduce inconsistency and risk across systems and data.

In parallel, AI-powered cybersecurity capabilities give organisations earlier visibility of emerging threats, turning prevention into a strategic asset. The organisations that strengthen these frameworks now will be the ones in a much better position to protect operations, customers and long-term value. The Mid-Market cannot wait for another breach before taking decisive action.
