Mitigating the business risk of Shadow AI

by Mark Rowe

Artificial intelligence (AI) tools are spreading rapidly through workplaces and transforming the way everyday tasks are carried out. Marketing teams are using GenAI to shape campaigns while software engineers are testing the limits of code generators. Bit by bit, AI is weaving itself into every part of business life. But problems arise when much of this is happening beneath the surface without oversight or governance, writes Art Gilliland, CEO at Delinea.

That’s where shadow AI – a growing blind spot for security teams – enters the fold. Unmanaged and unauthorised use of AI tools is rising quickly, and it will continue to do so unless organisations take a fresh look at their approach to AI policy.

Research from Delinea shows that most large companies have already introduced some form of AI policy, with 91 per cent of organisations employing 50 or more people reporting a policy in place. But drafting a policy is the easy part. Building one that’s practical, enforceable and aligned with both business goals and the pace of innovation is far more complex.

For CIOs, the solution isn’t to outlaw AI altogether. Instead, the focus should be on creating adaptable guardrails that enable experimentation while managing risk. The urgency is clear: 93 per cent of organisations have already faced at least one instance of unauthorised shadow AI use, and over a third have seen multiple cases. These figures underline a worrying truth: formal policies are in place, yet employees continue to use AI tools in ways that slip through the cracks of corporate oversight. How can organisations address this challenge?

Set up governance and guardrail frameworks

To get ahead of AI risks, organisations need AI policies that encourage AI usage within reason – and in line with their risk appetite. However, they can’t do that with outdated governance models and tools that aren’t purpose-built to detect and monitor AI usage across their business.

Choose the framework that’s right for you

There are already a number of frameworks and resources available – including guidance from the Department for Science, Innovation and Technology (DSIT), the AI Playbook for Government, the Information Commissioner’s Office (ICO), and the AI Standards Hub (led by BSI, NPL and The Alan Turing Institute). These resources can help organisations build a responsible and robust framework for AI adoption, and they complement international standards from bodies such as the International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC) and the Organisation for Economic Co-operation and Development (OECD).

Invest in tools that increase visibility

As a business establishes its roadmap for AI risk management, it’s crucial that the security leadership team starts by assessing what AI usage really looks like in the organisation. That means investing in visibility tools that can analyse access and behavioural patterns to find generative AI usage in every nook and cranny of the business.
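
As a rough, hypothetical sketch of what such visibility tooling might do – not a description of any particular product – the following Python script counts requests to a small watch-list of well-known GenAI endpoints in an exported proxy log. The file name, column names and domain list are all illustrative assumptions.

```python
import csv
from collections import Counter

# Hypothetical watch-list of well-known GenAI endpoints; a real deployment
# would maintain a far larger, regularly updated set.
GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per user to known GenAI domains.

    Assumes a CSV export with 'user' and 'host' columns; adjust the
    column names to match whatever your proxy's log schema actually uses.
    """
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = (row.get("host") or "").lower()
            # Match the domain itself and any of its subdomains.
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[row.get("user") or "unknown"] += 1
    return hits

if __name__ == "__main__":
    # Surface the heaviest users so the security team can follow up.
    for user, count in scan_proxy_log("proxy_export.csv").most_common(10):
        print(f"{user}: {count} GenAI requests")
```

In practice this kind of detection would sit inside a secure web gateway or similar control point rather than a standalone script, but the principle is the same: match outbound traffic against a maintained list of AI services and surface the heaviest users for review.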

Form an AI council

With that information in hand, the CISO should consider establishing an AI council made up of stakeholders from across the organisation – including IT, security, legal and the C-suite – to weigh the risks, compliance issues and benefits of the authorised and unauthorised tools already permeating the business environment. This council can start to mould policies that meet business needs in a risk-managed way.

For example, the council may notice that an unsafe shadow AI tool has taken off, but that a safer alternative exists. A policy can then explicitly ban the unsafe tool while pointing users to the approved one. Often these policies will need to be paired with investment not only in security controls but also in those alternative AI tools. The council can also create a route for employees to submit new AI tooling for vetting and approval as advancements come to market.
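
To make the ban-plus-alternative idea concrete, here is a minimal, hypothetical sketch of how such a policy lookup might behave. The tool names, statuses and messages are invented for illustration; they are not drawn from any real register.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PolicyEntry:
    status: str                  # "approved" or "banned"
    alternative: Optional[str]   # vetted replacement to suggest, if any

# Illustrative policy entries: each banned tool points to a vetted alternative.
AI_TOOL_POLICY = {
    "unvetted-chatbot": PolicyEntry("banned", "approved-enterprise-llm"),
    "approved-enterprise-llm": PolicyEntry("approved", None),
}

def check_tool(name: str) -> str:
    entry = AI_TOOL_POLICY.get(name)
    if entry is None:
        # Unknown tools route into the AI council's vetting queue.
        return f"'{name}' has not been vetted; submit it for AI council review."
    if entry.status == "banned":
        return f"'{name}' is banned; use '{entry.alternative}' instead."
    return f"'{name}' is {entry.status} for use."

print(check_tool("unvetted-chatbot"))
print(check_tool("new-code-assistant"))
```

The point of the "not yet vetted" branch is the submission route described above: unknown tools are neither silently allowed nor silently blocked, but funnelled into the council’s review process.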

By creating this direct, transparent line of communication, employees can feel reassured that they are adhering to company AI policy, empowered to ask questions, and encouraged to explore new tools and methods that could support growth down the line.

Prioritise proactive AI policy training

Engaging and training employees will play a crucial role in getting organisational buy-in to keep shadow AI at bay. With better policies in place, employees will need guidance on the nuances of responsible AI use, why certain policies are in place, and the risks of mishandling data. This training can help them become active partners in innovating safely. In some sectors, the use of AI in the workplace has often been treated as a taboo topic. Clearly outlining best practice for responsible AI usage and the rationale behind an organisation’s policies and processes can eliminate uncertainty and mitigate risk.

Safeguarding the future of AI

Shadow AI isn’t disappearing any time soon. As GenAI tools become further embedded in day-to-day work, the scale of the challenge will only grow. Leaders now face a choice: treat shadow AI as a disruptor or seize it as a catalyst to redefine governance. The organisations that succeed will be those that welcome innovation within clear boundaries, ensuring that AI remains both safe and transformative along the way.
