Cyber

Path to tackling Shadow AI

by Mark Rowe

The “Shadow AI” problem is emerging as a major business risk, rivalling cyber attacks as an equally dangerous threat to data loss, according to a tech consultancy.

During Cybersecurity Awareness Month this October, recent Gartner analysis warns that the most significant data loss risk may come not just from external hackers, but from unregulated AI tools already inside business networks. Dubbed “Shadow AI” – the unsanctioned use of external AI tools by employees – this has become one of the top emerging risks, it is claimed. While AI offers major potential in cyber attack and defence alike, the hidden tools already inside organisations may be quietly undermining data security efforts. This is not a distant scenario: teams are already using such tools to boost productivity, regardless of IT governance policy, suggests Leading Resolutions.

“Nefarious cyber actors don’t even need to steal sensitive data when your employees are unintentionally giving it away,” says Jon Bance, chief operating officer at Leading Resolutions. “Employees trying to boost their own productivity are inadvertently exposing sensitive corporate information via publicly accessible tools.

“It’s time to implement AI policy and address workforce training, not just to effectively capitalise on new technologies, but also to mitigate risks currently being introduced to their network.”

Jon adds that the thinking behind this avenue of data leaks is far from intentional. “Employee AI use isn’t driven by malicious intent. In the current SME climate, businesses are under increased pressure for faster, greater delivery. The absence of clear policies or approved tools within your technology stacks means individuals will naturally seek out the most effective support to get the job done themselves, unintentionally leading your organisation straight to critical data exposure.”

“This can be everything from developers downloading open-source models from unverified repositories to employees pasting sensitive client information into public generative AI tools. Everyone is aware of the existence of generative AI assets, but not necessarily their inherent risks. Additionally, third-party vendors are already, quietly, integrating AI-boosted features into software your teams may already be utilising, without formal notification.”

He argues that the path to tackling Shadow AI is through a cultural, strategic shift led by the C-suite. He says: “A balanced, strategic approach must come directly from the C-suite, as you cannot manage what you have not defined a clear framework for. Providing a safer, more secure alternative is the most effective way to combat Shadow AI. Don’t just say ‘no’, but provide the path of ‘yes, securely’.”

“You cannot protect against what you can’t see, so boosting your security monitoring toolkit with Data Loss Prevention (DLP), Cloud Access Security Brokers (CASB) and integrated SIEM alerts and escalation processes is just one of the ways to get on the path to maximum security.

“The first step for any organisation is to conduct a professional readiness assessment to identify gaps across technology, policy and monitoring capabilities. Prioritising AI use cases that can deliver tangible value without compromising control is key. This allows businesses to build a resilient, tailored AI roadmap, balancing innovation with necessary governance and security.”
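To illustrate the kind of check that the DLP and CASB tooling mentioned above automates, the sketch below is a minimal, hypothetical example (not drawn from Leading Resolutions’ own toolset or any specific product) of flagging outbound text that resembles sensitive data before it is pasted into a public generative AI tool. The patterns, function names and sample data are illustrative assumptions only.

    import re

    # Hypothetical illustration only: a minimal DLP-style check that flags text
    # resembling sensitive data before it leaves the network (for example, text
    # pasted into a public generative AI tool). Real DLP/CASB products inspect
    # traffic, context and file content far more thoroughly than this.
    PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "api_key_like": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    }

    def flag_sensitive(text: str) -> list[str]:
        """Return the names of any sensitive-looking patterns found in the outbound text."""
        return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

    if __name__ == "__main__":
        sample = "Client card 4111 1111 1111 1111, contact jane.doe@example.com"
        print(flag_sensitive(sample))  # ['credit_card', 'email_address']

In practice such checks sit inside DLP or CASB gateways and feed alerts into a SIEM for escalation, rather than running as standalone scripts; the point is simply that sensitive content can be detected and blocked before it reaches an unsanctioned external tool.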
