
Risks of ‘Shadow AI’

by Mark Rowe

“Shadow AI” in the workplace is a growing risk, as employees turn to unapproved AI tools to meet deadlines and boost their productivity, a study suggests.

A survey of 2,000 respondents in the UK and US, carried out by Sapio Research on behalf of the cyber firm BlackFog, Inc. in November 2025, found that most (86pc) now use AI tools at least weekly for work-related tasks. However, more than one-third (34pc) admit to using free versions of company-approved AI tools, raising concerns about where sensitive corporate data is stored, processed, and accessed. Among respondents using AI tools not approved by their employer, 58pc rely on free versions, which often lack enterprise-grade security, data governance, and privacy protections.

A majority (63pc) of respondents believe it is acceptable to use AI tools without IT oversight if no company-approved option is provided. This 'speed outweighs security' mindset is reinforced by the fact that 60pc of respondents agree that using unsanctioned AI tools is worth the security risks if it helps them work faster or meet deadlines. Also, 21pc believe their employer would "turn a blind eye" to the use of unapproved AI tools as long as work is completed on time.

Other findings:

  • Senior-level leaders are more likely to accept risk: 69pc of respondents at President or C-level and 66pc of those at Director or Senior VP level believe speed trumps privacy or security. In contrast, only 37pc in administrative roles and 38pc in junior executive positions share this view.
  • Sensitive corporate data is being shared on unsanctioned AI tools: One-third (33pc) of employees have shared research or data sets, more than a quarter (27pc) have shared employee data such as staff names, payroll, or performance information, and 23pc have shared financial statements or sales data.
  • Third-party integrations heighten risk: Around half (51pc) of employees admit to connecting or integrating AI tools with other work systems or apps without IT department approval or oversight.

Dr Darren Williams, CEO and Founder of BlackFog, said: “This research is a stark indication not only of how widely unapproved AI tools are being used, but also the level of risk tolerance amongst employees and senior leaders. This should raise red flags for security teams and highlights the need for greater oversight and visibility into these security blind spots. AI is already embedded in our working world, but this cannot come at the expense of the security and privacy of the datasets on which these AI models are trained.”

Visit blackfog.com.

Meanwhile, according to the ThreatLabz 2026 AI Security Report from the cloud security vendor Zscaler, an enormous volume of activity is happening on "standalone AI" tools: ChatGPT logged 115 billion transactions in 2025, and Codeium logged 42 billion. "Embedded AI", meaning AI capabilities built directly into everyday enterprise SaaS applications and platforms, has become one of the fastest-growing sources of unmanaged risk. Because these features are often active by default and escape detection by legacy security filters, they create a back door for sensitive corporate data to flow into AI models without oversight, the firm says.
