A survey of more than 250 security professionals worldwide, conducted at RSA Conference 2024 in the United States and at Infosecurity Europe 2024 in London in June, suggested that while security professionals take a laissez-faire attitude towards Shadow SaaS, they have adopted a more cautious approach to GenAI (generative artificial intelligence), according to the vendor Next DLP.
Most admitted to using Shadow SaaS – that is, cloud-based ‘software as a service’ applications not provided by their company’s IT team – in the past year, despite knowing the risks of unauthorised tools, such as data loss, lack of visibility and control, and data breaches.
Half of the respondents – each asked ten questions – stated that AI use had been restricted to certain job functions and roles where they worked, while 16 per cent had banned the technology completely. Nearly half (46 per cent) of organizations have tools and policies in place to control employees’ use of GenAI.
Next DLP’s Chief Security Officer (CSO), Chris Denbigh-White, said: “Security professionals are clearly concerned about the security implications of GenAI and are taking a cautious approach. However, the data protection risks associated with unsanctioned technology are not new. Awareness alone is insufficient without the necessary processes and tools. Organizations need full visibility into the tools employees use and how they use them. Only by understanding data usage can they implement effective policies and educate employees on the associated risks.”
Among the other findings: 40 per cent of the security professionals surveyed do not think employees properly understand the data security risks associated with Shadow SaaS and AI, yet organisations are doing little to combat the risk. Only a minority (37 per cent) had developed clear policies and consequences for using these tools, and even fewer (28 per cent) were promoting approved alternatives to combat usage. Only half had received guidance and updated policies on Shadow SaaS and AI in the past six months, with one in five admitting to never having received this. Nearly one-fifth were unaware of whether their company had updated policies or provided training on these risks, indicating a need for further awareness and education.
Denbigh-White added: “Clearly, there is a disparity between employee confidence in using these unauthorized tools and the organization’s ability to defend against the risks. Security teams should evaluate the extent of Shadow SaaS and AI usage, identify frequently used tools, and provide approved alternatives. This will limit potential risks and ensure confidence is deserved, not misplaced.”
Visit www.nextdlp.com.