It is no surprise that the unparalleled rise of ChatGPT and other generative AI apps quickly drew the attention of malicious actors. The platform acquired a million users in five days and surpassed 100 million users in just two months, playing right into the hands of cyber attackers who take advantage of the hype around popular new services for nefarious purposes, writes Ray Canzanese, Director of Threat Labs at cloud security company Netskope.
While ChatGPT is undoubtedly the most popular generative AI tool – with more than 8x as many daily active users as any other AI app – Google Bard is currently growing fastest, adding users at a rate of 7.1 per cent per week. On its current trajectory, Bard is on course to catch up with ChatGPT in just over a year.
As such, the true impact of AI use in businesses is still to be determined. Netskope’s recent Cloud and Threat Report, which analysed the habits of millions of users across thousands of enterprises, found that the number of users accessing AI applications increased by 22.5 per cent from May to June this year – and, at the current rate of growth, is set to double within the next seven months.
To block or not to block
Many organisations have reacted frantically by putting controls in place to block the use of ChatGPT entirely. This is the most draconian type of policy – users are not allowed any interaction with the app: they cannot log in, post prompts, or sometimes even visit the login page itself. Financial services leads the pack for this approach, with nearly 17 per cent of businesses blocking ChatGPT outright.
However, a total ban rarely eradicates – or even significantly reduces – the use of AI apps in businesses. Employees end up turning to shadow AI – secretly using AI on the job – to continue reaping the benefits these tools provide. Shadow AI means IT teams have even less visibility into what information is shared with these platforms, increasing the risk of sensitive data leakage and other cyber threats.
Instead of immediately implementing policies that block ChatGPT, Bard or other AI tools, organisations should encourage the safe adoption of AI apps by empowering employees to use their preferred tools while safeguarding the organisation from risk.
Empowering the user
There are a few methods companies can adopt that enable workers to experience the benefits of AI tools while also protecting the sensitive data of both the employees themselves and the wider organisation:
Alert policies: Alert policies are informational controls that provide visibility into how users in the organisation are interacting with AI apps like ChatGPT. They are often used during a learning phase to explore the effectiveness and impact of a blocking control, and are commonly converted to block policies once they have been tuned and tested.
User coaching policies: User coaching policies provide context to the users who trigger them, empowering each user to decide whether or not to continue. The user is typically reminded of company policy. For example, if company policy prohibits uploading proprietary source code to ChatGPT but the user is uploading open source code, they may opt to continue with the submission after the notification. For ChatGPT user coaching policies, users click proceed 57 per cent of the time.
Data loss prevention (DLP) policies: DLP policies enable organisations to allow access to ChatGPT while controlling the posting of sensitive data in prompts. They are often coupled with user coaching policies, so that users can be notified when the data they are posting to ChatGPT appears to be sensitive, and with alert policies, to give the organisation visibility into potentially sensitive data being posted. DLP policies are configured by the organisation to trigger on every post to ChatGPT and inspect the content of the post before allowing it through, as sketched below.
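To make the mechanics concrete, the sketch below shows, in rough terms, how a DLP check combined with user coaching might behave. It is a minimal illustration, not Netskope’s implementation: the regex patterns, pattern names and console prompt are assumptions standing in for the far richer detectors (trained classifiers, document fingerprinting, exact-match dictionaries) a production DLP engine would use.

```python
import re

# Illustrative patterns only; a production DLP engine uses richer detectors.
SENSITIVE_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def coach_user(prompt: str) -> bool:
    """Allow clean prompts; warn the user and let them decide on risky ones."""
    findings = inspect_prompt(prompt)
    if not findings:
        return True  # nothing sensitive detected, allow the post
    print(f"This prompt appears to contain: {', '.join(findings)}.")
    answer = input("Company policy restricts sharing sensitive data. Proceed anyway? [y/N] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    sample = "My deploy fails with key AKIAABCDEFGHIJKLMNOP - what is wrong?"
    if coach_user(sample):
        print("Prompt allowed (and logged for later review).")
    else:
        print("Prompt blocked before reaching the AI app.")
```

In practice these checks run inline in a security gateway or browser extension rather than on the user’s console, and every override or block is also logged, so the alert policies described above retain visibility into what is being shared.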
Protecting sensitive information
Data loss prevention policies are necessary because sensitive data is uploaded to generative AI apps multiple times per day. Netskope research found that source code accounts for the largest share of sensitive data being exposed to ChatGPT, at a rate of 158 incidents per 10,000 enterprise users per month.
For every 10,000 enterprise users, there are also typically 18 incidents per month of regulated data (encompassing financial data, healthcare information, and personally identifiable information) being shared, and four incidents per month of intellectual property (excluding source code) being shared. There are also approximately four incidents per month of passwords and keys being shared, usually embedded in source code – a crucial reminder to software engineers of the risks of hard-coding secrets. The temptation will remain, given ChatGPT’s ability to review and explain code, pinpoint bugs and identify vulnerabilities, but it comes with its own risks, as seen when Twitter’s source code was leaked onto GitHub earlier this year. Twitter considered the leak serious enough to take legal action against GitHub to remove the code and force it to reveal the identity of the leaker. It is no surprise that data loss protection policies focus on proprietary source code, passwords and keys, intellectual property, and regulated data.
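The underlying remedy is to keep secrets out of source code altogether, so that any code shared with an AI assistant carries nothing exploitable. The snippet below is a minimal sketch of that habit; the PAYMENTS_API_KEY variable name is a hypothetical example.

```python
import os

# Read the credential from the environment (or a secrets manager) instead of
# hard-coding it; code pasted into a generative AI tool then contains no secret.
API_KEY = os.environ.get("PAYMENTS_API_KEY")  # hypothetical variable name
if API_KEY is None:
    raise RuntimeError("PAYMENTS_API_KEY is not set; refusing to start.")

# The risky alternative a DLP policy would flag:
# API_KEY = "AKIAABCDEFGHIJKLMNOP"  # hard-coded secret leaks with the code
```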
Riding the hype
Netskope’s Cloud and Threat Report also found various ChatGPT scams, including phishing campaigns, malware distribution, and a rise in spam and deceitful websites, with more than 1,000 harmful URLs and domains exploiting ChatGPT’s reputation. Adversaries are also adopting LLMs in their own malicious endeavours: tools such as WormGPT and FraudGPT are available on the dark web and are being used to help author malware and phishing emails. These exploitation methods will only continue to grow in popularity and sophistication, and while the race is on to implement AI regulation, it has yet to catch up with the technology.
Enterprises face glaring challenges in defending against the new cyber risks that generative AI tools bring alongside their benefits. Organisations need to focus on implementing a robust approach to security – not simply blocking tools and hoping the problem goes away, but truly empowering users to navigate the hype and understand how they can better protect themselves in this new era.
About Ray Canzanese
Ray is the Director of Netskope Threat Labs, which specialises in cloud-focused threat research. His background is in software anti-tamper, malware detection and classification, cloud security, sequential detection, and machine learning. Visit www.netskope.com.





