Interviews

Generative AI for enterprises

by Mark Rowe

Despite what some might think, no, bots probably won’t come looking for your job, but they may well attack your intellectual property, says Ravi Pather, VP EME, Ericom Security by Cradlepoint, a wireless WAN and zero trust security product company.

Generative AI (GenAI) has become a transformative technology for businesses in more ways than one. In 2024, its widespread adoption will undoubtedly have an impact on organisations across all industries, resulting in increased productivity and efficiency. However, GenAI can be a double-edged sword: organisations need to tread carefully when assessing security risk, especially where data protection is concerned.

Research by Harvard Business School in 2023 showed that implementing Generative AI could increase employee productivity by up to 40%, while introducing new data security challenges. One main concern: employees who use GenAI to perform work tasks on a daily basis may unintentionally expose sensitive data to the technology’s Large Language Models (LLMs). Today, in addition to the many other security issues that organisations need to be aware of, they need to protect themselves from the potential threats posed by this powerful and prolific tool.

Benefits in business

Generative AI stands out for its remarkable ability to generate content, automate software development, improve customer interactions through chatbots, and optimise support operations. According to Gartner, an overwhelming majority of companies have already begun to integrate this technology into their processes, demonstrating its transformative potential. Gartner predicts that within two years more than 80% of companies will be using Generative AI APIs (application programming interfaces) and models, or will have deployed dedicated GenAI applications in production environments, up from less than 5% last year.

Risks associated with Generative AI

However, the integration of Generative AI into business practices is not without significant concerns. The ease with which employees can access and use these tools increases the risk of accidentally exposing confidential information, a concern exacerbated by the ability of these systems to process vast amounts of data. In addition, the training of these systems on data available online raises legitimate questions around copyright and intellectual property. Biases present in the training data can also lead to questionable results, highlighting the need for an ethical and critical approach to deploying Generative AI.

Security solutions for Generative AI

The rapid increase in the use of these technologies in the enterprise underscores the urgency of developing security solutions that keep up with this growth. The implementation of data loss prevention solutions based on zero trust technology offers a promising approach, enabling the secure use of Generative AI.

Zero trust architecture, which air-gaps the use of GenAI apps in secure, isolated cloud containers, provides true protection. Organisations can easily implement this clientless solution to set access policies for users, rather than blocking GenAI sites outright. For example, organisations can stop users from entering personally identifiable information (PII), or disable the copy/paste function, which risks sensitive corporate data flowing into LLMs and being shared outside the organisation. In addition, GenAI isolation also protects users' devices and corporate networks from any malware generated by a GenAI tool or transmitted from a malicious source.
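To illustrate the kind of policy check described above, here is a minimal sketch in Python of a DLP-style filter that scans an outbound GenAI prompt for common PII patterns and blocks it on a match. The pattern set, function name, and regexes are hypothetical examples for illustration, not a representation of Ericom's actual product logic.

```python
import re

# Hypothetical PII patterns a DLP policy might screen for before a prompt
# is allowed through to a GenAI app. Real products use far richer detection.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_pii_types) for an outbound GenAI prompt."""
    matches = [name for name, pattern in PII_PATTERNS.items()
               if pattern.search(prompt)]
    return (not matches, matches)

allowed, found = check_prompt("Summarise this contract for jane.doe@example.com")
print(allowed, found)  # → False ['email'] — the prompt contains an email address
```

In a clientless zero-trust deployment, a check like this would run in the isolation layer rather than on the endpoint, so the policy applies regardless of the user's device or browser.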

Generative AI represents a unique opportunity for business innovation, but it also requires special attention to the security risks it can generate. By adopting advanced security strategies, organisations can harness the full potential of this technology while ensuring the protection of their most valuable assets. The balance between innovation and security will be the key to successfully navigating the digital future.



© 2024 Professional Security Magazine. All rights reserved.
