TESTIMONIALS

“Received the latest edition of Professional Security Magazine, once again a very enjoyable magazine to read, interesting content keeps me reading from front to back. Keep up the good work on such an informative magazine.”

Graham Penn
ALL TESTIMONIALS
FIND A BUSINESS

Would you like your business to be added to this list?

ADD LISTING
FEATURED COMPANY
IT Security

Employee trust is key to AI adoption 

by Mark Rowe

The adoption of AI in enterprises is accelerating steeply, and it’s hitting a critical inflection point, says Sunil Agrawal, Chief Information Security Officer at the platform Glean.

The potential value that AI can provide enterprises has been well-proven, and organisations across all industries are working to implement it into every workflow imaginable. However, as AI evolves into more autonomous workhorses, it’s clear that current security approaches, which are heavily reliant on acceptable use policies and manual human oversight, are neither scalable nor sufficient to address the expanding risk landscape. This environment poses considerable risks to all enterprise data, as well as setting up users to lose trust in AI autonomy.

Significant advancements need to be made in terms of security strategies to accommodate the growing complexity and autonomy of AI systems. The rise of open-source tools has further complicated the security landscape. Users need to ensure that their SaaS providers are embedding agent security directly into their products. Furthermore, these security measures need to be made with employee trust in mind and ensure processes and results follow strict adherence to permissioning rules for all input data. This is vital for building “secure by design” AI solutions that minimise risk and encourage users to experiment with new tools without hesitation and uncertainty.

AI systems

AI models’ cooperative design creates inherent security vulnerabilities that attackers exploit through sophisticated prompt injection techniques. These systems, engineered to be helpful and accommodating, become susceptible when malicious actors disguise harmful requests as hypothetical scenarios, roleplay exercises, or urgent circumstances that compel the AI to bypass safety protocols. This fundamental tension between AI utility and security extends beyond technical fixes to the core philosophy of AI development.

The threat landscape further evolves as attackers develop new methods beyond early exploits like the DAN (Do Anything Now) prompt. Multi-model environments with inconsistent protection levels create additional vulnerabilities, with threats varying by application; customer service agents face social engineering attacks targeting confidential data, while code generation tools encounter malicious injections designed to compromise development environments, for example.

To add to the challenge, enterprise data environments are often chaotic, with outdated permission structures that haven’t kept pace with data growth. Manual classification of sensitive content cannot match AI deployment speeds, creating dangerous gaps between security theory and practice that burden enterprise customers with complex data preparation tasks requiring specialised expertise. Much like the security shortcomings that arose during the first years of SaaS sprawl, organisations need to get ahead of these concerns before progressing headfirst into deep AI integration.

Reliable agents

Firstly, establishing reliable AI agents requires a foundation of strict permission boundaries combined with a comprehensive organisational context. This enables systems to understand company operations while maintaining ironclad protection around sensitive information. The success of AI adoption in enterprise environments fundamentally depends on maintaining strict adherence to document permissioning rules, as workers must have absolute confidence that confidential information will remain protected when interacting with AI systems. Without this guarantee, users quickly lose trust in the platform, undermining both its practical usability and the broader adoption of potentially transformative AI technologies.

Properly implemented permissioning systems also ensure that generative AI can only access and use data it’s explicitly authorised to use, regardless of the application or user involved. This creates a secure foundation where AI assistants can operate with full permissions awareness while sourcing only information that individual users have legitimate access to. At the same time, data security is maintained, and the relevance and personalisation of AI outputs are enhanced.

Furthermore, in a multi-model world, a layered security approach with permissions-enforced data connectors and robust access controls is necessary to prevent prompt injection attempts from resulting in data leaks and unauthorized access. Strict permissioning enforcement also drastically reduces the impact of prompt injections should they happen, as the underlying system will only return data that the user is authorized to access, regardless of the prompt provided.

There also needs to be a fundamental shift in how AI companies approach enterprise partnerships, moving beyond simply providing tools to actively collaborating in data preparation and security implementation. Often, strict permission enforcement isn’t enough due to drift from user-generated content, or because companies themselves set permissions incorrectly at the base level. Automatic sensitive data detection capabilities provide a necessary safeguard that functions regardless of human error. However, in order to accomplish this, this detection capability must be coupled with enterprise context—it needs to understand who’s accessing shared data, whether they should be accessing said data according to role and responsibilities, and understand what content is inappropriately shared and needs to be surfaced.

This means developing automated systems for sensitive data detection, designing security measures tailored to specific use cases, and creating streamlined processes that help organizations make their data AI-ready. Only through this kind of proactive partnership can the AI industry build the trust necessary for widespread enterprise adoption, ensuring that organisations can harness the transformative power of AI while maintaining the security and confidentiality that their stakeholders demand. Since agents also go beyond LLM calls, integrating enterprise data and actions, partnerships must be proactive and look beyond models to wider system guardrails and implementation.

Foundation

Securing agents end-to-end starts with purpose-built data governance. Each agent’s scope must be well defined, ensuring that the data processed and actions taken do not go beyond that the user’s intended. Agents given increased scope need be to given explicit permissions by users before actions happen on their behalf. Furthermore, make sensitive actions deterministic with required humans in the loop for approval—it’s essential to go beyond just agent governance, ensuring that AI models aren’t acting with misaligned intent. Fortify platforms against prompt injection, and continuously monitor outcomes against intent.

This layered approach, combining both agent security and continuous improvement through evaluation of accuracy, completes, and instruction following, keeps agents adequately protected for everyday enterprise usage. Using both traces and looking at overall agent quality and security ensure reliability and delivers trustworthy, auditable automation without slowing teams down.

A compelling pattern emerges in AI implementations: strong security and guardrails prioritizing user trust typically correlate with better performance. Systems that minimise hallucinations maintain higher accuracy, while agents operating within defined boundaries and security rules, consistently outperform unrestricted ones. While building properly permissioned and secured AI systems is complex and costly, hitting the balance between successfully leveraging and integrating this enterprise data while protecting it will define successful AI transformation.