Alex Laurie, Senior Vice President at the vendor Ping Identity, discusses the risks attached to unsupervised AI agents.
The rise of agentic AI signals a new reality. Working behind the scenes to complete complex tasks efficiently, AI agents are creating excitement across industries for their ability to automate decision making; streamline tasks like forecasting, data analysis and threat detection; and enhance productivity at scale. For IT decision makers, however, the challenge is clear: how does agentic AI fit within their existing cybersecurity framework without introducing new risks?
Many organisations have already adopted large language models and generative AI. Employees may now be more comfortable with the tools, and leadership now understands the associated risks, but deploying agentic AI without a well-defined governance model can have serious implications for an organisation’s security posture, especially where identity systems are concerned.
These autonomous agents have operational power but also run the risk of unintentionally bypassing key security controls or creating blind spots in monitoring environments. Because AI agents can trigger workflows, request data and interface with multiple applications simultaneously, their access must be tightly managed. To prevent misuse, IT leaders must treat agentic AI as a core element of their cybersecurity strategies, not just a useful bolt-on. All AI use must be governed by clear access protocols, continuous oversight and integrated risk monitoring. The cost of not doing so is too high.
Identity systems are stretched
AI agents behave more like software processes than human users, yet they’re often expected to operate within identity systems designed for people. This mismatch is pushing traditional Identity and Access Management (IAM) tools to their limits. IAM solutions are built to prevent unauthorised access to sensitive data, but they struggle to distinguish a legitimate AI agent from malicious software posing as one. Until IAM systems can make that distinction, organisations face two major risks: either AI agents are blocked entirely and operations slow (or halt), or they are allowed through unchecked, opening new threat vectors for adversaries.
The threat of malicious agents
Attackers are already exploiting this gap. Malicious AI agents are being developed to cause disruption through automated phishing campaigns, deepfakes and vulnerability scanning – often by impersonating legitimate users. Some agents are also capable of changing their behaviour to avoid detection, making them harder to identify with traditional monitoring techniques. An adaptive IAM strategy is critical to countering this. It can automatically detect unusual behaviour, dynamically escalate authentication requirements and maintain trust in digital environments. This is especially crucial as almost 90 per cent of consumers fear AI-driven attacks on their digital identity, according to our 2024 consumer survey.
Four pillars for effective AI management
To securely integrate AI, IT teams must rethink their IAM strategies. Here are four key pillars to support that strategy:
1. AI user management: Just as with human users, agents require defined identities, access rights and behavioural monitoring, especially within critical infrastructure.
2. Adaptive policies: Static authentication isn’t enough. Adaptive access policies, which evaluate context before granting access, prevent agents from being handed blanket permissions.
3. Verification procedures: AI agents can’t complete traditional MFA flows, so keep a human in the loop when granting temporary permissions, and monitor high-risk actions.
4. Real-time monitoring: Continuous monitoring enables the quick detection of abnormal behaviours and ensures agents remain within their authorised roles and legal parameters.
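The four pillars above can be sketched as a minimal access gate for agent identities. This is an illustrative sketch, not any real IAM product’s API: every class, method and scope name here is hypothetical, and a production system would sit behind a proper policy engine.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """Pillar 1: an AI agent gets its own defined identity and access rights."""
    agent_id: str
    allowed_scopes: set
    risk_threshold: float = 0.5  # above this, a human must approve (pillar 3)

class AgentAccessGate:
    def __init__(self):
        self.registry = {}   # known agent identities
        self.audit_log = []  # pillar 4: every decision is recorded for monitoring

    def register(self, identity: AgentIdentity):
        self.registry[identity.agent_id] = identity

    def request_access(self, agent_id, scope, risk_score, human_approved=False):
        identity = self.registry.get(agent_id)
        if identity is None:
            # Unregistered (possibly malicious) agents are denied outright.
            decision = "deny:unregistered"
        elif scope not in identity.allowed_scopes:
            # No blanket permissions: only explicitly granted scopes pass.
            decision = "deny:out_of_scope"
        elif risk_score > identity.risk_threshold and not human_approved:
            # Pillar 2/3: contextual risk escalates to human verification.
            decision = "deny:needs_human_approval"
        else:
            decision = "allow"
        self.audit_log.append((agent_id, scope, risk_score, decision))
        return decision

# Usage: a forecasting agent may read reports, but a risky request
# is held until a human signs off.
gate = AgentAccessGate()
gate.register(AgentIdentity("forecast-bot", {"read:reports"}))

low_risk = gate.request_access("forecast-bot", "read:reports", risk_score=0.1)
high_risk = gate.request_access("forecast-bot", "read:reports", risk_score=0.9)
approved = gate.request_access("forecast-bot", "read:reports",
                               risk_score=0.9, human_approved=True)
```

The key design choice is that denial, not permission, is the default: anything unregistered, out of scope or above the risk threshold fails closed and is logged, which is what lets real-time monitoring spot an agent drifting outside its authorised role.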
Capitalising on AI-driven workflows
To alleviate concerns about AI use and realise its potential, IT managers must address their IAM systems first and foremost. Doing so is crucial to making AI agents an integral part of the wider security architecture. When properly managed, these agents can reduce operational bottlenecks, improve decision making and even assist in threat detection and response. Only then can organisations fully harness AI without compromising security, compliance or control. Agentic AI is here to stay, but it must be introduced responsibly, with security at its core.