As generative AI becomes deeply embedded in enterprise workflows, we face a new class of identity security challenges: the rise of AI agents as digital identities, says Theis Nilsson, vice president of global advisory practice at Omada, a provider of Identity Governance and Administration (IGA) products and services.
These machine entities, deployed in roles such as customer service assistants, legal aides, or documentation tools, require access to sensitive systems and data. Yet they also introduce complexity for accountability, data minimisation and ethical use.
The emergence of AI agents is quickly reshaping traditional IGA frameworks, while organizations simultaneously grapple with new regulatory challenges. At the same time, data sovereignty and localised infrastructure are becoming strategic priorities for EU bodies.
There’s a duality of AI in IGA: it is both a governance risk (requiring role-based control and monitoring) and a governance tool (capable of analysing access patterns, improving decision-making and simplifying user experiences). Organizations must learn to govern these new identities while also harnessing AI to improve security and operational efficiency.
The regulatory backdrop
One regulation looming large in the European Union is the Network and Information Security Directive 2 (NIS2), which is driving a re-evaluation of cloud versus on-premises deployments. The goal of NIS2 is to improve member states’ cybersecurity by instituting a shared level of security for network and information systems. The directive updates the original NIS Directive and mandates that companies establish more stringent cybersecurity measures and protocols for incident reporting.
This intersects with sovereignty concerns: organizations must strike a balance between availability and control. Regulations such as the EU AI Act’s ethics rules are also influencing how AI systems are governed. In light of NIS2 and similar regulations, organizations must weigh compliance, security and availability when choosing infrastructure strategies. Geo-security and data sovereignty in the public cloud, alongside the large base of virtualised on-premises infrastructure, are central to corporate risk assessments and to balancing risk appetite against cybersecurity resilience.
Generative AI’s new identity challenge
The market for AI agents is growing rapidly: one research firm forecasts it will grow from $5.1 billion (as of last year) to $47.1 billion by 2030. AI agents are being integrated into organizations across sectors faster than ever. A survey by AI developer platform LangChain found that about half (51 per cent) of organizations surveyed were already using AI agents in production, and 63 per cent of mid-sized companies were running live workloads via AI agents. This trend isn’t slowing down, and the determining factor in whether these projects succeed or fail will be how carefully companies consider their identity and governance models.
AI agents are now embedded in legal, finance, healthcare and customer service workflows – and they require access to systems, making them new types of identities. These are machine or non-human identities, and they need the same governance rigour as human accounts. Risks include over-permissioning, accountability gaps and ethical misuse. There’s a link to compliance, too; minimizing data exposure is both a legal and a security imperative.
Managing the subtleties of authorization is a prime element of AI agent-based ecosystems. Imagine the needs of an AI agent tasked with updating one or more employees’ calendars. This task requires having access to events, schedules and people’s availability. Another agent needs to gather sensitive financial data and perform transactions as it manages investment portfolios. In a third scenario, an AI agent needs to manage code repositories, merging bug fixes or new features.
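As a rough sketch of the idea (the names and scope strings here are hypothetical, not any specific IGA product’s API), each agent identity can be provisioned with an explicit, minimal set of scopes, and every action checked against that set:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """A non-human identity with an explicitly enumerated set of scopes."""
    name: str
    scopes: frozenset

def is_authorized(agent: AgentIdentity, scope: str) -> bool:
    """Allow an action only if the agent explicitly holds the required scope."""
    return scope in agent.scopes

# Three agents from the scenarios above, each with least-privilege grants.
calendar_agent = AgentIdentity("calendar-bot", frozenset({"calendar:read", "calendar:write"}))
portfolio_agent = AgentIdentity("portfolio-bot", frozenset({"portfolio:read", "portfolio:trade"}))
repo_agent = AgentIdentity("repo-bot", frozenset({"repo:read", "repo:merge"}))

# The calendar agent may update events, but cannot touch financial data
# or source code, because it was never granted those scopes.
assert is_authorized(calendar_agent, "calendar:write")
assert not is_authorized(calendar_agent, "portfolio:trade")
assert not is_authorized(calendar_agent, "repo:merge")
```

The design point is deny-by-default: an agent’s permissions are a closed list, so the portfolio agent cannot drift into merging code simply because it authenticated successfully.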
Here’s a real-world example that highlights the need for identity governance. In the healthcare industry, nurses and doctors need access to patient records, but they should only open the records that they have a professionally justified reason to see. Identity managers need to establish these controls for people, and they must do the same for AI agents.
It’s not secure or sustainable to grant indiscriminate, blanket access to all these resources and data sets. If, for instance, the AI agent managing investment portfolios also had access to source code repositories or calendar information, that could lead to regulatory compliance failures and/or data leakage. The real issue extends beyond verifying the identity of an agent; it’s about the ability to control every agent’s authorized behaviour in an ever-changing landscape.
Companies need a new approach to orchestrate all these human and machine identities. Next-gen IGA tools offer lifecycle automation, granular policy management and advanced authorization workflows. They combine all the information enterprises need to set and enforce rules for all types of identities. And to make sure permissions and trust boundaries stay in place as AI agents scale, IGA tools need to seamlessly mesh with leading-edge AI frameworks and generative models.
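One piece of that lifecycle automation can be sketched as time-bound grants that lapse automatically, so permissions do not linger after an agent’s task ends (a minimal illustration with hypothetical names, not a production IGA workflow):

```python
import datetime as dt
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    """A permission bound to an agent identity, with a built-in expiry."""
    agent: str
    scope: str
    expires: dt.datetime

def active_scopes(grants: list[Grant], agent: str, now: dt.datetime) -> set[str]:
    """Return only the scopes an agent still holds at the given moment."""
    return {g.scope for g in grants if g.agent == agent and g.expires > now}

grants = [
    Grant("doc-bot", "docs:write", dt.datetime(2025, 6, 30)),   # ongoing duty
    Grant("doc-bot", "repo:merge", dt.datetime(2025, 1, 31)),   # one-off task
]

# After 31 Jan 2025 the repo:merge grant has lapsed on its own;
# only the documentation scope remains, with no manual revocation step.
assert active_scopes(grants, "doc-bot", dt.datetime(2025, 3, 1)) == {"docs:write"}
```

Expiring grants by default keeps trust boundaries intact as agents scale: deprovisioning becomes the absence of a renewal rather than a cleanup task someone has to remember.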
IGA for AI agents
Generative AI is here to stay, bringing the promise of both business gains and security risks. AI agents need their own digital identities to perform their duties, but this muddies the compliance and safety waters. How can you give agents permission to do what they need to do while keeping corporate IP and data secure? The short answer is that next-gen IGA is getting an upgrade to consistently and dynamically manage all identities in an organization’s ecosystem. IGA users, for their part, must raise their demands and retain control over their data and information assets.