The world’s most comprehensive Artificial Intelligence (AI) regulation to date, the EU’s AI Act has far-reaching consequences for businesses that develop, deploy and govern AI. It applies to every business operating in the EU, from large global enterprises to fast-scaling startups, all of whom must comply with its risk-based regulatory framework, writes Randolph Barr, CISO at API security company Cequence Security.
Introduced in June 2024, the Act is due to become fully applicable in August 2026, although some parts are being brought in before that date. The ban on AI systems that pose unacceptable risks has applied since February 2025, Codes of Practice followed from March 2025, and rules requiring general-purpose AI systems to meet transparency obligations apply from August 2025. High-risk systems will have more time to comply – until August 2027 in some cases.
The risk-based nature of the regulations means that, from the CISO’s perspective, this is not just another obligatory set of rules but an unprecedented opportunity to elevate security, risk, and governance functions. The AI Act provides the mandate that many of us have been waiting for to move data governance, observability, and AI lifecycle security from back-burner initiatives to board-level priorities.
One of the most important things to recognise is that the EU AI Act applies a risk-based framework. The EU Parliament’s stated aim is to ensure that AI systems used in the EU are “overseen by people, rather than by automation, to prevent harmful outcomes”, which reveals the high priority given to maintaining governance and oversight. The framework identifies the level of risk associated with an AI system through two primary classifications – General-Purpose AI (GPAI) models and High-Risk AI systems – and it’s these that companies will need to understand.
What is GPAI?
GPAI models are foundation models that are capable of performing a wide range of tasks and are not built for one specific domain. These include language models, image generators, and multimodal systems used as a base layer for a variety of downstream applications.
If your organisation builds, hosts, or offers access to such models, even indirectly, you likely fall under the GPAI requirements. Some examples include cloud-based AI platforms that offer developers pre-trained models for customer service bots, code generation tools, image captioning, or document summarisation. These are designed to be integrated into countless end-user solutions and are often trained on massive, mixed datasets.
The key point here is that the compliance deadline for GPAI models is only a few weeks away – 2 August 2025. Providers of GPAI must produce technical documentation that outlines how the model was trained, what data was used, what limitations exist, and how downstream users are expected to safely use and integrate the model. This includes transparency documentation, data provenance, and efforts to address copyright obligations. For companies in this category, the work must already be underway, because even though the deadline is still a few weeks away on paper, the scope of preparation needed demands immediate action.
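To make that tangible, here is a minimal sketch of what a machine-readable GPAI transparency record could look like, written in Python. The field names and values are my own illustrative shorthand, not the Act’s official documentation templates, so treat it as a starting point rather than a compliant artefact.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class GPAIModelRecord:
    """Illustrative transparency record for a general-purpose AI model.

    Field names are shorthand for the example, not the Act's official templates.
    """
    model_name: str
    model_version: str
    training_data_sources: list[str]     # provenance of training corpora
    copyright_measures: str              # how copyright obligations are addressed
    known_limitations: list[str]         # documented weaknesses and failure modes
    intended_downstream_uses: list[str]  # how integrators are expected to use the model
    prohibited_uses: list[str] = field(default_factory=list)


record = GPAIModelRecord(
    model_name="example-foundation-model",
    model_version="1.4.0",
    training_data_sources=["licensed news archive", "filtered public web crawl"],
    copyright_measures="opt-out lists honoured; licensed datasets logged with their terms",
    known_limitations=["hallucinates citations", "weaker accuracy on low-resource languages"],
    intended_downstream_uses=["customer service bots", "document summarisation"],
    prohibited_uses=["biometric categorisation"],
)

# Serialise the record so it can be versioned alongside the model artefacts.
print(json.dumps(asdict(record), indent=2))
```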
How does high-risk AI differ?
High-Risk AI systems, on the other hand, are those used in sensitive or regulated domains where incorrect or biased outcomes could significantly impact people’s rights, freedoms, or safety. These systems might be used to determine eligibility for loans, screen job applicants, assess students, support law enforcement decision-making or make medical diagnoses. If your company builds or deploys any AI functionality in these areas within the EU, you’re operating in a high-risk category. These systems are subject to the most rigorous compliance obligations under the Act — including technical documentation, real-time monitoring, incident tracking, human oversight, and long-term record keeping.
What really stands out for me is that the deadline for High-Risk AI systems is 2 August 2026. It sounds like it’s still more than a year away, but from an operational readiness standpoint, we are already running out of time. The documentation required must be detailed, version-controlled, and maintained for up to ten years after the system is last made available on the market or deployed in the EU. This includes full system architecture descriptions, risk mitigation strategies, descriptions of training and validation data, performance evaluations, and change logs. That’s a high bar, especially for organisations that haven’t historically treated model development with the same rigour as traditional software engineering.
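One practical way to approach the version-control and retention expectation is to treat documentation changes as append-only, fingerprinted records. The Python sketch below illustrates that discipline under my own assumptions about structure; it is not a prescribed format, only a way of showing what a decade-long, tamper-evident change log might involve.

```python
from dataclasses import dataclass
from datetime import date, timedelta
import hashlib
import json


@dataclass(frozen=True)
class DocChangeEntry:
    """One append-only entry in a high-risk system's documentation change log."""
    system_name: str
    doc_version: str
    change_summary: str   # e.g. "updated risk mitigation for a new data source"
    author: str
    recorded_on: date
    retain_until: date    # the Act requires retention for up to ten years

    def fingerprint(self) -> str:
        """Content hash so later tampering with an entry is detectable."""
        payload = json.dumps(self.__dict__, default=str, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()


entry = DocChangeEntry(
    system_name="loan-eligibility-scoring",
    doc_version="2.3.1",
    change_summary="added validation results for the Q3 retraining run",
    author="model-governance-team",
    recorded_on=date.today(),
    retain_until=date.today() + timedelta(days=365 * 10),
)
print(entry.fingerprint())
```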
Added to that are the technical requirements for logging and monitoring. Most companies have implemented logging to meet security, availability, or audit goals, but the EU AI Act elevates this to a regulatory requirement. You will need to demonstrate traceability of system decisions, incident logs and even runtime metrics over extended timeframes. For organisations that haven’t treated observability as a compliance issue before, this represents a significant shift. It’s therefore critical that internal audit, GRC, and compliance functions begin mapping out how these logs will be captured, retained, and verified, because regulators will ask for them.
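As a rough illustration of decision traceability, the sketch below records each AI decision as a structured, timestamped event that can be retained and queried later. The field names and the JSON-lines log file are assumptions I have made for the example; in a real deployment these events would feed a retention-managed store such as a SIEM or lifecycle-governed object storage.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Structured, append-only decision log written as JSON lines for the example.
logging.basicConfig(filename="ai_decision_log.jsonl", level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_decision_trace")


def log_decision(model_id: str, model_version: str, input_ref: str,
                 output_summary: str, human_reviewed: bool) -> str:
    """Record a single AI decision event with enough context to trace it later."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_ref": input_ref,          # reference or hash of the input, not raw personal data
        "output_summary": output_summary,
        "human_reviewed": human_reviewed,
    }
    logger.info(json.dumps(event))
    return event["event_id"]


event_id = log_decision(
    model_id="loan-eligibility-scoring",
    model_version="2.3.1",
    input_ref="sha256:9f2c...",
    output_summary="application flagged for manual review",
    human_reviewed=True,
)
```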
Deep dive on data
What’s equally important – and often overlooked – is that this regulation goes deep on data governance. The AI Act requires that all training, validation, and test datasets used for high-risk systems be documented, relevant and free from bias as far as technically possible. This includes proving that data was legally sourced and aligns with licensing and copyright protections. It also means actively mitigating bias and ensuring that the data reflects the population the AI is meant to serve. That responsibility can no longer sit solely with data science or product teams. Security and compliance leaders must now be part of the conversation about how data is collected, governed, and risk-evaluated.
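To show what that conversation can look like in practice, here is a simplified Python sketch pairing a dataset provenance record with a basic representativeness check against the population the system is meant to serve. The record fields, group labels and tolerance threshold are illustrative assumptions, not criteria taken from the Act.

```python
from dataclasses import dataclass
from collections import Counter


@dataclass
class DatasetRecord:
    """Illustrative provenance record for a training, validation or test dataset."""
    name: str
    source: str               # where the data came from
    licence: str              # licensing or copyright basis for its use
    collected_on: str
    intended_population: str  # who the system is meant to serve


def representation_gaps(labels: list[str], expected_share: dict[str, float],
                        tolerance: float = 0.05) -> dict[str, float]:
    """Flag groups whose share of the dataset deviates from the expected share."""
    counts = Counter(labels)
    total = len(labels)
    gaps = {}
    for group, expected in expected_share.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = actual - expected
    return gaps


record = DatasetRecord(
    name="loan-applications-2024",
    source="internal CRM export",
    licence="first-party data, processed under a documented lawful basis",
    collected_on="2024-11-01",
    intended_population="retail loan applicants in the EU",
)

# Toy demographic check against the population the system is meant to serve.
print(representation_gaps(
    labels=["group_a"] * 80 + ["group_b"] * 20,
    expected_share={"group_a": 0.6, "group_b": 0.4},
))
```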
In my role, I see the EU AI Act as more than regulation: it’s a trigger for transformation. It legitimises long-overdue conversations around AI trust, model governance, and data accountability. It allows security leaders to justify investment in documentation frameworks, system observability, and cross-functional collaboration with legal, privacy, and engineering. But the window to act is closing quickly.
To be clear, this perspective comes from my experience and observations as a CISO, not a legal practitioner. Every company should evaluate the Act in close collaboration with their legal advisors to ensure their obligations are fully understood and appropriately addressed.
If your organisation is building or using AI systems with reach into the EU, you cannot afford to treat this as a distant issue. Whether you’re deploying general-purpose models or operating high-risk systems, the time to act is now. This is your opportunity to build trust, strengthen internal alignment, and prepare your organisation for the future of AI, where accountability isn’t just expected, it’s enforced.