For years, security awareness training relied on a simple principle: most scams reveal themselves if you look closely enough. Poor grammar, slightly incorrect branding, awkward phrasing or unusual requests were often enough to alert a cautious employee that something wasnโt right. Training reinforced this approach, encouraging people to โspot the red flagsโ and pause when something felt off. That approach worked for a while, writes Mark Hamill, VP of Product, MetaCompliance.
AI has quietly removed one of securityโs oldest advantages: imperfection. Today, malicious emails and messages can be written with flawless grammar, accurate tone and even references to genuine projects or colleagues. This isnโt a marginal improvement in attacker capability; itโs a step change. AI makes phishing emails context-aware, scalable and far more likely to succeed. You’re no longer just looking for something that looks suspicious, but recognising that something which appears entirely normal may still require verification.
This shift changes more than the appearance of threats, it changes the way organisations need to think about trust.
Realism Becomes the Risk
Traditional awareness programmes have focused heavily on anomaly detection. If something looked unusual, employees were taught to question it. If the wording felt unfamiliar, they were encouraged to double-check. AI makes that model less reliable. It produces clean, credible communication that blends seamlessly into everyday workflows. Messages can arrive at the right time, use the right language and reference real context, all while being malicious. They can also appear to come from someone you trust, not just in writing, but in voice and video. AI-generated deepfakes can replicate tone, accent and even facial expressions in real time, making impersonation far more convincing than traditional social engineering. In that environment, visual or linguistic cues are no longer dependable indicators of risk.
Instead, organisations need to emphasise verification as standard practice. Confirming instructions through a separate channel, validating payment changes before processing them, or double-checking authorisation before granting access should become routine behaviours rather than exceptional ones. The emphasis needs to move away from spotting obvious mistakes and towards routinely verifying where requests originate.That distinction may seem subtle, but it has significant implications for how security education is designed and delivered.
Assistant is Now Part of the Attack Surface
At the same time, AI is becoming embedded into everyday workflows. Digital assistants summarise information, generate responses, connect systems and, in some cases, trigger actions on behalf of users. These tools increase productivity and reduce friction, but they also shorten the window for human judgement.
In some scenarios, attackers no longer need to persuade someone to click a link or download a file; influencing what an automated system accepts as legitimate can be enough. And if a digital assistant processes manipulative instructions, such as a prompt injection designed to override its guardrails, it may trigger downstream actions at machine speed. Prompt injection isnโt a new category of threat so much as a new delivery mechanism for a familiar attack pattern: persuading a system to behave in ways it was never intended to.
This effectively expands the attack surface. Itโs no longer limited to the individual user; it includes the systems acting on their behalf. While a person might pause and question an unusual request, an automated tool simply executes the instructions it receives. Without appropriate safeguards, small manipulations can scale quickly without proper guardrails in place.
Why static training falls short
Despite these changes, many organisations still treat security awareness as a compliance requirement. Annual training modules are completed, phishing simulations are run and completion rates are reported to senior leadership. While these metrics provide reassurance, they rarely reflect how people behave under real-world pressure.
Completing a training course doesnโt necessarily mean someone will verify a well-crafted, contextually accurate request generated by AI. Passing a phishing simulation doesnโt guarantee sound judgement when an automated system requests expanded permissions or attempts to access sensitive data.
The irony is that most organisations already hold valuable behavioural data. Collaboration platforms, identity systems and productivity tools reveal patterns in how employees work, where automation is relied upon and when high-risk actions occur. Connecting learning to these real behaviours offers a far more effective approach than delivering generic content detached from daily activity.
Closer to the Moment of Risk
To remain effective, security awareness needs to sit closer to decision-making points. Rather than relying solely on scheduled training sessions, organisations should build in contextual prompts and safeguards that appear when risk is highest. If an employee connects an AI assistant to a sensitive dataset, that action should trigger clear guidance. If someone attempts to approve a significant financial change, an additional confirmation step should be required. If a digital assistant requests broader access rights, that request should be transparent and deliberate.
These measures act as practical circuit breakers, introducing friction at precisely the moments where mistakes are most costly. Short, situational prompts delivered in the flow of work tend to have greater behavioural impact than lengthy modules completed months earlier.
Importantly, this approach doesnโt imply a lack of trust in employees, but rather acknowledges that modern threats are designed to look routine and that automated systems can amplify small errors.
Designing for doubtย
AI hasnโt fundamentally changed the nature of cyber threats, but it has made them more convincing and faster to execute. In doing so, itโs challenged long-standing assumptions about how employees detect risk and how organisations measure preparedness. Relying on individuals to identify subtle flaws in messages is no longer sufficient. Instead, organisations must design systems and training that assume realistic communication will pass an initial glance. Verification should be built into workflows, and automated actions should include meaningful safeguards.
In an AI-enabled workplace, trust shouldnโt be based on how polished a message appears, but on how thoroughly itโs been confirmed.




