A growing disconnect between rapid AI deployment and formal security testing coverage amounts to an 'AI Security Gap', according to HackerOne, a US-based bug bounty platform.
When testing fails to keep pace, organisations lose visibility into what is truly exploitable, argues HackerOne. As AI systems integrate with APIs, tools, and enterprise data sources, exposure can increase disproportionately, especially when testing does not scale alongside deployment, the firm warns. It suggests that continuous testing is becoming not just a security best practice, but a governance requirement.
Kara Sprague, CEO of HackerOne, said: "AI systems are dynamic, evolving with every model update, integration, and data connection, and the same is true of modern digital systems overall. As systems become more interconnected and adaptive, risk evolves in real time. Periodic testing assumed stability. Today's reality requires continuous testing so leaders can detect change, identify what's exploitable, and mitigate risk before it materialises."
Data theft
A survey by Illumio Inc of 700 IT and security professionals across North America, Europe, Asia-Pacific, and Latin America found that data and intellectual property theft is the most cited concern (57pc), followed by targeted attacks designed to disrupt critical services (56pc); AI-driven attacks, including deepfake impersonation, come third (55pc); and ransomware and extortion fourth (53pc). Nearly all, 95pc, of those surveyed say they can detect unauthorised lateral movement. Raghu Nandakumara, Vice President of Industry Strategy at Illumio, added: "Most organisations can spot an intrusion, but stopping it is a different story. AI is making attacks harder to interpret and contain, which means even small footholds can escalate fast."
Online survey
Meanwhile, a biometric authentication vendor suggests that AI-generated impersonation is increasingly seen as a real-world threat undermining confidence in what people see online. In a survey of 2,000 people across the UK and the US in early 2026, nearly half of respondents (48pc) said they now question the authenticity of "almost everything" they encounter online.
Some three-quarters (74pc) of consumers say they would switch banks if a competitor offered guaranteed protection against deepfake-enabled fraud. Some 41pc of those aged 25 to 34 say they would switch immediately, compared with just 14pc of those aged 65 and older. Andrew Bud, founder and CEO of iProov, said: "AI has blurred the line between real and fake in digital ecosystems, and too many organisations are caught off guard. This study highlights a major shift in consumer sentiment, showing that generative AI is actively undermining the credibility of the institutions people have traditionally relied upon. Deepfakes are quickly undermining the trust at the heart of the digital economy, ultimately compelling consumers to change their behaviours and, importantly, who they are willing to do business with."
CISO view
Amy Lemberger, former FTSE-250 Chief Information Security Officer and founder of The CISO Hub, says: "AI is exposing whether security is genuinely part of leadership thinking. When adoption moves quickly, governance must keep pace. That requires senior-level judgement and thinking, not just tools."
She points to recent global risk reports showing that AI is rising sharply on corporate risk registers alongside cyber disruption and operational resilience. Boards are asking new questions about data usage, model integrity, and third-party exposure.