Is AI enabling malicious Zero Day hunting? asks Alex Harrison, Penetration Tester at Barrier Networks.
Generative AI has transformed the digital world. Quality content can be produced in seconds and tedious customer service tasks can be fully automated, while the government is turning to AI to alleviate the strain repetitive jobs place on the public sector, such as identifying potholes in roads.
This has improved efficiency across industries, while introducing significant cost savings and supporting workforces. But with AI offering such support to legitimate organisations, it is not surprising that criminals have also set their sights on the technology to support their malicious endeavours. We are seeing criminals turn to AI to generate highly sophisticated phishing and social engineering scams. This allows malicious actors to produce fraudulent emails at scale, creating content that is perfect in wording, tone and imagery, with none of the usual telltale signs that an email has come from an untrusted source.
The uptick in AI-generated phishing and social engineering has also raised questions about how the technology could support other areas of cyber crime, such as generating malware, identifying vulnerabilities in software, and scanning infrastructure to discover exploitable Zero Day vulnerabilities.
Big Sleep – Zero Day in SQLite
In November, Google announced that its AI agent had uncovered a previously unknown Zero Day vulnerability in the SQLite database engine. This marked the first publicly disclosed instance of AI independently discovering an exploitable Zero Day security flaw. Google’s Project Zero, renowned for its highly skilled team of security researchers, and DeepMind, Google’s AI research division, collaborated to develop Big Sleep, a Large Language Model-powered vulnerability detection agent.
This AI system successfully identified an exploitable stack buffer underflow in SQLite, a widely used open-source database engine. The vulnerability was reported to the SQLite developers in October and patched the same day, meaning no users were impacted by the flaw. The discovery nonetheless raised worrying questions.
If Google security researchers could find a Zero Day using AI, did this mean threat actors would soon turn to the technology to do the same? Fortunately, for now at least, while Google’s research was undoubtedly a world first, it is unlikely to spur criminals to adopt AI as part of their vulnerability hunting. The biggest barrier for attackers is the GPU power needed to find Zero Day vulnerabilities in code using AI.
Identifying patterns in millions of lines of code is computationally expensive, requiring substantial resources to analyse large codebases, simulate attacks and optimise exploit strategies. Criminals would therefore need dedicated infrastructure to hunt for Zero Days at scale, infrastructure that global organisations like Google possess and can afford, but which is unlikely to be available to even the most sophisticated criminal gangs.
Criminals could use consumer hardware to scan for vulnerabilities, but it would be a slow process, and they would likely achieve faster and better results, with greater financial returns, using tools that are already on the market, such as Nmap, Masscan, or ZMap. While concerns about AI being used for malicious Zero Day exploitation persist, the more immediate threat still lies in AI-generated phishing and social engineering attacks.
AI enables criminals to craft deceptive emails and phishing messages at scale, making it harder for users to distinguish real communications from fraudulent ones. This allows criminals to work faster and more successfully, causing damage to more organisations while seeing improved financial returns. Although organisations and developers should remain vigilant about AI-driven vulnerability discovery, the current priority should be strengthening defences against AI-enhanced phishing and social engineering attacks, as these present a far greater challenge in today’s digital landscape.
AI-enabled Zero Day hunting could present a major threat in the future, but for now criminals are more likely to keep focusing on their tried and tested methods, which still yield the desired results without overburdening their resources.



