
AI impact on offensive and defensive cyber

by Mark Rowe

Phil Robinson, Principal Consultant and founder of the cyber and information security testing consultancy Prism Infosec, writes of how AI will impact offensive and defensive cybersecurity.

Business adoption of AI for cybersecurity remains low: the AI Cyber Security Survey found only 3 per cent of businesses are using it in this capacity today. But there’s no doubt that adoption is set to grow rapidly as organisations seek to benefit from efficiency gains, offset talent shortages and stay one step ahead of attackers.

Economic cutbacks will force organisations to embrace automation and AI, and a major skills gap in cybersecurity (which stands at four million people globally and is rising at a rate of 12.6 per cent per annum, according to ISC2) will likewise see them turn to technology to solve staff shortages.

The weaponisation of AI is also off to a slow start, but it too is coming: the NCSC has warned that AI-driven attacks can be expected to become more prevalent in around 18 months’ time, at which point the technology will be more widely used for reconnaissance and social engineering.

But just how can AI help in terms of offensive and defensive security practices? Will it simply augment the workflows we have today or could it lead to a dramatic change in the way we respond to security threats?

Offensive use cases

From an offensive perspective, we can expect AI to focus on even wider datasets from large estates and a myriad of cloud devices. It will be much more efficient than its human counterparts at identifying vulnerabilities across disparate datasets drawn from applications, infrastructure, configurations and cloud settings, and at flagging findings across the outputs of different scanners.

For example, one method of testing the robustness of web applications is to subject them to a web vulnerability assessment and penetration test (Web VAPT). This typically involves carrying out thousands of brute force requests looking for vulnerabilities in various combinations of parameters and input data. While current solutions do have some success, recent research by Alberto Castagnaro, Mauro Conti and Luca Pajola, published via Cornell University’s arXiv, suggests that LLMs could be used to perform the pen test with an average performance increase of 969 per cent.
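As a rough illustration of that shift (a minimal sketch, not the researchers’ actual method), the Python below asks an LLM for a focused candidate payload list for a single parameter rather than firing thousands of blind brute-force requests. The endpoint, parameter and model name are hypothetical placeholders, and tooling like this should only ever be pointed at systems you are authorised to test.

```python
# Minimal sketch: LLM-guided payload generation for a single web parameter,
# replacing thousands of blind brute-force requests with a focused list.
# Assumptions: an OpenAI-compatible API and a hypothetical in-scope target.
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TARGET = "https://staging.example.com/search"  # placeholder target
PARAM = "q"

prompt = (
    "List 10 distinct injection payloads (SQLi, XSS, path traversal) "
    "for a GET parameter on a typical web search endpoint. "
    "One payload per line, no commentary."
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)
payloads = [p.strip() for p in resp.choices[0].message.content.splitlines() if p.strip()]

for payload in payloads:
    r = requests.get(TARGET, params={PARAM: payload}, timeout=10)
    # Crude signal only: a real harness would diff response bodies and timing.
    if r.status_code >= 500 or payload in r.text:
        print(f"possible finding: {payload!r} -> HTTP {r.status_code}")
```

A production Web VAPT harness would compare response lengths, headers and timing rather than rely on this crude status-code check, but the economics are the point: a handful of targeted requests instead of an exhaustive sweep.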

AI will also be useful for identifying attack vectors and cross referencing these with large resources such as NIST’s National Vulnerability Database (NVD) and other vulnerability databases for threat hunting. Doing so will help connect the dots, allowing seemingly disparate issues to be linked together and so improving threat intelligence.
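By way of example, the NVD exposes a public CVE API (version 2.0) that such tooling can query. The sketch below looks up known issues for a component name surfaced by a scan; the keyword and the crude result handling are illustrative assumptions, and unauthenticated requests are rate-limited.

```python
# Minimal sketch: cross-referencing a scanned component against NIST's NVD
# via the public CVE API 2.0 (unauthenticated requests are rate-limited).
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def lookup_cves(keyword: str, limit: int = 5):
    """Yield (CVE id, CVSS v3.1 base score, short summary) for a keyword."""
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        metrics = cve.get("metrics", {}).get("cvssMetricV31", [])
        score = metrics[0]["cvssData"]["baseScore"] if metrics else None
        summary = next(
            (d["value"] for d in cve["descriptions"] if d["lang"] == "en"), ""
        )
        yield cve["id"], score, summary[:120]

# Illustrative keyword: a component name surfaced by an infrastructure scan.
for cve_id, score, summary in lookup_cves("apache struts"):
    print(cve_id, score, summary)
```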

Red teaming is of course at the heart of true offensive security measures, and AI is now being used in this capacity in a number of ways. To start with, LLMs are helping with reconnaissance by identifying CVEs and avenues to explore when attacking particular systems. This dramatically reduces lead time on bespoke payload creation, and development time in general when implementing evasion techniques such as Event Tracing for Windows (ETW) patching and dynamic link library (DLL) unhooking, as well as other obfuscation techniques.

Attack simulations can likewise be automated when it comes to developing exploit code. Ordinarily, public proof-of-concept code must be developed by a security researcher into a working exploit, such as an overflow attack or a race condition, which the red team can then use to pinpoint weaknesses; much of that interpretation work can now be performed via AI.

Defensive use cases

In terms of defence, AI is already being used to analyse big data as part of incident response operations. It can perceive patterns across much larger data sets than a human team, with systems typically fed from multiple sources such as endpoints, servers, cloud data, security devices, firewalls, boundaries and gateways.
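The underlying technique here is often unsupervised anomaly detection over the merged telemetry. The toy sketch below uses scikit-learn’s IsolationForest on synthetic data; the features are illustrative stand-ins, not a production detection pipeline.

```python
# Toy sketch: unsupervised anomaly detection over merged telemetry using
# scikit-learn's IsolationForest. Data and features are synthetic stand-ins
# for events joined from endpoints, firewalls and gateways.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Feature columns: [bytes_out, failed_logins, distinct_destination_ports]
normal = rng.normal(loc=[5_000, 1, 3], scale=[1_000, 1, 1], size=(1_000, 3))
exfil = rng.normal(loc=[90_000, 8, 40], scale=[5_000, 2, 5], size=(5, 3))
events = np.vstack([normal, exfil])

model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)  # -1 marks outliers

print(f"{(flags == -1).sum()} of {len(events)} events flagged for analyst review")
```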

Consequently, AI is able to rapidly detect threats such as malware, with the Empowering Defenders: How AI is Shaping Malware Analysis report revealing that it can detect 70 per cent more malicious scripts than traditional techniques, and that over a six-month period it was 300 per cent more accurate at detecting malware targeting devices through a common vulnerability or exploit. This is because the AI is not solely focused on endpoint data but on all the other factors that go into crafting malware, such as the toolsets malicious actors use.

Using AI, threat detection and incident response (TDIR) systems can determine the level of confidence in a breach scenario and classify alerts accordingly. They can allow security analysts in the Security Operations Centre (SOC) to interrogate data much more naturally and can provide those analysts with potential remediation paths. So not only does AI help find the root cause of an alert faster but it can also improve escalation, driving down mean time to response (MTTR) and costs.
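To make the idea concrete, here is a minimal sketch of how a TDIR system might blend corroborating signals into a confidence score and route the alert accordingly. The fields, weights and thresholds are invented for illustration, not any vendor’s scheme.

```python
# Minimal sketch: blending corroborating signals into an alert confidence
# score and routing on it, as a TDIR system might. Fields, weights and
# thresholds are invented for illustration, not any vendor's scheme.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    ioc_match: bool           # indicator matched known threat intelligence
    corroborating_sources: int
    asset_criticality: float  # 0.0 (low value) to 1.0 (crown jewels)

def confidence(alert: Alert) -> float:
    """Weight corroboration, intel matches and asset value into one score."""
    score = 0.2 * min(alert.corroborating_sources, 3) / 3
    score += 0.4 if alert.ioc_match else 0.0
    score += 0.4 * alert.asset_criticality
    return round(score, 2)

def triage(alert: Alert) -> str:
    c = confidence(alert)
    if c >= 0.7:
        return f"escalate to analyst (confidence {c})"
    if c >= 0.4:
        return f"enrich and queue (confidence {c})"
    return f"auto-close with audit trail (confidence {c})"

print(triage(Alert("edr", ioc_match=True, corroborating_sources=2,
                   asset_criticality=0.9)))
```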

These SOC teams are also now significantly under-resourced. A recent IDC report found 89 per cent of UK organisations have insufficient SOC analyst skills, and the gap is widening. Here AI can again help, by enabling entry-level personnel to undertake analysis that was previously the preserve of experienced team members. In fact, the same survey found that by the end of this year, 30 per cent of large European companies will have deployed AI on first-party data in their SOCs.

Future impacts

As a result, the expectation is that the SOC will become less focused on incident response and triage and attend more to threat hunting and exploring possible attack evolutions, making cybersecurity less reactive and more proactive. However, against these gains it’s necessary to bear in mind the advances being made by malicious actors. The NCSC has warned that attacks will become more impactful because threat actors will be able to analyse exfiltrated data faster and more effectively, data they will then use to train their AI models.

What this indicates is that security teams need to be fully exploring offensive and defensive capabilities now, and organisations should be taking steps to put such processes in place. The woefully small number of businesses using the technology in this capacity today is cause for concern, but this is largely down to a misconception that implementing AI requires investment in new point solutions. In reality, many of these benefits can be gained simply by enabling security teams to be more ambitious in their use of AI, which can be done safely provided the necessary ground rules are in place through a framework such as the NIST AI RMF.
