As threats evolve, research-based cybersecurity fights back, writes Shobhit Gautam, Staff Solutions Architect, EMEA at the bug bounty platform HackerOne.
It’s been well over 30 years since the first ransomware attack – a trojan distributed via floppy disk – was brought to public attention, well before the Internet became the cybercrime hub it is today. Fast-forward from 1989 to the present day, and ransomware has become a ubiquitous problem, with attacks taking up to 297 days to contain and costing victims up to $5.4 million in the process, according to research from IBM.
Despite a recent fall in the overall volume of ransomware incidents, other important indicators are less encouraging: one study points to a 56 per cent year-on-year (YoY) rise in active ransomware groups in the first half of last year as evidence that the problem is far from over. There has also been an increase in smaller ransomware groups working in concert with one another. Unlike in past years, when larger groups operated more like organisations, smaller groups now each provide specific functions.
Initial access brokers now handle the difficult early work – identifying potential targets, performing social engineering, and using other exploitation techniques to gain access to victims – before passing that access on to ransomware droppers and handlers who manage the negotiations. The result is a complete Ransomware-as-a-Service scheme that streamlines attacks and reduces each group's workload at the start of a campaign.
If this wasn’t enough, threat actors are increasingly turning to AI to increase the sophistication and volume of their attacks. As AI becomes commonplace and more tools are created to harness it, technical expertise is no longer a prerequisite for sophisticated attacks. Armed with just basic knowledge, any criminal can now perform large-scale attacks. As the National Cyber Security Centre (NCSC) pointed out in a report last year, “All types of cyber threat actor – state and non-state, skilled and less skilled – are already using AI, to varying degrees.”
From the perspective of cybersecurity professionals, this raises huge concerns. The 2024 Hacker Powered Security Report found that almost half (48pc) of security leaders now view generative AI as one of the most significant risks facing their organisations today. According to the report, only 38pc of organisations felt confident in their ability to defend against AI-related threats, and just 39pc believed that legislation would make AI safer.
Serious about research
In this context, organisations looking to create or enhance an anti-ransomware strategy have various challenges to address. One of the most problematic is security team resourcing and, more specifically, access to the deep levels of experience required to identify, contain, and mitigate sophisticated, AI-powered ransomware threats before they can cause significant operational or financial damage.
This represents a serious shortfall in expertise, with 79pc of security professionals making critical security decisions without full visibility into the threats they face. At the same time, only 35pc of security professionals say their organisation has a comprehensive understanding of the threat landscape, according to industry figures. Add to this the perennial skills gap facing the cybersecurity industry, and as far as ransomware is concerned, many organisations are running just to stand still.
To keep up, proactive security is essential, and defence in depth is the best approach. Defence in depth is a practice that security-mature organisations use to weave proactive security efforts throughout their software development lifecycle, so that each layer continuously catches vulnerabilities and reduces risk from code through to deployment. One way for organisations to embrace this best practice, regardless of security maturity, is to start by engaging a platform that offers access to offensive security solutions that can rigorously test applications, networks, and systems across the software development lifecycle. The sketch below illustrates the principle.
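As a purely illustrative sketch, the short Python harness below uses hypothetical stub checks (static_analysis, dependency_scan, fuzz_suite, config_scan – none of them real tools) standing in for whatever an organisation actually runs at each layer. The point is the structure, not the tooling: every lifecycle stage has its own security gate, and a failure at any layer stops the release.

    import sys
    from typing import Callable

    # Hypothetical stub checks – in practice each would invoke real tooling
    # (a static analyser, a dependency auditor, a fuzzer, and so on).
    def static_analysis() -> bool: return True   # review source on every commit
    def dependency_scan() -> bool: return True   # flag known-vulnerable packages
    def fuzz_suite() -> bool: return True        # malformed-input testing
    def config_scan() -> bool: return True       # catch deployment misconfigurations

    # One security gate per lifecycle stage, from code through to deployment.
    STAGES: list[tuple[str, Callable[[], bool]]] = [
        ("code", static_analysis),
        ("build", dependency_scan),
        ("test", fuzz_suite),
        ("deploy", config_scan),
    ]

    def run_pipeline() -> None:
        for stage, check in STAGES:
            print(f"[{stage}] running {check.__name__}")
            if not check():
                # A failure at any layer blocks promotion to the next one.
                sys.exit(f"[{stage}] security gate failed; pipeline stopped")
        print("All layers passed; release cleared.")

    if __name__ == "__main__":
        run_pipeline()

This is what makes the approach cumulative: a flaw that slips past static analysis can still be caught by fuzzing, and a clean build can still be blocked by a misconfigured deployment.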
Offensive tests are often performed by crowdsourced security platforms, which engage a global community of independent security researchers to test systems. The methods these platforms use include automated and manual vulnerability scanning to identify known weaknesses, penetration testing to simulate real-world attacks, and in-depth code reviews to detect programming flaws. Researchers also use fuzzing: automated testing that injects random, unexpected, or malformed data into systems to uncover security vulnerabilities, crashes, or unexpected behaviour (a simplified example follows below). In short, they mimic bad actors to find flaws before they can be exploited. And demand for these services is growing: one leading provider saw use of penetration testing rise by 67pc in the past year, with engagements uncovering an average of 12 vulnerabilities each – 16pc of which were classified as high or critical.
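To make the fuzzing technique concrete, here is a minimal sketch. The parse_record function is a naive parser invented here as a stand-in target; a real engagement would aim far more sophisticated, coverage-guided fuzzers at production code, but the principle is the same.

    import random

    def parse_record(data: bytes) -> dict:
        # Hypothetical target: a naive parser of "key=value;key=value" records.
        record = {}
        for field in data.decode("utf-8").split(";"):
            key, value = field.split("=")  # fails on malformed fields
            record[key] = value
        return record

    def random_payload(max_len: int = 64) -> bytes:
        # Generate random – and therefore usually malformed – input bytes.
        length = random.randint(0, max_len)
        return bytes(random.randint(0, 255) for _ in range(length))

    def fuzz(iterations: int = 10_000) -> None:
        crashes = 0
        for i in range(iterations):
            payload = random_payload()
            try:
                parse_record(payload)
            except Exception as exc:  # each distinct failure is a lead to triage
                crashes += 1
                if crashes <= 5:  # print only the first few examples
                    print(f"iteration {i}: {type(exc).__name__} on {payload!r}")
        print(f"{crashes} crashing inputs out of {iterations} iterations")

    if __name__ == "__main__":
        fuzz()

Even this naive loop shows why malformed input is such a reliable way to surface defects; researchers layer smarter input generation and crash triage on top of the same basic idea.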
Engaging security researchers for offensive security testing through each individual stage of the software development lifecycle ensures that an organisation’s security strategy is continuously informed by key insights into vulnerabilities and remediation recommendations. Continual engagement with security researchers through services such as pentests, code audits or bug bounty reports will provide businesses with the tools they need to strengthen their security posture through each developmental stage, fortifying their defences one layer at a time.
Responsible regulation
To fight a new era of AI-supercharged threats, governments also have a key role to play. Compared with other sectors, AI-specific regulation is still in its infancy, but existing laws, such as the Data Protection Act 2018 and the General Data Protection Regulation (GDPR), already ensure that organisations adhere to strict data protection principles and are held accountable in the event of a breach. Further legislation and regulation are already on the table, and it’s incumbent on authorities to strike the right balance between fostering innovation and delivering robust security measures that protect businesses and individuals without imposing unnecessary burdens.
Authorities are also strengthening their collaborative efforts to combat cybercriminals operating across borders. This includes the rollout of vulnerability reward programmes to encourage responsible disclosure and help reduce overall risk. If compliance becomes too onerous, however, it risks being treated as a secondary priority, ultimately increasing risk across the board.
Ultimately, there’s no question that ransomware will continue to proliferate as cyberwarfare advances in this new AI-powered landscape. As organisations grapple with AI adoption over the next few years, collaboration between the public and private sectors and the researcher community will be imperative if we want to build a safer internet.