Diversity is vital to battle generative AI-armed hackers, says Dr Andrea Cullen, CEO and co-founder of CAPSLOCK, which offers cyber training ‘boot camps’.
In September, a Hong Kong business paid a $25m price when an employee was duped by a video call featuring a deepfake version of the company’s CFO. This is an extraordinary example of how generative AI (genAI) is increasing the sophistication of attacks within the cyber landscape. Of course, not all attacks will be at this level. But, according to the National Cyber Security Centre (NCSC), all types of cyber threat actors – whether state or non-state, skilled or less skilled – are now using AI to some degree.
For cyber teams, genAI is introducing a complex and rapidly evolving threat landscape. Issues such as misinformation and deepfakes are becoming increasingly common, and as genAI becomes more prevalent in the workplace, data protection is emerging as a critical concern for organisations using open-source AI tools. However, it’s not just the criminals using genAI to their advantage. Cyber leaders are harnessing the technology to identify weaknesses in their attack surfaces and to improve their detection and triaging of attacks and malicious campaigns.
They’re also getting on the front foot by establishing clear policies on the appropriate uses of AI in the workplace and prioritising training in AI literacy and safety for employees. This helps avoid security risks such as the workforce falling foul of voice clones or using insecure third-party chatbots that may give attackers unauthorised access to networks.
Adapting technologies and policies to address the emerging threat landscape shaped by the rise of genAI is crucial. But this alone will not be enough to battle AI-armed hackers. Just as important is the makeup of a cyber team itself. Why? While the AI threat may be digital, its driver is human. That means you need people who can get into the minds of hackers. To understand them. And to outthink them.
Diversity is key to this. Teams must be formed of people from different backgrounds, with different experiences and skill sets, so they approach AI threats in innovative ways. Bringing together different perspectives and critical thinking enables teams to avoid blind spots and to tackle new challenges with creativity and ingenuity.
The problem with traditional hiring routes
If you work in cyber, though, you’ll already know that many teams aren’t diverse. Sometimes this is down to the broader UK skills gap: 30% of cyber firms in the UK say they have faced a problem with a technical skills gap. We also know that a catalyst for this gap is a grassroots issue that keeps more diverse representation out of this traditionally white male-dominated career path.
However, it is also a hiring issue. And that is something that can be remedied almost immediately to build more resilient teams in the fight against rising genAI threats. When a cyber role is advertised, you’ll see heavy emphasis placed on the candidate having completed a relevant degree and, even for an entry-level position, having some level of industry experience (sometimes three years!). Continuing in this vein means that hiring managers risk narrowing the talent pool and continuing to hire the same ‘cookie cutter’ version of what they think is an ideal cyber security employee. The homogeneity of a traditional cyber team, made up mostly of white, middle-class men, can jeopardise an organisation’s security. With a host of similar worldviews, these teams may lack the diversity of skills and ideas needed to tackle rising genAI threats and get into the minds of hackers.
The importance of hiring impact skills
The hiring process is where much of this progressive change can happen. Ensuring hiring managers prioritise skills and experience over qualifications can help encourage career changers and those from non-traditional cyber routes into the industry. These candidates bring unique insights into how cyber threats affect different areas of a business.
Organisations are often well-versed in placing experienced hires, graduates or apprentices. What they tend to lack is a defined path for experienced professionals with new cyber skills. For example, project managers transitioning to cyber project managers often don’t fit into predefined hiring categories. By opening these paths, organisations can leverage the experience of individuals who can apply their knowledge to cyber-specific challenges.
This approach also means hiring no longer focuses solely on individuals with advanced technical skills, challenging the outdated perception that only those so equipped will succeed. In reality, technical skills are often the easiest to teach, and continuous technical training is needed regardless, because of the new threats emerging thanks to genAI and other technologies. What the industry needs is impact skills – ranging from creativity and problem-solving to critical thinking – which help teams get into the minds of the hackers using genAI.
Final thoughts
Addressing the risks posed by genAI to the cyber security industry requires breaking down barriers to professional cyber education. The UK’s cyber security workforce is notably understaffed, limiting the capacity to detect and respond to cyber threats. To address this, hiring teams must look for candidates from non-traditional cyber security backgrounds, expanding the talent pool to include diverse skills and perspectives. This approach is vital to building diverse teams capable of tackling ever-evolving AI-related cyber threats.