Preventing AI bias – beyond the algorithm

by Mark Rowe

AI’s potential to transform nearly every aspect of our world is irrefutable, from our personal and professional lives to every sector of society: the corporate world, scientific research, healthcare, education, and the creative industries, writes John Trest, Chief Learning Officer at the cyber product firm VIPRE Security Group.

The race is no longer only between technology developers; AI development is rapidly gaining importance at the nation-state level, as nation-led AI ecosystems compete to secure the biggest possible slice of the pie.

The “AI bias” problem

The technology’s ability to process vast amounts of data and identify patterns beyond human capability is leading to breakthrough discoveries and innovations across fields. However, these AI systems carry inherent flaws: they reflect, and can even amplify, the biases present in their training data. These biases lead to unfair or discriminatory outcomes that disproportionately affect socially disadvantaged or under-represented groups.

These training-data biases manifest in various ways, from facial recognition systems that perform poorly on certain demographics to language models that perpetuate harmful stereotypes. The presence of such biases has rightfully raised concerns among potential adopters across industries, as biased decisions can have serious real-world consequences.

AI bias can be a security vulnerability

Cybersecurity is no exception. In this industry, AI plays a complex and nuanced role that goes beyond simple pattern recognition. Put simply, AI today is widely used to identify and respond to cyber threats. However, if the AI system is biased, it can severely compromise the effectiveness of the tools that embed this technology.

AI security solutions can overlook, or outright ignore, emerging threats because those threats don’t match the model’s predefined patterns. At the same time, an email that uses slang terminology may be inaccurately flagged as a phishing attack. The operational impact of false positives on security teams is real and concerning: biased AI can trigger an overwhelming number of them, leading to alert fatigue among security personnel. Conversely, these systems might miss genuine security incidents, creating dangerous blind spots in an organisation’s defence posture. This dual problem of over-sensitivity and under-detection undermines the very purpose of AI-powered security solutions.
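
To make that dual problem measurable, here is a minimal sketch of a per-group false-positive audit, assuming a hypothetical detection model with a boolean predict() method and a labelled evaluation set; all names and fields are illustrative, not drawn from any particular product.

    # Minimal per-group false-positive audit (illustrative names throughout).
    from collections import defaultdict

    def false_positive_rates(examples, model):
        """Rate at which benign items are flagged as threats, per group.

        `examples` is an iterable of (features, is_threat, group) tuples,
        where `group` is any slice of interest: sender region, language
        style (e.g. slang vs. formal), department, and so on.
        """
        flagged = defaultdict(int)  # benign items wrongly flagged, per group
        benign = defaultdict(int)   # all benign items, per group
        for features, is_threat, group in examples:
            if not is_threat:
                benign[group] += 1
                if model.predict(features):  # hypothetical boolean verdict
                    flagged[group] += 1
        return {g: flagged[g] / benign[g] for g in benign}

A wide spread between groups, for instance emails written in informal slang versus formal English, is exactly the over-sensitivity described above.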

AI malware-detection systems trained on flawed data may produce prejudiced outcomes in which certain users or behaviours are unfairly targeted by security measures, creating an inequitable environment for different groups of employees within the organisation.

Likewise, biased AI models can develop tunnel vision, with undue focus on certain types of threats while others are ignored. An AI model fixated on detecting external threats could miss insider threats. Similarly, a model whose training is biased towards identifying foreign threats could overlook domestic cyberattacks, which are just as prevalent.

AI bias can also cause security systems to focus on specific attack vectors or symptoms, missing new or evolving threats. This is particularly dangerous given how quickly cybercriminals adapt their tactics.

Addressing bias requires careful dataset curation, diverse development teams, and robust testing frameworks. When selecting AI solutions for cybersecurity, organisations would do well to interrogate the approach technology vendors are taking in each of these areas.

What is the scope of the data sets used to train the AI models? What kinds of threat scenarios, user behaviours, and attack patterns are included? How broad is the coverage of regions and industry sectors? Are regional and domestic attacks given equal attention? How much data will be shared if the model is hosted by a third party?
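
As a hedged illustration of how those questions can be made measurable, the sketch below tallies training coverage by slice; the “region” and “sector” record fields are assumptions for the example, not any vendor’s actual schema.

    # Minimal training-data coverage report (field names are illustrative).
    from collections import Counter

    def coverage_report(training_records):
        """Tally samples by region and sector to expose gaps or skew."""
        total = len(training_records)
        if total == 0:
            print("no training records")
            return
        for field in ("region", "sector"):
            counts = Counter(r[field] for r in training_records)
            print(f"{field} coverage:")
            for value, n in counts.most_common():
                print(f"  {value}: {n} samples ({n / total:.1%})")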

Transparency and explainability are key to the development of any AI model. The AI models used in cybersecurity solutions must be easily understood. Ask what guardrails have been set for the AI model. How is the model used in the decision process, so that accountability can be tracked? Will enterprise data be used to train future AI models?
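
For a sense of what trackable accountability can look like, here is a minimal decision-trail sketch, assuming the model can surface its top contributing signals; every identifier in it is hypothetical.

    # Minimal decision-trail log so audits can trace why the model decided
    # what it did (all identifiers are hypothetical).
    import json, time

    def log_decision(alert_id, verdict, top_signals, model_version):
        """Persist the model's verdict and its supporting signals."""
        entry = {
            "alert_id": alert_id,
            "verdict": verdict,
            "top_signals": top_signals,      # e.g. {"url_entropy": 0.41}
            "model_version": model_version,
            "decided_at": time.time(),
        }
        # In practice this would go to an append-only audit store.
        print(json.dumps(entry))

    log_decision("A-1042", "phishing",
                 {"url_entropy": 0.41, "sender_age_days": 0.33}, "v2.3")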

Another critical area when evaluating AI cybersecurity solutions is understanding the balance between the technology and the level of human oversight in product development. It is important, of course, to leverage the computational power of AI, but equally vital is human supervision and judgment at critical decision points. This dual-layer approach helps safeguard against algorithmic bias and improves the accuracy of threat assessments. The human element provides contextual understanding that machines alone cannot replicate, resulting in more dependable security decisions. The EU AI Act, for instance, mandates human oversight as a fundamental requirement for high-risk AI systems.
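
One common pattern for that human-machine balance is to gate automated action on model confidence, escalating anything below a threshold to an analyst. The sketch below illustrates the idea; the 0.95 threshold and all names are assumptions, not a prescribed design.

    # Minimal confidence gate: high-confidence alerts are handled
    # automatically, everything else is deferred to a human analyst.
    from dataclasses import dataclass

    @dataclass
    class Alert:
        source: str
        description: str
        model_confidence: float  # 0.0-1.0, from the detection model

    REVIEW_THRESHOLD = 0.95  # assumed cut-off; tune to risk appetite

    def dispatch(alert: Alert) -> str:
        if alert.model_confidence >= REVIEW_THRESHOLD:
            return "auto-contain"       # automated response
        return "queue-for-analyst"      # critical decision goes to a human

    print(dispatch(Alert("mail-gateway", "possible phishing", 0.72)))
    # -> queue-for-analyst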

You’ll want to ask your cybersecurity solution provider questions such as: How does the solution prevent the AI from generating false positives? How are human monitoring and intervention enabled in automated security decisions? What safeguards prevent the AI from making critical decisions without human review? The answers will help you assess whether the provider is responsibly ensuring human oversight.

Cybersecurity is a dynamic environment, so it’s imperative that the training of AI models keeps pace with the rapidly evolving threat landscape. Beyond the frequency with which new data is fed to AI-powered cybersecurity solutions, it’s worth enquiring whether feedback loops are integrated into the AI system, and what their scope is. Will your organisation be able to easily report anomalies or inaccuracies, to help continuously improve the efficacy of the solutions adopted?
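
A minimal sketch of such a feedback loop follows, assuming analysts correct verdicts through a simple reporting call whose records feed a later retraining cycle; the file name and fields are illustrative.

    # Minimal misclassification feedback loop (illustrative names).
    import json, time

    FEEDBACK_LOG = "ai_feedback.jsonl"

    def report_misclassification(alert_id, verdict, analyst_note):
        """Record an analyst's correction, e.g. 'false_positive' or
        'missed_threat', for the next retraining cycle."""
        record = {
            "alert_id": alert_id,
            "verdict": verdict,
            "note": analyst_note,
            "reported_at": time.time(),
        }
        with open(FEEDBACK_LOG, "a") as f:
            f.write(json.dumps(record) + "\n")

    report_misclassification("A-1042", "false_positive",
                             "internal newsletter flagged as phishing")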

Preventing AI bias is the keystone of both the ethical development and the ethical adoption of this powerful, game-changing, intelligence-driven innovation. The technology’s developers and adopters have an equal role to play in ensuring it is used appropriately, for the greater good of society.
