Fighting the good fight

by Mark Rowe

Ramprakash Ramamoorthy, director of research at the IT security company ManageEngine, considers how to balance AI-enabled security with ethical practice.

Keeping up with threats is an ongoing concern in the constantly changing field of cybersecurity. At this year’s World Economic Forum (WEF) in Davos, Mary Callahan Erdoes of JPMorgan Chase & Co. warned attendees that fraudsters “are getting smarter, savvier, quicker, more devious, and more mischievous”. As a result, organisations must future-proof their cybersecurity to navigate the ever-changing and dangerous scenarios they face.

At the same time, they must ensure they do so in an ethical and fair manner, particularly where AI-enabled defences come into play. It’s important to avoid the sense that the ends justify any means. As AI technology puts pressure on our ethical frameworks as well as our cybersecurity measures, organisations need to ensure they develop a balanced approach that both protects their data and preserves good practices.

Security threats on the cyber frontier

It’s no wonder that these concerns were top of mind at the WEF: evolutions in blockchain, encryption, and artificial intelligence are rewriting the rules of engagement. In particular, AI is speeding up the rate of change; AI models can find and exploit vulnerabilities far faster than a human hacker ever could, and defences that would previously have remained effective for months or years might now be bypassed in days. Many AI-enabled attacks can also refine themselves based on their failures, making them harder for defenders to reliably counter.

The defence tech fighting back

The rate of change driven by AI cuts both ways. As the old saying goes, you fight fire with fire: just as organisations now have the unenviable task of countering AI threats, they can also integrate AI into their defences for a faster, more intelligent, more flexible response to ever-changing threats. For example, AI can be fused with digital twins, which serve as powerful simulation engines, to create security systems that adapt to fast-evolving attack types.
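To make the idea concrete, here is a minimal sketch of that simulation loop, assuming a toy digital twin and detector; all of the class names and the mutation logic are invented for illustration, not a description of any particular product. Attack variants are replayed against the twin, and any variant that slips past detection becomes new training input for the defence.

    import random

    # Minimal sketch: the "digital twin" here is just a stand-in object that
    # replays attack traces against a copy of the detection logic, so the
    # defence can be hardened without touching production systems.

    class Detector:
        def __init__(self):
            self.known_signatures = {"phishing-link", "macro-dropper"}

        def flags(self, trace):
            return any(step in self.known_signatures for step in trace)

        def learn(self, trace):
            # "Retraining" reduced to absorbing the evasive variant's steps.
            self.known_signatures.update(trace)

    def mutate(trace):
        # Toy stand-in for an attacker varying their technique each round.
        variants = ["phishing-link", "macro-dropper", "dll-sideload", "lolbin-exec"]
        return [random.choice(variants) for _ in trace]

    class NetworkTwin:
        def __init__(self, detector):
            self.detector = detector

        def run(self, trace):
            return self.detector.flags(trace)

    detector = Detector()
    twin = NetworkTwin(detector)
    seed = ["phishing-link", "dll-sideload"]

    for round_no in range(5):
        variant = mutate(seed)
        if twin.run(variant):
            print(f"round {round_no}: variant caught")
        else:
            print(f"round {round_no}: variant evaded, detector updated")
            detector.learn(variant)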

This integrated approach is increasingly crucial for an effective defence. Cross-channel attacks are now the norm: malicious code enters via one point, an email link, for example, and then rapidly proliferates across the target’s entire IT landscape, moving through web apps, data storage, and beyond. Joined-up defences that link information from across the organisation’s network therefore help spot, contain, and neutralise attacks quickly, before they can spread and cause greater damage.
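As a minimal sketch of what "joining" that information can look like in practice, the snippet below correlates alerts from different channels by a shared indicator, here the host involved; the event format and escalation threshold are assumptions for illustration, and real systems would correlate on richer keys such as user, session, or file hash.

    from collections import defaultdict

    # Hypothetical alert feed: each event names the channel it came from
    # and the host involved.
    events = [
        {"channel": "email",   "host": "ws-114", "detail": "link clicked"},
        {"channel": "webapp",  "host": "ws-114", "detail": "unusual POST burst"},
        {"channel": "storage", "host": "ws-114", "detail": "bulk file reads"},
        {"channel": "email",   "host": "ws-207", "detail": "link clicked"},
    ]

    # Group events by host: a single host tripping several independent
    # channels is a stronger signal than any one alert in isolation.
    by_host = defaultdict(list)
    for event in events:
        by_host[event["host"]].append(event)

    CHANNEL_THRESHOLD = 3  # assumed cut-off for escalation

    for host, host_events in by_host.items():
        channels = {e["channel"] for e in host_events}
        if len(channels) >= CHANNEL_THRESHOLD:
            print(f"ESCALATE {host}: activity across {sorted(channels)}")
        else:
            print(f"monitor {host}: {len(channels)} channel(s) involved")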

These innovations also matter given the growing skills shortage in cybersecurity. Defence teams at most organisations are overstretched, so the edge an AI-enabled system provides is crucial. A high-capacity, automated defence can multiply a team’s effective resources: sifting through alerts far faster, dealing with routine threats automatically, and escalating only the high-priority issues that need direct human intervention.
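Reduced to code, that triage logic might look like the sketch below, in which routine, high-confidence detections are handled automatically and only severe or ambiguous alerts reach an analyst; the severity scale, confidence field, and thresholds are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Alert:
        name: str
        severity: int      # 1 (low) .. 10 (critical), assumed scale
        confidence: float  # model's confidence the alert is a true positive

    def triage(alert, auto_conf=0.95, escalate_sev=7):
        """Route an alert: auto-handle the routine, escalate the serious."""
        if alert.severity >= escalate_sev:
            return "escalate to analyst"
        if alert.confidence >= auto_conf:
            return "auto-remediate"
        return "queue for review"

    alerts = [
        Alert("known-malware-hash", severity=4, confidence=0.99),
        Alert("possible-lateral-movement", severity=8, confidence=0.7),
        Alert("odd-login-time", severity=3, confidence=0.6),
    ]

    for a in alerts:
        print(f"{a.name}: {triage(a)}")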

A balance between innovation and ethical responsibility

In all this, organisations must act to ensure the trustworthiness and fairness of their AI-driven security measures. As new methodologies and technologies become commonplace, IT and security teams must move towards security practices that can address dynamic, evolving threats, rather than relying on traditional methods built on static rules, which cannot catch zero-day attacks or advanced threats.

For example, AI models are trained on vast amounts of user data. That data must be handled ethically: anonymised where possible, and protected against misuse or leaks of personally identifiable information (PII). It’s equally important that AI models are built and refined with anti-bias controls in place. The inherent bias of human engineers and the inadvertent bias of training datasets can lead AI-enabled security systems to inaccurate or unethical determinations, including in how they treat end users and which activities they flag as suspicious.
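One basic hygiene step before any training run is to pseudonymise identifiers so the model never sees raw PII. The sketch below replaces assumed identifier fields with salted hashes; the field list and salting scheme are illustrative, and pseudonymisation of this kind is weaker than full anonymisation, so real deployments would layer on further controls such as tokenisation services or differential privacy.

    import hashlib
    import os

    # Assumed: records arrive as dicts, and we know which fields carry PII.
    PII_FIELDS = {"username", "email", "ip_address"}
    SALT = os.urandom(16)  # per-dataset salt; in practice, kept in a vault

    def pseudonymise(record):
        """Replace PII values with salted hashes, keeping other fields intact."""
        clean = {}
        for key, value in record.items():
            if key in PII_FIELDS:
                digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
                clean[key] = digest[:16]  # stable token in place of the raw value
            else:
                clean[key] = value
        return clean

    raw = {"username": "asmith", "email": "a.smith@example.com",
           "ip_address": "10.0.0.12", "action": "login", "hour": 23}
    print(pseudonymise(raw))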

As a result, it’s crucial that the drive to implement effective defences doesn’t push ethical concerns out of the spotlight. All AI-enabled technology needs to be carefully created and controlled. In short, organisations shouldn’t equate a quick fix with a good one.
