
Review of 2023: AI

by Mark Rowe

Artificial Intelligence (AI) has been the buzzword of 2023.

Whether it is with excitement or fear, the world is anticipating the future of AI, says Andrew Murphy, Senior Director, OpenText Cybersecurity. He says: “According to Verizon’s recent State of Small Business survey, nearly half of SMBs are exploring AI solutions despite being worried about cybersecurity risks and integration concerns. In 2024, I predict a new buzzword: AI Readiness. Organisations of all sizes will look to channel partners for guidance in their AI journey. What framework should we use to establish AI policies? How will AI introduce unintentional biases? If we connect data across more environments, what new security risks arise? Channel partners won’t just be asked about AI technologies and processes, but also for help with policies to govern AI in the workplace. I believe by investing in their AI practices today, partners will be better prepared to help customers with the next wave of AI disruption in 2024.”

In the context of security, as elsewhere, the impact of AI is likely to be profound. According to CyberArk’s 2023 Identity Security Threat Landscape Report, almost nine in ten UK cybersecurity teams (88pc) are embracing Artificial Intelligence (AI), with many already making use of the tech to triage minor threats. Generative AI (GenAI) in particular is already being used to identify behavioural anomalies faster for cyber resilience.
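The kind of behavioural anomaly detection the report describes is easier to picture with a concrete example. The sketch below uses a classic unsupervised detector (not generative AI, and not any vendor's product) on invented login telemetry; the feature names and values are assumptions for illustration only.

```python
# Illustrative sketch: flagging anomalous login behaviour with an
# unsupervised model. Features and thresholds are invented, not
# any vendor's implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-login features: hour of day, bytes transferred,
# distinct hosts touched, failed logins in the prior hour.
normal = rng.normal(loc=[10.0, 2e6, 3.0, 0.2],
                    scale=[3.0, 5e5, 1.0, 0.4],
                    size=(500, 4))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3am session moving far more data across many hosts scores as anomalous.
suspect = np.array([[3.0, 4e7, 40.0, 6.0]])
print(model.predict(suspect))            # -1 = anomaly, 1 = normal
print(model.decision_function(suspect))  # lower = more anomalous
```

In practice the payoff is triage speed: low scores are surfaced to analysts first rather than buried in raw logs.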

Use of AI creates an explosion of machine identities that malicious actors can exploit to get access to confidential data and controls, says David Higgins, senior director, Field Technology Office at CyberArk. He says: “Cybersecurity teams have to tread extremely carefully in their dealings with AI. Balancing the undoubted benefits it brings with the sizeable risks it creates is no simple task.

“Establishing AI-specific company guidelines, publishing usage policies and updating employee cybersecurity training curricula is a must. Due diligence is needed before any AI-led tools are introduced, as that’s the most effective way to mitigate risk and reduce vulnerabilities. Without appropriate identity security controls and malware-agnostic defences, it quickly becomes difficult to contain innovative threats in high volumes that can compromise credentials en route to accessing sensitive data and assets.”

Opinions differ on whether AI is tilting the scales in favour of defenders or not.

AI will empower organisations to defend at machine speed, believes Vasu Jakkal, CVP, Microsoft Security. She says: “At Microsoft, we are privileged to have a leading role in advancing AI innovation, and we are so grateful to our incredible ecosystem of partners, whose mission-driven work is critical to helping customers secure their organisations and confidently bring the many benefits of AI into their environments.”

Others, such as Mario Duarte, VP of Security at the cyber firm Snowflake, reckon that AI will be a huge boon to cybercriminals before it becomes a help to security teams. He says: “Cybercriminals and bad actors will benefit from the widespread deployment of advanced AI tools before their targets can set up AI in their own defence. A lot of businesses are cautious about adopting and using new technologies — there’s cost, regulatory requirements, reputational risk, and more at stake if it’s done poorly. However, bad actors won’t wait. For example, phishing is still a big deal, and most phishing emails are pretty clumsy and dumb. Generative AI will make this already effective attack vector even more successful. They’ll have the full firepower of large language models and generative AI, and defenders will be playing catch-up. Eventually the playing field will even out, but I expect a lot of pain in the meantime.”

The role artificial intelligence will play in enhancing threat detection and response is undeniable, yet it is crucial to recognise that technology alone cannot protect businesses against modern threats, according to Dan Schiappa, chief product officer at the cyber firm Arctic Wolf. He says: “As threat actors become more advanced, and leverage AI tools themselves, humans will have an essential role investigating novel attacks, explaining their context within their business, and most importantly, leveraging their knowledge and expertise to train the very AI and machine learning models that will become deeply embedded within next-generation cybersecurity solutions.” 

AI may create more work for CISOs, suggests Marie Wilcox, Security Evangelist at Panaseer. She says: “CISOs grappling with the impact of AI in 2024 have a lot to consider, and while they will be looking into tools and solutions that leverage the technology to support their security strategy and automate certain security controls, the primary consideration of any good CISO is to be prepared for risk. These leaders are already well aware of how AI enhances the threat landscape: three in four security leaders report being concerned about threat actors using AI to find gaps in their organizations’ security controls, but this isn’t the only concern. As these leaders plan for the year ahead, many are focused on how their security teams can keep up with the fast-moving world of AI-supported IT development.”

To recap, ChatGPT is a chatbot that uses large language models to generate natural and engaging conversations based on user prompts. It was created by OpenAI, which launched a free research preview on November 30, 2022. One year on, Andy Patel, Security Researcher at WithSecure, recalled that launch as the moment the public became aware of the power of modern generative AI. “Since then, access to large language models has become a lot cheaper, especially with the launch of less compute-hungry open-source versions.

“And while a vocal few continue to doom-monger about existential threats from hypothetical far-future artificial superintelligences, the fact is we should be more concerned with how humans will abuse these tools in the short term. And that’s especially the case when considering disinformation, something we demonstrated in our early 2023 publication, Creatively Malicious Prompt Engineering.

“Examples of human misuse of language models, especially in the field of disinformation, are still mostly just academic. However, large language models may be contributing to disinformation far more than we are aware of, especially when considering short-form content that is commonly posted on social media sites.

“The potential for a flood of AI-driven disinformation is there, especially since social networks have all but killed off their content moderation efforts. We haven’t seen it yet, but we’re likely to see it soon. And I wouldn’t be surprised if adversaries are ramping up their capabilities ahead of 2024, a big election year.”

The cybersecurity risks and rewards of artificial intelligence can be clearly seen in what’s happened since the release of ChatGPT, the world’s first widely available generative AI tool. One year on, for many of us it’s hard to imagine life without generative AI, says Fleming Shi, CTO at Barracuda. “Tools such as ChatGPT, Bing and others offer immense benefits in terms of the time and effort saved on everyday online tasks – but there’s more to generative AI than doing your homework in 30 seconds.

“The security risks of gen-AI are widely reported. For example, the LLMs (large language models) that underpin them are built from vast volumes of data and they can be distorted by the nature of that data. The sheer volume of information ingested carries privacy and data protection concerns. At the moment, regulatory controls and policy guardrails are trailing in the wake of gen-AI’s development and applications.

“Other risks include attacker abuse of gen-AI capability. Generative AI allows attackers to strike faster, with better accuracy, minimizing spelling errors and grammar issues that have acted as signals for phishing attacks. This makes attacks more evasive and convincing. As attackers become more efficient, it is even more critical that businesses use AI-based threat detection to outsmart targeted attacks.”
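Shi’s point is that polished, AI-written lures erase the spelling and grammar cues defenders once leaned on, which is why surface-level filters struggle. The toy classifier below, with invented training data, shows the kind of text-based baseline he implies is no longer enough on its own; production systems add richer signals such as URL reputation, headers and sender history.

```python
# A toy baseline for text-based phishing detection. The emails and
# labels here are invented for illustration; this is a sketch of the
# general technique, not any vendor's detector.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your invoice is attached, please review before Friday",
    "Team lunch moved to 1pm, see you there",
    "Urgent: verify your account now or it will be suspended",
    "Your parcel is held, pay the customs fee at this link",
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = phishing (hypothetical)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)

print(clf.predict(["Confirm your password immediately via this link"]))
```

A model like this keys on wording alone, so fluent LLM-generated lures degrade it; that fragility is the argument for layering AI-based detection over multiple signal types.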

Another opportunity for generative AI is in enhancing cybersecurity training – basing it on actual attacks, and therefore more real, personal, timely and engaging than mandatory, periodic, simulated awareness sessions, he adds. “At Barracuda, we’re building functionality that uses generative AI to educate users when a real-world cyber threat, such as a malicious link, is detected in an email they’ve received. We believe this will provide an impromptu opportunity to train the user if they fall for the attack when clicking through the malicious link, ultimately increasing the firepower against threats and changing user behaviour when faced with risk. In many ways, it’s a way to build immunity and awareness against real threats in the parallel universe of cyberspace.”
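What Shi describes is a just-in-time training loop: a real detection triggers education tied to that exact lure. The sketch below is a minimal, hypothetical illustration of that pattern, not Barracuda’s functionality; every name in it is invented.

```python
# Minimal sketch of just-in-time security training: when a user clicks
# a link the detector has flagged, block it and show training grounded
# in that specific email. Hypothetical names throughout.
from dataclasses import dataclass

@dataclass
class Detection:
    url: str
    verdict: str   # e.g. "credential_phish"
    lure: str      # what made the email convincing

def training_message(d: Detection) -> str:
    # In a real system this text might itself be drafted by an LLM from
    # the detection details; a static template keeps the sketch simple.
    return (
        f"This link ({d.url}) was blocked as {d.verdict}. "
        f"The email used a common trick: {d.lure}. "
        "Check the sender's address and hover over links before clicking."
    )

def on_click(d: Detection) -> str:
    # Block the navigation and return the teachable-moment message.
    return training_message(d)

print(on_click(Detection("hxxp://acc0unt-verify.example",
                         "credential_phish",
                         "an urgent account-suspension warning")))
```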

The impact of AI – including, but not limited to, generative AI – on the cyberthreat landscape will become ever more pervasive, he predicts. “Attackers are already leveraging advanced AI algorithms to automate their attack processes, making them more efficient, scalable, and difficult to detect. These AI-driven attacks can adapt in real time, learning from the defenses they encounter and finding innovative ways to bypass them. Ransomware attacks are evolving into more targeted campaigns as cybercriminals focus on critical infrastructure and high-value targets, aiming to inflict maximum damage and, in turn, demand exorbitant ransoms.

“We can’t put the AI genie back in the bottle – but nor should we want to. What we need to do is harness its power for good.”
