AI and cyber in 2024

by Mark Rowe

AI will be on both sides of security attacks, says Aaron Rosenmund, Senior Director of Security and GenAI Skills at the tech learning company Pluralsight, in a preview of 2024.

Cybersecurity is a never-ending game of cat and mouse. As technology advances, attackers find new ways to exploit vulnerabilities, which are then patched and the cycle repeats. Artificial intelligence has added a whole new level of complexity as the barrier to entry for creating malicious code is now much lower. Next year cyber teams will have to keep their wits about them as the threat grows. Here’s what they should look out for.

GenAI and training

Generative AI is making cyber threat actors more effective at social engineering. In particular, it becomes much easier to craft convincing lures in languages other than English, which widens the pool of potential victims considerably.

In addition, the tactics used by ransomware actors are growing in sophistication. This increases the impact of the initial breach and leaves the rest of the cyber kill chain as the points where attackers can be detected and stopped. Defenders need to learn from these post-initial-access attacks: understand how they work and how they are being used as the common playbook in the most recent breaches.

Traditional passwords for now

Unfortunately, we still have many years of traditional passwords ahead of us, so don’t expect to ditch password managers just yet. Don’t get me wrong: we have seen massive adoption of multi-factor authentication and single sign-on products from Microsoft, Cloudflare, Okta and others, and the recent move to passkeys instead of passwords is welcome. But the baseline approach to accessing websites, apps and services still often starts with a text password, even if combined with biometrics or a hardware token. We need to move away from passwords faster, and IT and security teams should work with business leaders to remove them wherever possible.
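
To make the passkey shift concrete, here is a minimal sketch of browser-side passkey registration using the standard WebAuthn API (navigator.credentials.create). The relying party, user details and challenge handling are illustrative placeholders; in a real deployment the challenge is issued by, and verified on, your server.

    // Minimal sketch of passkey (WebAuthn) registration in the browser.
    // example.com, the user details and the local challenge are
    // illustrative; a real challenge is generated and checked server-side.
    async function registerPasskey(): Promise<Credential | null> {
      const challenge = crypto.getRandomValues(new Uint8Array(32));
      return navigator.credentials.create({
        publicKey: {
          challenge,
          rp: { name: "Example Corp", id: "example.com" },
          user: {
            id: crypto.getRandomValues(new Uint8Array(16)), // a stable user handle in practice
            name: "alice@example.com",
            displayName: "Alice",
          },
          // -7 = ES256, the most widely supported signing algorithm
          pubKeyCredParams: [{ type: "public-key", alg: -7 }],
          authenticatorSelection: {
            residentKey: "required",       // a discoverable credential, i.e. a passkey
            userVerification: "preferred", // biometrics or device PIN instead of a password
          },
        },
      });
    }

The point of the design is that no shared secret ever leaves the device: the authenticator signs a server challenge with a private key, so there is nothing for a phishing site to harvest.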

Deepfakes more than ever

We can’t base our defences on current technology when AI is advancing so fast. Though public and corporate entities will be held accountable for telling us when they use AI, through measures like digital watermarks, criminals will not.

It is highly likely we will see the rise of criminal deepfake-as-a-service, used to scam access to systems via social engineering. The defence here is not a technical one but a change in processes and mindset. We must now assume that deepfakes will be undetectable and that we can no longer rely on authentication techniques that can be duped. The public will need to change their mindset and realize they cannot trust anything presented to them, to an extent we have not yet seen.

End-to-end encryption and authentication

The lines between mobile device systems and standard desktop operating systems will continue to blur. Currently they are distinct, and most threat actor tradecraft attacks traditional desktop targets. But as desktop operating systems converge with those running our mobile devices, we will see an interesting mix of old tactics bleeding into a “new” device medium.

However, the tipping point for this won’t come in 2024. For now, fewer attackers have the capability to operate fully in the mobile space, given the variety of mobile operating systems and devices. As for messaging and encryption: all messaging, calls and communication of any kind should only be accessible to the originally intended audience – period.

End-to-end encryption is a must, and with the computing power we have now, there is no limitation from a hardware perspective. Inherent privacy is an expectation, one that should be respected in the same way free speech is. This will run up against governments, as we have seen in the courts, but any backdoor invalidates the whole point of end-to-end encryption.
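
As a minimal sketch of why backdoors and end-to-end encryption are incompatible, consider a key exchange written against the browser’s standard Web Crypto API: each endpoint derives the same symmetric key from its own private key and the peer’s public key, so the relaying service only ever sees ciphertext. (The curve and parameter choices here are illustrative.)

    // Minimal sketch of end-to-end encryption with the Web Crypto API.
    // Endpoints exchange only public keys; the relay never holds a key,
    // which is exactly the property a mandated backdoor would break.
    async function makeKeyPair(): Promise<CryptoKeyPair> {
      return crypto.subtle.generateKey(
        { name: "ECDH", namedCurve: "P-256" },
        false,          // private key is not extractable
        ["deriveKey"],
      );
    }

    async function deriveSharedKey(
      myPrivateKey: CryptoKey,
      peerPublicKey: CryptoKey,
    ): Promise<CryptoKey> {
      return crypto.subtle.deriveKey(
        { name: "ECDH", public: peerPublicKey },
        myPrivateKey,
        { name: "AES-GCM", length: 256 },
        false,
        ["encrypt", "decrypt"],
      );
    }

    async function encryptMessage(key: CryptoKey, plaintext: string) {
      const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per message
      const ciphertext = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv },
        key,
        new TextEncoder().encode(plaintext),
      );
      return { iv, ciphertext }; // decryptable only by the key holders at the endpoints
    }

Real messengers layer authenticated identity keys and key ratcheting on top of this, but the core property is the same: only the endpoints ever hold the decryption key.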

AI should be a boon to both sides of the security landscape, but how far it will enable new attacks is up for debate. Mostly, we see phishing emails that no longer have the tell-tale signs of bad grammar and misspelt words. To be honest, we shouldn’t have been relying on the English competency of attackers as our main deterrent, so this evolution shouldn’t substantially change the threat landscape, or at least its impact.

One area that is more dangerous is the use of AI to assist in the development of malware. Software development is one of the areas where AI proved assistive and productive early on, and malware is just software with different intent.

So, it is natural that this rising tide will lift the capabilities of malware developers along with everyone else. However, it is a matter of evenly distributing capabilities across teams, not necessarily creating new attacks or exploits.

If we can leverage AI in the same manner from a defensive perspective, we should be able to turn the tables and make it very difficult for malicious groups to operate without significant investment in newly created techniques that assistive AI cannot really help with.
