What’s new about AI

by Mark Rowe

John Linford, Director, The Open Group Security Forum and Open Trusted Technology Forum, discusses what artificial intelligence (AI) will (and won’t) change about security.

When thinking about how new technology comes into being, it is easy to assume that the tools we use were created specifically for the problems we apply them to.

In fact, while this is sometimes the case, technological development in the digital age is at least as likely to follow a pathway of creation, application, and exploitation: that is, it is often only after new tools are developed that people and businesses set about discovering what new possibilities they open up and turning those possibilities into new processes or revenue opportunities.

This is a big reason why modern technological progress can feel so unpredictable. The founding designers of the internet, for example, could hardly have imagined the diversity of uses we have since found for this global connectivity, much less the consequences for how we work and think that it is still having.

It is also a big reason why keeping pace with the security implications of technological change is so fraught with challenge. In the latest major wave of innovation – the intense investments being made into artificial intelligence, spurred by the emergence of more advanced, more freely available generative AI tools – we have already seen significant use cases arise for cyber criminals.

Easy generation of convincingly human-like text output threatens to supercharge phishing and social engineering tactics. AI can automate both the generation of malware and the management of botnets. In recent work, researchers have shown how generative AI might render ID checks and facial recognition systems “effectively useless”.

It is important to remember, amidst all this, that the threats we imagine or even experience first might not be representative of the security challenges that a technology will pose tomorrow. We are still very much in the explorative ‘application’ phase of AI, and criminals will undoubtedly innovate as creatively as businesses and other legitimate users do.

That does not mean, however, that security professionals should feel they are without solid ground to stand on as AI-empowered threats rise in scope and significance. The nature of the risks, looking forward, may be unpredictable, but much of what we are seeking to protect will remain the same – and that provides a very solid basis to work from.

A big part of the nature of the task, in fact, stems from those founding moments of the internet, in the design of the internet protocol suite which underpins communication between devices over large, unmapped networks. It is based on a simple, adaptable logic in which local resources are made visible to the outside world and connections between machines are established before the machines share any information about who the user might be or why they are seeking access to that resource.

This minimalist design makes the internet protocol suite highly flexible and resilient, which was vital to its initial success when networking was still experimental. That same visibility can also, however, offer attackers a large target, and it gives them considerable leeway to move through systems if and when they do break through security barriers.

A ‘connect first, ask questions later’ assumption is why even small security failures can have enormous consequences – and that’s equally true whether the attacker is using traditional techniques, automating their work through machine learning, or tailoring their phishing attempts with generative AI.
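
To make that assumption concrete, consider a minimal Python sketch (the host and port here are purely illustrative): a plain TCP connection completes before the client has said anything about who it is or why it is connecting.

```python
import socket

# The TCP three-way handshake needs only an IP address and a port.
# No identity, credentials, or purpose is exchanged beforehand.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    # At this point the connection is fully established, yet the
    # server knows nothing about who we are or what we want.
    print("Connected from", conn.getsockname(), "to", conn.getpeername())
```

Any authentication happens only afterwards, at the application layer, by which time the attacker already has a live channel to probe.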

It is also an assumption which a Zero Trust methodology, when it is well implemented, upends. Zero Trust, of course, has itself become something of a buzzword in the cybersecurity space over the last decade or so, but, like AI, that is precisely because it brings something new to the table which we are still learning how best to apply.

By establishing policy enforcement points between users and the resources they need to access, a Zero Trust approach means that networks can ask detailed, granular questions before establishing a full connection.

These questions can go well beyond confirming credentials like passwords, restricting access on the basis of things like the user’s typical behaviors and whether the machine being used has accessed the network before. Asking such questions at every point along the user’s journey effectively closes off the attacker’s ability to turn small break-ins into big impacts.
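
As a rough sketch of what a policy enforcement point might evaluate – the data sources, names, and rules below are invented for illustration, not drawn from any particular product or standard – the decision logic could look something like this:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    credentials_valid: bool
    device_id: str
    resource: str

# Hypothetical lookups standing in for real identity, device inventory,
# and behavioural baseline data sources.
KNOWN_DEVICES = {"alice": {"laptop-123"}}
TYPICAL_RESOURCES = {"alice": {"payroll-app", "email"}}

def policy_decision(req: AccessRequest) -> bool:
    """Evaluate a request in context at a policy enforcement point,
    before any connection to the resource is established."""
    if not req.credentials_valid:          # credentials are necessary...
        return False
    if req.device_id not in KNOWN_DEVICES.get(req.user, set()):
        return False                       # ...but not sufficient: unknown device
    if req.resource not in TYPICAL_RESOURCES.get(req.user, set()):
        return False                       # atypical resource for this user
    return True

# Every hop in the user's journey re-runs this check.
print(policy_decision(AccessRequest("alice", True, "laptop-123", "email")))  # True
print(policy_decision(AccessRequest("alice", True, "tablet-999", "email")))  # False
```

The point of the design is that a stolen password alone no longer opens the door: the request fails unless the whole context checks out, at every step.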

While AI will not inherently change what cybersecurity teams have to protect, or the value of the best approaches available for doing so, it will increasingly augment our defensive practices. In Zero Trust, making good calls on whether or not to authenticate access relies on analyzing large amounts of complex data in ways that AI technology is well suited to.
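
To illustrate the kind of analysis involved – this is a toy sketch with invented session features, using scikit-learn’s IsolationForest as a stand-in for whatever model a real deployment would use – an anomaly score learned from past behavior can feed directly into the access decision:

```python
from sklearn.ensemble import IsolationForest

# Invented per-session features: [hour_of_day, megabytes_downloaded].
# A real Zero Trust deployment would draw on far richer telemetry.
history = [[9, 40], [10, 35], [11, 50], [9, 45], [14, 38], [10, 42]]

model = IsolationForest(contamination=0.1, random_state=0).fit(history)

def risk_informed_decision(session) -> bool:
    # predict() returns 1 for sessions resembling the learned baseline,
    # -1 for anomalies that should trigger step-up checks or denial.
    return model.predict([session])[0] == 1

print(risk_informed_decision([10, 44]))  # typical working pattern
print(risk_informed_decision([3, 900]))  # 3am, huge download: flagged
```

A real policy engine would combine many such signals and would typically step up authentication rather than deny outright, but the principle is the same: the model does the heavy statistical lifting so the enforcement point can ask better questions.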

The first step, though, is to establish a baseline of Zero Trust capabilities, on the basis of standards work like The Open Group Zero Trust Commandments, to build strong fundamentals that are fit for a still-unpredictable future of AI-powered cyber attacks.
