
AI cyber security code

by Mark Rowe

Codes of practice for protecting AI (artificial intelligence) models from hacking and sabotage have been unveiled by the UK Government.

During a speech at CYBERUK, the UK Government's flagship annual cyber security conference in Birmingham, Technology Minister Saqib Bhatti announced two new codes of practice for developers to work on cyber security in AI models and software.

The codes set out requirements for developers to make their products resilient against tampering, hacking, and sabotage.

In the last 12 months, half of businesses (50pc) and a third of charities (32pc) reported cyber breaches or attacks, according to the UK official Cyber Security Breaches Survey 2024; phishing remained the most common type of breach. The codes show developers how software can be built in a secure way, with the aim of preventing attacks such as the one on the MOVEit software in 2023 which compromised sensitive data globally.

Minister at the Department for Science, Innovation and Technology (DSIT) Saqib Bhatti said: “We have always been clear that to harness the enormous potential of the digital economy, we need to foster a safe environment for it to grow and develop. This is precisely what we are doing with these new measures, which will help make AI models resilient from the design phase.

“Today's report shows not only are we making our economy more resilient to attacks, but also bringing prosperity and opportunities to UK citizens up and down the country. It is fantastic to see such robust growth in the industry, helping us cement the UK's position as a global leader in cyber security as we remain committed to foster the safe and sustainable development of the digital economy.”

The UK authorities say that the AI cyber security code is intended to form the basis of a global standard.

See also on the NCSC website, National Cyber Security Centre (NCSC) chief Felicity Oswald's day one speech. She spoke of NCSC optimism ‘that the net benefit of AI cyber security will far outstretch any adversary's gain in their offensive capability, whether that is through fixing code or detecting intrusions. But we should not sit on our laurels.’ Meanwhile in a speech yesterday, the NCSC's Chief Technology Officer Ollie Whitehouse said that the market for building secure, resilient technology isn't working, as the market does not incentivise companies to do so.

Comments

Kevin Curran, engineering professional institute IEEE senior member and professor of cybersecurity at the University of Ulster says: “Understanding how GenAI systems arrive at their outputs can be difficult. This lack of transparency means it can be hard to identify and address potential biases or security risks. GenAI systems are particularly vulnerable to data poisoning and model theft. If companies cannot explain how their GenAI systems work or how they have reached their conclusions, it can raise concerns about accountability and make it difficult to identify and address other potential risks.

โ€œTo mitigate this, organisations should consult with data protection experts and keep abreast of regulatory changes and develop a more robust security strategy. This approach helps not only in avoiding legal pitfalls but also in maintaining consumer trust by upholding ethical AI practices and ensuring data integrity. Other best practices include minimising and anonymising data use, establishing robust data governance policies, conducting regular audits and impact assessments, securing data environments, and reminding staff of current security protocols.

โ€œMoving forwards, businesses need to stay ahead of potential threats. The threat landscape is constantly evolving, so organisations need to keep pace and ensure that they regularly reviewing and upgrading their defences. Some approaches that worked just a few years ago are now obsolete and given how rapidly artificial intelligence has been rolled out in recent months, enterprises must adopt more comprehensive data protection strategies and tools to secure their systems.โ€

And Dr Darren Williams, CEO and founder of anti-ransomware platform Blackfog, welcomed the NCSC and insurers' initiative. He said: “Data exfiltration is a key component of most ransomware attacks, now at the core of 90 per cent of all attacks, as it allows the attackers to extort their victims even after they restore their systems. Paying ransoms only fuels the cybercriminals' operations and does not guarantee the recovery or deletion of the stolen data, so this effort to reduce ransomware payments demonstrates a dedication to a more secure cyber future, requiring all participants to exhibit resilience, adaptability, and constant vigilance.”