Cyber

AI arms race

by Mark Rowe

Machine learning will likely be equally effective for offensive and defensive purposes (in cyber and kinetic theatres), and hence one may envision an “AI arms race” eventually arising. That’s according to a study produced by the cyber product company F-Secure and its academic and other partners in SHERPA – an EU-funded project founded in 2018 by 11 bodies from six countries. Briefly, the project considers the ethical dimensions of smart information systems (SIS).

The authors warn that, to remain competitive, companies may forgo ethical principles, ignore safety concerns, or abandon robustness guidelines in order to push the boundaries of their work or to ship a product ahead of a competitor. This trend towards low-quality, fast time-to-market products is already prevalent in the Internet of Things: “Similar recklessness in the AI space could be equally negatively impactful.”

It says in its summary: “Machine-learning-powered systems will also affect societal structure with labour displacement, privacy erosion, and monopolisation (larger companies that have the resources to fund research in the field will gain exponential advantages over their competitors). The capabilities of machine learning systems are often difficult for the lay person to grasp. Some humans naively equate machine intelligence with human intelligence. As such, people sometimes attempt to solve problems that simply cannot (or should not) be solved with machine learning.

“Even knowledgeable practitioners inadvertently build systems that exhibit social bias due to the nature of the training data used. The first section of this report details common errors made while designing, training and deploying machine learning models, provides some recommendations to avoid such pitfalls, and concludes with a discussion of the ethical implications of badly designed Smart Information Systems. Data analysis and machine learning methods are powerful tools that can be used for both benign and malicious purposes.”
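To make the training-data point concrete, here is a minimal sketch – entirely synthetic and not drawn from the report – in which past hiring decisions were biased against one group. A plain logistic regression trained on those historical labels reproduces the bias; all names, features and figures below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

skill = rng.normal(size=n)                  # the only legitimate signal
group = rng.integers(0, 2, size=n)          # protected attribute (0 or 1)

# Assumption: past decision-makers held group 0 to a higher bar, so the
# historical labels encode discrimination rather than pure merit.
hired = (skill - 0.8 * (group == 0) + 0.5 * rng.normal(size=n) > 0).astype(float)

X = np.column_stack([np.ones(n), skill, group])   # intercept, skill, group
weights = np.zeros(3)

# Fit a plain logistic regression by batch gradient descent on the log-loss.
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ weights))
    weights -= 0.5 * X.T @ (p - hired) / n

print(f"weight on skill: {weights[1]:+.2f}")
print(f"weight on group: {weights[2]:+.2f}")  # clearly non-zero: the model has
                                              # absorbed the historical bias
```

Nothing in the code tells the model to discriminate; the bias arrives entirely through the labels, which is exactly the failure mode the report describes.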

The report looks at a number of mostly still-potential malicious uses of artificial intelligence, including intelligent automation, analytics, disinformation and fake news, phishing and spam, synthesis of audio, visual and text content, and obfuscation – reflecting how AI is already used by search engines, social media companies and recommendation websites.

“As artificial-intelligence-powered systems become more prevalent, it is natural to assume that adversaries will learn how to attack them. Indeed, some machine-learning-based systems in the real world have been under attack for years already.” The report provides step-by-step details of a number of popular attacks against machine-learning-based systems, gives examples of how these attacks might be used maliciously, and discusses the related ethical issues.

According to the report, adversarial attacks against machine learning models are hard to defend against because attackers have a great many ways of forcing models into producing incorrect outputs.
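A minimal sketch of one such attack follows, showing the fast-gradient-sign idea behind many evasion attacks: for a differentiable model, the attacker nudges each input feature against the gradient of the model’s own score until the decision flips. The toy linear classifier, its weights and the perturbation budget are illustrative assumptions, not systems or figures from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained binary classifier: a fixed linear model.
w = rng.normal(size=20)            # "learned" weights
b = 0.1                            # "learned" bias

def predict_proba(x):
    """Model's probability that x belongs to class 1 (sigmoid of the score)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A benign input that the model confidently assigns to class 1.
x = 0.5 * w                        # aligned with w, so its score is positive

# For a linear model, the gradient of the score w.r.t. the input is w itself,
# so stepping each feature against sign(w) lowers the score fastest per unit
# of perturbation (the fast-gradient-sign direction).
score = w @ x + b
epsilon = 1.1 * score / np.abs(w).sum()   # budget chosen just past the flip point
x_adv = x - epsilon * np.sign(w)

print(f"clean score:       {predict_proba(x):.3f}")      # close to 1.0
print(f"adversarial score: {predict_proba(x_adv):.3f}")  # below 0.5: misclassified
```

The asymmetry the report points to is visible even in this toy: the attacker needs only one perturbation direction that crosses the decision boundary, while a defender must close off all of them.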

Comment

SHERPA Project Coordinator Professor Bernd Stahl from De Montfort University in Leicester says: “Our project’s aim is to understand ethical and human rights consequences of AI and big data analytics to help develop ways of addressing these. This work has to be based on a sound understanding of technical capabilities as well as vulnerabilities, a crucial area of expertise which F-Secure contributes to the consortium. We can’t have meaningful conversations about human rights, privacy, or ethics in AI without considering cyber security. And as a trustworthy source of security knowledge, F-Secure’s contributions are a central part of the project.”
