NCSC Annual Review on ‘state-aligned actors’

by Mark Rowe

The threat to the nation’s most critical infrastructure is ‘enduring and significant’, amid a rise in state-aligned groups, according to the UK’s official National Cyber Security Centre (NCSC) in its latest Annual Review. The NCSC, part of the UK Government agency GCHQ, stressed the need to keep pace with the changing threat, particularly around the cyber resilience of the UK’s critical national infrastructure (CNI): the utilities that provide the country with safe drinking water and electricity, communications, transport and financial networks, and internet connectivity.

Over the past 12 months, the NCSC says it has observed the emergence of a new class of cyber adversary: state-aligned actors, who are often sympathetic to Russia’s invasion of Ukraine and are ideologically, rather than financially, motivated.

In May, the NCSC issued a joint advisory revealing details of ‘Snake’ malware, which has been a core component of espionage operations carried out by Russia’s Federal Security Service (FSB) for nearly two decades.

NCSC CEO Lindy Cameron said: “The last year has seen a significant evolution in the cyber threat to the UK – not least because of Russia’s ongoing invasion of Ukraine but also from the availability and capability of emerging tech.

“As our Annual Review shows, the NCSC and our partners have supported government, the public and private sector, citizens, and organisations of all sizes across the UK to raise awareness of the cyber threats and improve our collective resilience.

“Beyond the present challenges, we are very aware of the threats on the horizon, including rapid advancements in tech and the growing market for cyber capabilities. We are committed to facing those head on and keeping the UK at the forefront of cyber security.”

The NCSC notes a new trend of malicious actors targeting the personal email accounts of high-profile and influential individuals involved in politics, specifically those the attackers believe hold information of interest. Personal rather than corporate accounts are being targeted because their security is less likely to be managed in depth, the NCSC says. Democratic elections almost certainly represent attractive targets for malicious actors, it adds.

Comments

AI-powered cyber warfare is unfortunately in its heyday, said David Critchley, Regional Manager UK and Ireland at Armis. “This drastic change in boldness has brought cyberwarfare out from the shadows into the open – arguably flaunted by threat actors and nation-states – with seriously ill intent.”

Deryck Mitchelson, Check Point Software’s Global CISO and Government Advisor, said: “We continue to see an increase in successful cyberattacks against CNI, which is uniquely placed due to its scale, complexity and the part it plays in underpinning the functioning of our country. Critical infrastructure and services, ranked as the sixth most targeted industry in Check Point’s mid-year report, has seen a dramatic 26 per cent increase in ransomware attacks in the past year.”

Dr Ilia Kolochenko, founder of ImmuniWeb, says: “Freely accessible online tools that leverage generative AI for content creation, such as ChatGPT, may indeed accelerate, simplify and eventually amplify interference with elections by forming, influencing or eroding public opinion. For example, it is utterly simple to take truthful text, say a biography of a political candidate, ask an AI chatbot to criticise it for inaccuracies or possible false statements, and then disseminate the result on social networks. Some LLMs (large language models) allow the creation of high-quality pictures and videos that may be misused to ridicule or satirise political opponents. Worse, such technologies can create deepfake photos and videos that try to discredit a political party or its candidates. Social networks should urgently implement security mechanisms that detect AI-generated content and conspicuously mark it as such. Additionally, social network users could be prohibited – by the terms of service – from posting AI-created content without a visible disclaimer, on pain of having their account suspended. Otherwise, 90 per cent of online content on social networks risks being toxic AI-generated misinformation aimed at indoctrinating and brainwashing voters.”

Mark Jow, Technical Evangelist at the cyber firm Gigamon, picked up from the report that ransomware continues to be one of the greatest cyber threats, as seen recently in the UK in the cases of South Staffordshire Water, Royal Mail International and NHS 111. “Ransomware strains such as Snake are regularly upgraded to avoid detection, and the current version continues to evade it.

“Visibility is therefore imperative for organisations trying to keep pace with such change: how can they manage and protect against something they simply cannot see? In addition to the constant evolution of ransomware, rapid advances in technology, such as modern hybrid cloud networks, add an extra layer of complexity to an already difficult challenge, creating yet more blind spots and unseen weaknesses that traditional security and monitoring tools might overlook. With over 90 per cent of malware attacks using encryption to evade detection, and only 30 per cent of IT and security professionals having visibility into encrypted traffic, organisations need to remain alert to any suspicious activity in their systems. Achieving true, deep observability is the key to fortifying defences against an evolving threat landscape. Gaining oversight of suspicious traffic empowers threat detection, informs overall security posture management, and forms the foundation for a ‘Zero Trust’ strategy.”
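As a rough illustration of the blind spot Jow describes, the short Python sketch below flags encrypted flows that no inspection point ever saw in the clear. The flow-record fields and the ‘inspected’ flag are hypothetical simplifications for illustration, not Gigamon’s product logic.

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    # Hypothetical, simplified network flow metadata.
    src_ip: str
    dst_ip: str
    dst_port: int
    is_encrypted: bool  # e.g. TLS observed on the wire
    inspected: bool     # flow was mirrored to a decryption/inspection tap

def find_blind_spots(flows: list[FlowRecord]) -> list[FlowRecord]:
    """Return encrypted flows that were never inspected in the clear.

    The premise from the quote: most malware now hides inside encryption,
    so an encrypted flow with no decrypted visibility is a blind spot.
    """
    return [f for f in flows if f.is_encrypted and not f.inspected]

flows = [
    FlowRecord("10.0.0.5", "203.0.113.9", 443, is_encrypted=True, inspected=False),
    FlowRecord("10.0.0.7", "198.51.100.2", 80, is_encrypted=False, inspected=True),
]
for f in find_blind_spots(flows):
    print(f"Blind spot: {f.src_ip} -> {f.dst_ip}:{f.dst_port}")
```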

Christian Borst, EMEA CTO at Vectra AI, said that the NCSC is right to focus on ransomware. “With advanced threat actors using built-in admin tools to evade detection once they’ve breached CNI networks, it’s vital to spot the signs of malicious activity as soon as they occur. If not, we’ll continue to see major disruptions throughout 2024.

“CNI firms must focus on improving their threat detection and response capabilities to spot the signs of an attack as soon as possible. This means reducing the noise they face from security alerts and achieving a clearer attack signal, so security teams can accurately and reliably prioritise threats and stop attacks before they cause any disruption.”

On elections

Tim Ayling, Vice President EMEA at the cyber firm Imperva, warned that an election will be rich pickings for AI-created, hyper-realistic bots spreading disinformation and deepfakes. “With generative AI greatly enhancing amateur actors’ ability to create convincing deepfakes, managing the flood of false stories will be a huge challenge.

“Government must heed the NCSC’s advice now and start preparing for an election bot takeover. In 2022, nearly half of all internet traffic came from bots, and there is no doubt that this number will only increase. Bad bots are difficult to legislate against; the internet is borderless, so any country-specific law will be difficult to enforce if the bot operator is based in a different jurisdiction. Therefore, it’s critical that in the run-up to the election the Government takes bot management seriously. This means using machine learning and proactive monitoring to spot anomalies in web traffic and block malicious or unwanted bots.”
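As a minimal sketch of the machine-learning approach Ayling alludes to, the example below trains scikit-learn’s IsolationForest on per-client web-traffic features and flags outliers as likely bots. The features, numbers and threshold are illustrative assumptions, not Imperva’s method.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-client features: requests per minute, variance of
# inter-request intervals, distinct URLs hit, and fraction of requests
# reusing a single User-Agent string.
human_like = rng.normal(loc=[5.0, 1.0, 8.0, 0.9],
                        scale=[2.0, 0.3, 3.0, 0.05],
                        size=(500, 4))
bot_like = np.array([
    [300.0, 0.01, 450.0, 1.0],  # scraper: very fast, metronomic timing
    [250.0, 0.02, 2.0, 1.0],    # credential stuffer: hammers one endpoint
])

# Train on presumed-legitimate traffic, then score unseen clients;
# predict() returns -1 for anomalies (likely bots) and 1 for normal.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(human_like)
print(model.predict(bot_like))
```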

As for election interference, Eduardo Azanza, CEO at Veridas, called for educating users in the same way as for other types of scam. “A deepfake video usually contains inconsistencies that become evident when a face or body moves. An ear, for example, may have certain irregularities, or the iris may not show the natural reflection of light.

“Technologies built on AI techniques, such as biometrics, can also be used to detect deepfakes and protect the integrity of elections. Anti-spoofing techniques within biometrics can spot the differences between synthetic and authentic voices. Furthermore, multi-factor authentication processes that include voice biometrics and facial recognition make it much harder to impersonate politicians or electoral spokespeople.”
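A minimal sketch of the multi-factor idea Azanza describes: access is granted only when every factor independently clears its bar, so a single spoofed factor, such as a cloned voice, is not enough. The score sources and thresholds are hypothetical, not Veridas’s implementation.

```python
def mfa_decision(voice_score: float, face_score: float, liveness_ok: bool,
                 threshold: float = 0.85) -> bool:
    """Grant access only if every factor independently passes.

    voice_score / face_score: hypothetical biometric match confidences
    in [0, 1], as an anti-spoofing engine might report them.
    liveness_ok: whether the sample passed presentation-attack
    (deepfake/replay) detection.
    """
    return liveness_ok and voice_score >= threshold and face_score >= threshold

# A cloned voice may score high, but a failed liveness check still blocks it.
print(mfa_decision(voice_score=0.93, face_score=0.91, liveness_ok=False))  # False
print(mfa_decision(voice_score=0.93, face_score=0.91, liveness_ok=True))   # True
```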
