Case Studies

Microsoft on fake social media and elections

by Mark Rowe

China is using fake social media accounts to poll voters on the issues that divide them most, with the aim of sowing division and possibly influencing the outcome of the United States presidential election in its favour, according to a Microsoft blog.

China has also increased its use of AI-generated content to further its goals, says a report by the Microsoft Threat Analysis Center (MTAC) titled Same targets, new playbooks: East Asia threat actors employ unique methods. The report says that the Taiwanese presidential election in January saw a surge in the use of AI-generated content to augment influence operations (IO) by Chinese Communist Party-affiliated actors; and that this was the first time Microsoft Threat Intelligence had witnessed a nation-state actor using AI content in attempts to influence a foreign election.

Meanwhile, according to MTAC, North Korea continues to prioritise the theft of cryptocurrency funds, conducting software supply-chain attacks and targeting its perceived national security adversaries. While the chances of such content affecting election results remain low, China’s increasing experimentation with augmenting memes, videos, and audio will likely continue, MTAC concluded.

Comment

That China is allegedly aiming to disrupt elections by leveraging AI-generated content to spread disinformation shouldn’t come as a shock, says Adam Marrè, Chief Information Security Officer at the cyber firm Arctic Wolf. He says: “China has consistently shown the ability to exploit cybercrime tactics to further its economic and political goals. Thus, using AI to disrupt or influence elections is a logical extension of Chinese statecraft, as we saw with the recent revelations regarding iSoon and APT31.

“This development is a stark confirmation of an unsettling reality we’ve been aware of for some time now: the current worldwide election cycle is poised to be an unprecedented battleground, facing the widespread proliferation of artificial intelligence tools capable of generating and disseminating counterfeit content – a challenge for which we are alarmingly unprepared. In the realm of election cybersecurity, a survey reveals that over half of the leaders in US state and local governments rank misinformation campaigns as their top concern. Given the global scale of elections this year, including pivotal votes in the US, UK, India, and South Korea, the strategic interest of nations like China in destabilizing or influencing major democratic exercises cannot be overlooked.

“Cybersecurity professionals have long sounded the alarm on the risks posed by AI-fabricated content and its potential to sway voter opinions, a threat magnified by the capacity of adversaries, notably China, to propagate misleading and manipulated information through sophisticated mis- and disinformation campaigns. The capability of these actors to craft convincing content and rapidly spread it across digital platforms underlines an urgent need: the elevation of our collective cybersecurity vigilance to be able to better vet our sources of information and identify false content, the enhancement of defensive measures, and the expansion of cybersecurity education among our citizens. Such steps are critical to fortify our defenses against these emerging threats as we approach significant electoral milestones.”
