
Online Safety Bill against disinformation

by Mark Rowe

The UK Government says that it is strengthening its proposed internet safety laws, requiring social media firms to identify and root out state-backed disinformation.

The DCMS singled out Russia among other states carrying out state-sponsored disinformation and said that the Government will table an amendment linking the National Security Bill with the Online Safety Bill, whereby social media platforms will have to tackle malicious state-linked disinformation. This can include fake accounts set up by individuals or groups acting on behalf of foreign states to influence democratic or legal processes, such as elections and court proceedings, or the spreading of hacked information.

A new Foreign Interference Offence created by the National Security Bill will be added to the list of priority offences in the Online Safety Bill. Platforms would need to carry out risk assessments for content that is illegal under the Foreign Interference Offence. The regulator Ofcom will have the power to fine companies up to ten per cent of their annual global turnover, force them to improve their practices and block non-compliant sites.

Digital Secretary Nadine Dorries said: “The invasion of Ukraine has yet again shown how readily Russia can and will weaponise social media to spread disinformation and lies about its barbaric actions, often targeting the very victims of its aggression. We cannot allow foreign states or their puppets to use the internet to conduct hostile online warfare unimpeded.”


Matthew Gracey-McMinn, Head of Threat Research at Netacea, said: “Social media sites already have an ethical duty to take down misinformation, but this can often seem futile. To truly understand the scale at which political bots operate, you only need to look at recent events of global importance, such as the US elections.

“Researchers at Comparitech discovered a Facebook bot farm which had been used to create and manage 13,775 Facebook accounts. These had a total of 206,625 posts within a single month. It was clear that these bots had been used for political manipulation, as the researchers identified that the most used keyword in these posts was “Trump”, followed closely by “Biden” and “Covid”.

“Twitter has similar issues — at one point it notified 700,000 users who had encountered accounts linked to Russia and the Internet Research Agency (IRA), a Russian agency for influence operations. For example, Twitter discovered 3,814 accounts linked to the IRA, which had posted over 176,000 tweets, and over 50,000 accounts linked to the Russian government, which had tweeted over a million posts; some of these posts could still be seen on the platform in 2021.

“These are only a handful of instances. With more than three billion users across Twitter and Facebook, it is uncertain how many accounts are legitimate. In addition, as these bots grow in sophistication they will not be so easy to detect. Future bots will make greater use of machine learning, producing more human-like language that will help them embed themselves within human groups.

“Although this regulation is a step in the right direction, encouraging social media companies to invest in the right tools to tackle this issue, it is unlikely that they will be able to make any meaningful impact on bots in the immediate future. Social media sites will need to invest in AI to track and remove malicious actors and content. The NCSC recently stated that the UK should expect a long campaign of cyberattacks from Russia, and misinformation bots are only expected to increase in aggression.”


© 2024 Professional Security Magazine. All rights reserved.