Online activity, including social media recommendation algorithms amplifying harmful and misleading content, played a key part in driving the unrest and violence after the Southport murders in summer 2024, according to a committee of MPs. Social media companies’ responses to Southport and the violent aftermath were inconsistent and inadequate, often enabling, if not encouraging, the viral spread of such content, said the Science, Innovation and Technology Committee of MPs in a report.
They pointed to evidence that the tech firms may have profited ‘due to the heightened engagement’. The report went on: “The evidence supports the conclusion that social media business models incentivise the spread of content that is damaging and dangerous, and did so in a manner that endangered public safety in the hours and days following the Southport murders.”
When the committee put this to X, Meta and TikTok, they all denied profiting from the unrest.
As for regulation of the online space, much of the misleading or harmful content that drove the unrest last summer would not have been covered by the Online Safety Act even if it had been in full force, according to the report. The media regulator Ofcom confirmed to the committee that the Act is not designed to tackle the spread of “legal but harmful” content such as misinformation. The report called on platforms to ‘algorithmically demote fact-checked misinformation, with established processes setting out more stringent measures to take during crises’.
Comments
Among those who gave evidence to the committee was Dr Karen Middleton, a Senior Lecturer in the School of Strategy, Marketing and Innovation at the University of Portsmouth. She said: “Digital advertising is the financial engine behind much of the content we see online, including harmful misinformation. The current system is complex, opaque, and profit-driven, and has allowed disinformation networks and even criminal enterprises to thrive unchecked.” She suggested a more robust, joined-up approach that strengthens the Advertising Standards Authority’s remit while aligning the efforts of government regulatory bodies such as Ofcom and the data protection watchdog, the Information Commissioner’s Office (ICO).
The committee’s chair, Newcastle Labour MP Dame Chi Onwurah, said: “Social media can undoubtedly be a force for good, but it has a dark side. The viral amplification of false and harmful content can cause very real harm – helping to drive the riots we saw last summer. These technologies must be regulated in a way that empowers and protects users, whilst also respecting free speech.
“It’s clear that the Online Safety Act just isn’t up to scratch. The government needs to go further to tackle the pervasive spread of misinformation that causes harm but doesn’t cross the line into illegality. Social media companies are not just neutral platforms but actively curate what you see online, and they must be held accountable. To create a stronger online safety regime, we urge the government to adopt five principles as the foundation of future regulation, ranging from protecting free expression to holding platforms accountable for content they put online.”
She called for disincentivising the viral spread of misinformation, regulating generative AI, and ‘placing much-needed new standards onto social media companies’. She said: “A national conversation is already under way on this vital issue – we look forward to the government’s response to our report and will continue to examine the consequences of unchecked online harms, particularly for young people, in the months to come.”
Elections
Separately, a report by the Public Administration and Constitutional Affairs Committee (PACAC) of MPs on the July 2024 general election pointed to statistics from the Electoral Commission’s research showing ‘the significant association between people distrusting politics and people whose primary source of news was social media’.
Incident response
The Southport attack and the injuries at the Liverpool FC victory parade in May show that saying nothing is never a neutral decision, even if it might initially appear to be the risk-free option, according to an analysis, The Cost of Silence: Crisis Communication and Real-world Harm Following Security Incidents, by Sam Stockwell and Dr Al Baker of CETaS (the Centre for Emerging Technology and Security, at the Alan Turing Institute). Silence, delay and ambiguity in a crisis do not buy time but create space. That space will be filled by someone – and, too often, by those who benefit from confusion, suspicion or hate. Organisations leading strategic communication efforts during these acute periods of crisis therefore need to recognise that inaction is a decision with consequences of its own, and that silence has a cost.
Visit the Alan Turing Institute website for the full article.