In a new report analysing X’s open-sourced algorithm, human rights organisation Amnesty International concludes that the social media platform favoured “contentious engagement over safety” during the summer 2024 riots in the UK.
Following the mass stabbing of young girls at a dance class in Southport, near Liverpool, on 29 July 2024, violent far-right, anti-Muslim and anti-migrant riots erupted across the UK between 30 July and 5 August. The unrest involved attacks, looting and arson, and saw police arrest more than 1,000 rioters.
X, the Elon Musk-owned platform, had already positioned itself as pro-free speech and anti-censorship, claiming it wanted to be the world’s digital town square. But Amnesty International’s study suggests that the company’s decisions to amplify certain types of content can be linked to violent real-world outcomes.
“Our analysis shows that X’s algorithmic design and policy choices contributed to heightened risks amid a wave of anti-Muslim and anti-migrant violence observed in several locations across the UK last year,” said Pat de Brún, head of Big Tech accountability at Amnesty International.
He added that X’s approach “continues to present a serious human rights risk today”.
The research is based on analysis of X’s own recommendation-system code, which the company open-sourced in March 2023.
According to the not-for-profit’s report, X’s recommender system “systematically prioritises content that sparks outrage and provokes heated exchanges” rather than preventing harm.
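To illustrate the kind of mechanism the report describes, here is a minimal, hypothetical sketch of an engagement-weighted ranking score. The signal names and weights below are assumptions for illustration, not values taken from X’s published code; the point is that when replies count for far more than likes, posts that provoke arguments outrank posts that are merely appreciated.

```python
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    reposts: int
    replies: int
    author_is_paid_subscriber: bool = False

# Hypothetical weights, for illustration only: replies (a proxy for
# heated exchanges) count for far more than likes.
WEIGHTS = {"likes": 0.5, "reposts": 1.0, "replies": 13.5}

def engagement_score(post: Post) -> float:
    """Weighted sum of engagement signals."""
    return (WEIGHTS["likes"] * post.likes
            + WEIGHTS["reposts"] * post.reposts
            + WEIGHTS["replies"] * post.replies)

calm = Post(likes=900, reposts=50, replies=10)    # well-liked, little argument
heated = Post(likes=100, reposts=20, replies=60)  # fewer likes, many replies
print(engagement_score(calm))    # 635.0
print(engagement_score(heated))  # 880.0 -> the heated post ranks higher
```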
The report also claims that the platform’s verified-subscriber system comes with “built-in amplification biases”, meaning paying subscribers’ posts on X are “automatically promoted over those of ordinary users”, regardless of veracity or sentiment.
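Continuing the hypothetical sketch above, an amplification bias of this kind can be modelled as a flat multiplier on the ranking score. The 2.0 figure is an assumption, not a value from X’s code; what matters is that the boost applies regardless of what the post says.

```python
# Hypothetical boost factor; an assumption for illustration, not a
# value taken from X's published code.
SUBSCRIBER_BOOST = 2.0

def final_score(post: Post) -> float:
    """Engagement score, multiplied if the author pays for verification.
    The boost never inspects the post's content, so it is blind to
    veracity and sentiment."""
    score = engagement_score(post)
    if post.author_is_paid_subscriber:
        score *= SUBSCRIBER_BOOST
    return score

# The same heated post, now from a paying subscriber, doubles its score.
boosted = Post(likes=100, reposts=20, replies=60, author_is_paid_subscriber=True)
print(final_score(boosted))  # 1760.0
```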
Amnesty International further suggests that Musk’s personal involvement has fuelled hate on the platform, noting that his takeover coincided with the firing of safety engineers and the reinstatement of accounts that had previously been banned for hate speech.
“We are committed to keeping X safe for all our users,” an X spokesperson told Euractiv when reached for comment. “Our safety teams use a combination of machine learning and human review to proactively take swift action against content and accounts that violate our rules.”
Following the summer 2024 riots, UK communications regulator Ofcom warned social media platforms against letting their tools be used to incite hatred. The government also said it would review online safety legislation in case existing laws needed strengthening.
“Ofcom has robust enforcement powers [and] is also consulting on further safety measures, including requiring platforms to have crisis response protocols to stop harmful content spreading during moments of heightened risk,” a UK government spokesperson told Euractiv.
(nl)