In 2022, Fair Vote UK worked with a large number of civil society organisations to push the government to tackle harmful misinformation, online hate, incitement of violence, and more. We want to see a regulatory regime that finally puts an end to the “wild-west” era of big tech: where companies siphon your data in a never-ending quest for more engagement, leading to a disinformed public and a fragmented society. For democracy and political debate to function, people need to exist in a common reality. The unregulated realm of big tech is making that increasingly difficult.
For years now, we’ve worked with successive Conservative governments towards a robust Online Safety Bill (OSB) in the UK. The bill’s journey through parliament has been a long and messy one: it has been suspended multiple times and changed substantially on numerous occasions. We worked with the Joint Committee on the Draft Bill to highlight the key opportunities and challenges of making regulation work. The OSB started off as a bold and well-intentioned bill, albeit with red flags that needed to be addressed. Three Prime Ministers later, the OSB’s red flags have evolved into fundamental flaws, and the strongest parts of the bill have been removed or watered down beyond recognition.
Our initial concerns arose from the bill’s general strategy of regulating by exemption, and its focus on removing or censoring content instead of tackling the business model that drives social media platforms to sow division and hate. Instead of building robust freedom of speech protection measures into the bill, the OSB simply leaves entire groups exempt – and does a very poor job of defining who those groups are. The media exemption, for example, means platforms can’t enforce their policies on news publishers and don’t have to enforce them on any “journalistic content”. Qualifying as a news publisher is ludicrously simple, meaning the bill creates giant loopholes for anyone willing to take a few easy steps. “Content of democratic importance”, defined as anything which contributes to political debate, is also exempt. This means that the speech of any politician is automatically shielded from the regime, even if they use inflammatory rhetoric, incite a coup, or make false claims about a stolen election. The last exemption almost completely excludes paid advertising from the regime, meaning that anyone with a bit of cash can still spread targeted misinformation, as fossil fuel companies are known to do, without consequence.
Despite our continued support of the bill’s objective to make the UK “the safest place in the world to be online”, serious questions have emerged about whether the bill can still deliver on that promise. The latest iteration of the bill saw the removal of the “legal but harmful” duty of care, which required social media platforms to be transparent about how they deal with disinformation and abuse online, and to apply their rules consistently and in line with their policies. The new alternative, what the government is calling a “triple shield”, will supposedly allow users to filter out content they don’t want to see. Not only is it unclear whether this is even technologically feasible, but it also fails to address the systemic societal harms exacerbated online.
Fair Vote UK has, for years now, fought for a more robust OSB – one that actually achieves what it sets out to do. Taking on big tech and protecting people’s rights online requires a systemic and nuanced focus. The OSB may threaten social media giants with large fines, but it doesn’t hinder the predatory practices that are their bread and butter – and hindering those practices is exactly what good online regulation does. The OSB needs to look more like the EU’s Digital Services Act (DSA), which addresses the systemic sources of harm instead of seeking simply to remove content. Here’s some of what the DSA does:
- Legally binding transparency requirements for platforms, showing how they moderate content and how their algorithms work;
- Consumer protection rules around “deceptive design” and “dark patterns”, preventing platforms from manipulating people into buying things or clicking links;
- A ban on targeting people and amplifying content using certain types of sensitive data (e.g. sexual orientation, political affiliation). This goes a long way in addressing the fundamental harms ingrained in the business model of social media;
- A requirement that social media platforms tell people why they’re being targeted with certain kinds of content;
- A requirement that large social media platforms subject themselves to independent audits and rigorous risk assessments.
Our campaign is not over. If the bill passes in its current form, we’ll keep pushing to ensure that digital regulation in the UK is fit for purpose and actually protects democracy from harm.