
Self-Contradictory Online Safety Bill Falls Short

May 21, 2021

The government released its draft Online Safety Bill on May 12th, which the Department for Digital, Culture, Media and Sport (DCMS) claims will “keep children safe, stop racial hate, and protect democracy online”. Unfortunately, the draft bill falls far short of what is necessary to achieve those goals.

The new regulations impose a “duty of care” on digital platforms to monitor harmful and false content themselves. Ofcom, the UK’s regulator of broadcast media and telecommunications, is set to take on the additional role of internet watchdog, with powers to fine companies up to £18 million or 10% of global annual turnover, whichever is higher. It will also be able to block access to sites entirely.

The legislation, in attempting to carve out protections for journalism and freedom of speech, ultimately contradicts itself. News publishers’ own websites will not be in scope of the legislation, and articles shared on social media from “recognised news publishers” will be exempt from the usual requirements. Platforms will have to offer fast-track appeals processes to journalists whose content is removed, and “will be held to account by Ofcom for the arbitrary removal of journalistic content”, according to DCMS. This creates a safe zone for certain types of content, which are unfortunately very poorly defined.

For example, the legislation sets out provisions for citizen journalists, whose content will be treated the same as professional journalism – that is, exempt from the hate speech and misinformation requirements. But anyone can declare themselves a citizen journalist and use those provisions to spread hateful speech or false and misleading content.

Similarly, “political opinion” is protected from the new requirements, allowing racist, homophobic, or transphobic speech to be laundered as “political” and thereby placed out of scope.

The legislation attempts to frame tech regulation as a free speech issue, which is misleading and inaccurate. Nobody is arguing that people don’t have a right to free speech. But the right to speak is not the same as an entitlement to use a privately owned social media platform to reach millions of people regardless of the harm caused.

The problem is that the legislation, in the name of free speech, provides no functional mechanism for distinguishing between content that is in scope and content that is out of scope, giving tech companies ample room to ignore their duty of care in many, many instances.

This gets to the core of why the legislation falls so short: social media companies remain largely responsible for regulating content on their own platforms, and the bill gives them many openings to neglect that duty.

The core causes of harm online have more to do with the for-profit business model and algorithm-driven content delivery, both of which favour extremist content because it drives more engagement, which leads to more ad sales, which leads to more money. These companies are never going to tackle the problem on their own, and as long as bills like this one give them a viable excuse not to enforce and moderate properly, they will continue not to.

Supposed “self-regulation” models – like the Facebook Oversight Board – are not viable for the same, very obvious reason: platforms are not fit to regulate themselves because their business interests diverge from societal interests. Could you imagine allowing big tobacco to set up and fund its own oversight board to make determinations about who it should sell cigarettes to? It’s utter nonsense disguised by a layer of legitimacy. These companies will not make hard choices that hurt their bottom line. It is up to democratically elected governments to step in on behalf of citizens, with societal concerns at the forefront.

