Today, the Online Safety Bill moves to the House of Lords. Practically none of its biggest flaws have been addressed, and it looks like we'll be relying on peers to address all of the bill's glaring inconsistencies and concerning loopholes.
However, some changes are in the works. Notably, the government has approved an amendment on executive liability, meaning that high-level managers could be held criminally liable for failing to disclose required information to Ofcom. While the amendment was withdrawn in the Commons, we understand that the government plans to re-introduce it during the "ping-pong" stage between the Lords and Commons later in 2023.
Back in December, it was also made clear that the so-called "legal but harmful" provisions would be removed. While the words "legal but harmful" never actually appeared in the text, this refers to sections that made platforms responsible for moderating content which could cause harm to adults (but wasn't technically illegal). While inelegant, it was the only pathway for the bill to contend with harmful disinformation and racist, homophobic, and misogynistic abuse online. The government's new strategy is – worryingly – to simply expand the list of illegal content, creating new communications offences that jeopardise freedom of expression.
Yesterday, we heard of plans for future amendments in the Lords that would make it illegal to share videos of migrant crossings which portray them in a "positive light". The new offence relates to a broader, inhumane crackdown on migration under this government, increasingly ruled by the whims of a xenophobic and hyper-nationalist minority. In tandem with the massive powers given to the secretary of state under this bill and the threat to end-to-end encryption, there are serious concerns about the price this bill would have us pay for online safety – and whether these provisions would actually work.
This is just one element of what the government is now calling the "triple shield". It not only entails the requirement for platforms to remove illegal content, but two additional (supposed) layers of protection. Platforms will also need to consistently remove material that violates their terms and conditions and adopt "user-empowerment" measures that allow people to filter out content they find offensive.
It's unclear if these user-empowerment tools are even technologically feasible, and they certainly don't address the systemic dangers of allowing far-right echo chambers to fuel insurrections or coups – as we saw in both Brazil and the United States.
As it moves to the Lords, the bill looks like it is only getting worse. The red flags we identified early in its passage have evolved into fundamental flaws, and the strongest parts of the bill have been watered down or removed entirely.
Our campaign is not over. If the bill passes in its current form, we’ll keep pushing to ensure that digital regulation in the UK is fit for purpose and actually protects democracy from harm.