Social networks will be banned from discriminating against particular political viewpoints and required to protect “democratically important” content, the UK government has announced, as its landmark online safety bill finally heads to parliament.
The additions to the bill, based on a white paper first drafted by Theresa May’s government in 2019, would make the UK one of the first western nations to require social networks to take active steps to moderate their impact on the democratic process. But there are fears the requirement could lead social networks to refuse to take action against harmful content for fear it would be deemed democratically important.
Under the measures, “category 1” services – the largest and most popular social networks – will need to implement rules that protect “democratically important” content such as posts promoting or opposing government policy or a political party before a vote in parliament, an election or a referendum, or campaigning on a live political issue.
They will also be banned from discriminating against particular political viewpoints and will need to apply protection equally across political opinions.
As an example, the government said a company’s rules against content depicting graphic violence could include exceptions to allow campaign groups to raise awareness about the issue, “but it would need to be upfront about the policy and ensure it is applied consistently”.
Such a requirement has been regularly proposed in the US, where accusations of moderation bias against the Republican party have grown more frequent since Donald Trump was removed from most major social networks. If the online safety bill passes this year, the UK will be the first country to actively impose such a restriction on social networks.
The newest version of the bill also includes tighter protections for journalism. News websites were already explicitly exempt from much of the law’s remit, assuaging concerns that publications could be censored if they failed to moderate their comment sections. Now the draft bill includes additional protections for journalistic content posted to social networks, including from “citizen journalists”. Social networks will need to have “a fast-track appeals process” for journalists, and “will be held to account by Ofcom for the arbitrary removal of journalistic content”.
The bill also contains new requirements on platforms to act against online fraud, expanding the scope of the harms covered by the legislation. Platforms will be required to take responsibility for scams perpetrated by their users, such as romance scams and fake investment opportunities.
Ofcom will be the regulator in charge of enforcing the new regulations, and its chief executive, Melanie Dawes, welcomed the legislation.
“Today’s bill takes us a step closer to a world where the benefits of being online, for children and adults, are no longer undermined by harmful content,” she said. “We’ll support parliament’s scrutiny of the draft bill, and soon say more about how we think this new regime could work in practice – including the approach we’ll take to secure greater accountability from tech platforms.”