Britain’s Online Safety Bill, which aims to regulate the internet, has been revised to remove a controversial but essential measure.
LONDON — Social media platforms like Facebook, TikTok and Twitter will no longer be forced to remove “legal but harmful” content as part of revisions to the UK’s proposed legislation for online safety.
The Online Safety Bill, which seeks to regulate the internet, will be revised to remove the controversial but critical measure, UK lawmakers announced on Monday.
The government said the amendment would help safeguard freedom of expression and give people greater control over what they see online.
However, critics described the move as a “major weakening” of the bill, warning that it risks undermining the accountability of tech companies.
Previous proposals would have tasked tech giants with preventing people from seeing legal but harmful content, such as self-harm, suicide and abusive messages online.
Under the overhaul – which the government has dubbed a consumer-friendly “triple shield” – responsibility for choosing what content to view will instead fall to internet users, with tech companies required to introduce a system that lets people filter out harmful content they do not want to see.
Companies will, however, still be required to protect children and to remove content that is illegal or prohibited by their terms of service.
“Empowering adults”, “preserving freedom of expression”
UK Culture Secretary Michelle Donelan said the new plans would ensure that no “tech company or future government could use the laws as a license to censor legitimate opinions”.
“Today’s announcement refocuses the Online Safety Bill on its original objectives: the urgent need to protect children and combat criminal activity online while safeguarding freedom of expression, ensuring technology companies are accountable to their users and enabling adults to make more informed choices about the platforms they use,” the government said in a statement.
The opposition Labour Party said the amendment was a “major weakening” of the bill, with the potential to fuel misinformation and conspiracy theories.
Replacing harm prevention with an emphasis on freedom of expression undermines the very purpose of this bill.
Lucy Powell
Shadow Culture Secretary, Labour Party
“Replacing harm prevention with a focus on free speech undermines the very purpose of this bill and will embolden abusers, Covid deniers and hoaxers, who will feel encouraged to thrive online,” said Shadow Culture Secretary Lucy Powell.
Meanwhile, suicide risk charity group Samaritans said increased user controls should not replace accountability for tech companies.
“Increasing the controls people have is no substitute for holding sites accountable through the law, and this feels very much like the government snatching defeat from the jaws of victory,” said Julie Bentley, Samaritans’ chief executive.
The devil in detail
Monday’s announcement is the latest iteration of the UK’s sweeping Online Safety Bill, which also includes guidance on identity verification tools and new criminal offenses to tackle fraud and revenge pornography.
It follows months of campaigning by free speech advocates and online protection groups. Meanwhile, Elon Musk’s acquisition of Twitter has renewed the focus on online content moderation.
The proposals are now expected to return to the UK Parliament next week, with the aim of becoming law before next summer.
However, commentators say further fine-tuning of the bill is needed to ensure the loopholes are closed by then.
“The devil will be in the details. There is a risk that Ofcom’s monitoring of social media terms and conditions and ‘consistency’ requirements will encourage overzealous removals,” said Matthew Lesh, head of public policy at the free-market think tank the Institute of Economic Affairs.
Communications and media regulator Ofcom will be responsible for much of the enforcement of the new law and can fine companies up to 10% of their global revenue for non-compliance.
“There are also other issues that the government has not addressed,” Lesh continued. “The requirement to remove content that companies are ‘reasonably likely to infer’ is illegal sets an extremely low threshold and risks preemptive automated censorship.”