The UK wants to get tough on ‘harmful content’ within apps, on social networks and on websites – and is consulting on new legislation which could see companies like Apple, Google, Facebook and Twitter fined up to 4% of their worldwide turnover if they don’t act quickly to remove it.
Government minister Jeremy Wright said that “the era of self-regulation for online companies is over” …
The plans are in response to both footage of the terrorist attack in New Zealand, and the death of a 14-year-old girl who took her own life after viewing material advocating self-harm and suicide on Instagram.
The BBC reports on the plans.
The Online Harms White Paper is a joint proposal from the Department for Digital, Culture, Media and Sport (DCMS) and the Home Office. A public consultation on the plans will run for 12 weeks.
The paper suggests:
- establishing an independent regulator that can write a “code of practice” for social networks and internet companies
- giving the regulator enforcement powers including the ability to fine companies that break the rules
- considering additional enforcement powers such as the ability to fine company executives and force internet service providers to block sites that break the rules
The government argues that self-regulation of tech giants has proven ineffective, and that legislation is now required. It believes that the threat of potentially massive fines is needed to force companies to take the new rules seriously.
Discussing potential penalties on BBC Breakfast, Wright said: “If you look at the fines available to the Information Commissioner around the GDPR rules, that could be up to 4% of company’s turnover… we think we should be looking at something comparable here.”
The ‘harmful content’ label is a controversial one. It would cover things that are already illegal, such as terrorist content, incitement to hate crime and revenge pornography, but would also seek to ban legal content such as material advocating self-harm.
[This] became a prominent issue after 14-year-old Molly Russell took her own life in 2017. After she died, her family found distressing material about depression and suicide on her Instagram account. Molly’s father holds the social media giant partly responsible for her death.
Critics say that such a broad definition of harmful content would stifle free speech.
Jim Killock, executive director of Open Rights Group, said the government’s proposals would “create state regulation of the speech of millions of British citizens”.
Matthew Lesh, head of research at free market think tank the Adam Smith Institute, went further. He said: “The government should be ashamed of themselves for leading the western world in internet censorship. The proposals are a historic attack on freedom of speech and the free press.”
Two tech companies have given a cautious welcome to the proposals.
Rebecca Stimson, Facebook’s head of UK policy, said: “New regulations are needed so that we have a standardised approach across platforms and private companies aren’t making so many important decisions alone. New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech.”
Twitter’s head of UK public policy Katy Minshall said: “We look forward to engaging in the next steps of the process, and working to strike an appropriate balance between keeping users safe and preserving the open, free nature of the internet.”
Neither Apple nor Google had commented at the time of writing.
While the UK is still only consulting on the proposed legislation, Australia has already passed its own version, limited to violent content. That law gives tech companies as little as one hour to remove material after being notified by the government.