LONDON – The European Union is close to reaching an agreement on a set of new rules aimed at protecting Internet users by forcing big tech companies like Google and Facebook to step up their efforts to curb the spread of illegal content, hate speech and misinformation.
EU officials were negotiating the final details of the legislation, dubbed the Digital Services Act, on Friday. It’s part of a sweeping overhaul of the 27-nation bloc’s digital rulebook, underscoring the EU’s position at the forefront of the global movement to harness the power of online platforms and social media companies.
While the rules still need to be approved by the European Parliament and the European Council, which represents the 27 member countries, the bloc is far ahead of the United States and other countries in developing regulations that compel tech giants to protect people from harmful content proliferating online.
Negotiators from the EU’s Executive Commission, member countries and France, which holds the EU’s rotating presidency, were scrambling to strike a deal by the end of Friday, ahead of Sunday’s French elections.
The new rules, which aim to protect Internet users and their “fundamental rights online,” make tech companies more responsible for content on their platforms. Social media platforms like Facebook and Twitter would have to strengthen mechanisms to flag and remove illegal content like hate speech, while online marketplaces like Amazon would have to do the same for questionable products like counterfeit sneakers or dangerous toys.
These systems would be standardized so that they work the same way on any online platform.
This means that “any national authority will be able to request the removal of illegal content, regardless of where the platform is established in Europe,” EU Single Market Commissioner Thierry Breton said on Twitter.
Companies that break the rules face fines of up to 6% of their annual global revenue, which for tech giants would amount to billions of dollars. Repeat offenders could be banned from the EU market.
Google and Twitter declined to comment. Amazon and Facebook did not respond to requests for comment.
The Digital Services Act also includes measures to better protect children by prohibiting advertising directed at minors. Online advertisements targeting users based on their gender, ethnicity and sexual orientation would be prohibited.
There would also be a ban on so-called dark patterns – deceptive techniques to trick users into doing things they didn’t intend to do.
Tech companies would have to carry out regular risk assessments covering illegal content, misinformation and other harmful material, and then report on whether they’re doing enough to address the problem.
They would also need to be more transparent, providing information to regulators and independent researchers about their content moderation efforts. This could mean, for example, YouTube turning over data on whether its recommendation algorithm has been directing users to more Russian propaganda than normal.
To enforce the new rules, the European Commission would need to hire more than 200 new staff. To pay for that, tech companies would be charged a “supervisory fee” of up to 0.1% of their annual global net income, according to the negotiations.
The EU reached a similar political agreement last month on its Digital Markets Act, separate legislation aimed at limiting the power of tech giants and making them treat smaller rivals fairly.
Meanwhile, Britain has drafted its own online safety legislation that provides jail terms for senior tech company executives who fail to comply.
Copyright 2022 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.