Google (GOOGL) has told the EU it will not add fact checks to search results or YouTube videos, or use them in ranking or removing content, despite the requirements of a new EU law.
Google has informed the EU that it will not integrate fact-checking into search results or YouTube videos, nor use fact checks in ranking or content removal decisions, despite requirements under a new EU law. This decision, detailed in a letter obtained by Axios, reiterates Google's longstanding stance against incorporating fact-checking into its moderation practices.
In a letter to Renate Nikolay, deputy director general at the European Commission's content and technology division, Kent Walker, Google's president of global affairs, stated that the fact-checking integration required by the EU's Disinformation Code of Practice is "simply not appropriate or effective" for its services. The code would mandate displaying fact checks alongside search results and YouTube videos and require them to factor into ranking algorithms.
Walker defended Google’s current approach, citing its effectiveness during last year’s global election cycle. He also highlighted a new YouTube feature allowing certain users to add contextual notes to videos, describing it as having "significant potential."
The EU’s Code of Practice on Disinformation, initially introduced in 2018, includes voluntary commitments from tech companies and fact-checking organizations. Since the implementation of the Digital Services Act (DSA) in 2022, these commitments have become a focus for the European Commission, which has urged companies to convert them into a binding code of conduct under the DSA.
Walker confirmed that Google has no plans to comply with these fact-checking requirements. He stated that the company would withdraw from all commitments under the current Code of Practice before it is converted into a binding code of conduct under the DSA. However, Google will continue to enhance its existing content moderation practices, such as providing more context for search results and YouTube videos through AI disclosures and SynthID watermarking.
This position underscores ongoing tensions between major tech platforms and regulators regarding the management of misinformation and compliance with content standards.