Tech Companies Tackle Disinformation Ahead of Midterms

Ahead of the upcoming midterms, tech giants are rolling out new measures to combat disinformation. Google plans to release a tool in the coming weeks that highlights local and regional news about campaigns and races, surfaces key election dates for the user's state based on their location, and provides information on available voting options.

YouTube, meanwhile, has announced that it will promote mainstream news sources and display accurate election-related information beneath videos in English and Spanish. The platform also aims to keep its recommendation system from surfacing harmful election misinformation to users.

These steps are the latest efforts by major tech platforms to gear up for a high-stakes election season that could reshape control of the US Congress. Despite their efforts to address unfounded claims and misleading information, fundamental problems remain unsolved, including unsubstantiated allegations of vote rigging and misrepresentations of election results, in some cases amplified by candidates running in this election cycle.

Experts warn that, despite the companies' pledges of vigilance, extremists and others intent on polluting the information environment continue to refine their tactics, and those tactics could exploit vulnerabilities the platforms have not accounted for.

YouTube has also begun removing videos that violate its election policies, including those spreading falsehoods about the 2020 election. The company says it is targeting videos that push baseless claims of widespread voter fraud or false claims that the election was stolen or manipulated.

Twitter and Meta, the parent company of Facebook and Instagram, are taking different approaches for these midterms. Under its Civic Integrity Policy, Twitter prohibits content intended to undermine public confidence in official election results. Tweets that dispute results may be labeled or have their reach limited, though they are not necessarily removed outright.

Meta, by contrast, will focus on removing false claims about who can vote and on election-related calls to violence, but it does not explicitly ban claims of election fraud or remove such posts. The company also said users complained that its labels appeared too frequently in 2020, which may lead it to apply fewer labels this cycle.

Twitter, for its part, reportedly tested new labels for false information in 2020 and found they helped curb its spread, suggesting it may lean more heavily on labeling as a strategy. Meta, in contrast, may use fewer labels in response to user feedback that they were overused.

Karen Kornbluh, director of the Digital Innovation and Democracy Program at the German Marshall Fund, argues that tech companies must think more critically about their platforms' underlying design, which can promote harmful content and enable the manipulation of users. She pointed to the Facebook whistleblower's revelations that the platform's algorithms boosted extremist groups as evidence of such design flaws.

Technology companies face a multifaceted challenge: countering falsehoods and misinformation while steering users toward reliable sources. Meeting it will require critically examining how their core systems work and addressing systemic weaknesses to limit the impact of disinformation on the electoral process.
