How Google, Facebook, Twitter, and YouTube plan to handle misinformation surrounding 2020 presidential election results

By Danielle Abril
September 10, 2020, 6:00 PM ET

Google, YouTube, Facebook, and Twitter are preparing for an unprecedented hurdle they may face on the night of Nov. 3: not knowing who won the 2020 presidential election. 

A massive number of voters are expected to vote by mail, at least partially driven by a desire to avoid contracting the coronavirus. It takes longer to count absentee ballots than in-person ones, and any difficulties or delays could ultimately postpone election results by days or weeks, which could allow election misinformation and false claims of victory to go viral.

Google, YouTube, Facebook, and Twitter will be under increased pressure to control election-related misinformation, which the social media giants have historically struggled to police. Politicians, political campaigns, foreign actors, and even average users have long used the services to disseminate false claims about candidates, and, in some cases, undermine the credibility of this year’s election given its unique circumstances. 

The four services recently announced new policies aimed at mitigating false claims of victory. Here’s what they plan to do.

Google

Google is aiming to provide users with quick, reliable information on the election results with help from partners.

The search giant plans to promote information from partners like the Associated Press and the nonprofit Democracy Works in a box atop the search results page. The company also said it has ranking protections in place to ensure that reports claiming early victory will not surface in search results.

“I have extreme confidence that the team will handle this algorithmically, but if a challenging piece of information slips through, our policies will allow us to take that down,” said David Graff, Google’s senior director of global policy and standards, during a call with reporters on Thursday.

On the advertising side, Google said it already has policies that prohibit advertisers from using doctored or manipulated media or false claims that could undermine voter participation or trust in the election. Any ads that violate that policy are removed from Google.

YouTube

YouTube is taking steps to try to mitigate election misinformation and promote news from credible organizations.

Beginning Nov. 3, YouTube said it will attach an information panel to all videos about the election, noting that the results may not be finalized and linking users to Google’s election results box. It also will remove videos that mislead viewers about the voting process, encourage voter interference, or have been manipulated to mislead viewers in ways that pose a serious risk of harm.

To help combat misinformation, the company said it will promote information from news organizations and reduce the distribution of election misinformation.

Facebook

Facebook recently introduced a new policy for handling false claims of victory from candidates and political campaigns, but it’s unclear how the company will handle misinformation about election results posted by everyday users.

Last week, Facebook CEO Mark Zuckerberg announced that the company was partnering with Reuters and the National Election Pool to provide authoritative information about the results of the election. The information will be featured in the Voting Information Center, a hub on Facebook that provides users with information from authoritative sources. Facebook plans to proactively notify users when the election results become available.

If any candidate or campaign declares victory before the results are final, Facebook will apply a label telling users that the election has not yet been decided. The label also will direct people to the official results. The company reportedly will not label posts from media organizations that prematurely announce the election winner.

Facebook has yet to announce how it will handle user posts that spread misinformation about the election results. The company said it’s still finalizing those details.  

On Oct. 7, Facebook also announced that it will temporarily stop running all political ads after the polls close on Nov. 3, in addition to banning new political ads one week prior to the election. The company did not specify when it intends to end the ban.

The company also halted recommendations for political and social issue groups leading up to the election. It’s unclear when the company will lift that restriction.

Twitter

Twitter updated its civic integrity policy on Sept. 10, saying it plans to label or remove misleading information about the election results, as well as disputed claims that could undermine faith in the election itself. One month later, it expanded and clarified that policy.

Twitter said it plans to label tweets that falsely claim election victory and will remove tweets that encourage violence or interference with polling places. On Nov. 2, the company elaborated: labels will apply to tweets from candidates, campaigns, and users with more than 100,000 followers, as well as tweets that garner 25,000 or more likes, retweets, or quote tweets.

Twitter said it will consider the election results official when at least two of the following media organizations have made the announcement: The Associated Press, NBC News, ABC News, CBS News, CNN, Fox News, and Decision Desk HQ, a private company that’s tracking the election.

The company also is adding new restrictions and warnings to tweets containing misinformation from politicians, users with more than 100,000 followers, or users who receive a lot of retweets and replies. The tweets will be obscured by a warning alerting users that the claims have been disputed, and users will have to tap through it to see the tweet. Users won’t be able to reply, and they’ll be able to retweet only by adding a comment of their own.

Twitter users who try to retweet a flagged tweet now also see a prompt warning them that they’re about to amplify misinformation.

Twitter previously said it will weigh a tweet’s potential to cause harm when deciding whether to remove it. Content likely to cause specific harm will be removed, while tweets that mischaracterize events or pose more general harm will be labeled.
