
Meta is rolling back hate speech rules along with fact-checking and Zuckerberg says ‘recent elections’ are a catalyst

By Barbara Ortutay and The Associated Press
January 9, 2025, 5:32 AM ET
Mark Zuckerberg talks about the Orion AR glasses during the Meta Connect conference on Sept. 25, 2024, in Menlo Park, Calif. (Godofredo A. Vásquez—AP)

It wasn’t just fact-checking that Meta scrapped from its platforms as it prepares for the second Trump administration. The social media giant has also loosened its rules around hate speech and abuse — again following the lead of Elon Musk’s X — specifically when it comes to sexual orientation and gender identity as well as immigration status.


The changes are worrying advocates for vulnerable groups, who say Meta’s decision to scale back content moderation could lead to real-world harms. Meta CEO Mark Zuckerberg said Tuesday that the company will “remove restrictions on topics like immigration and gender that are out of touch with mainstream discourse,” citing “recent elections” as a catalyst.

For instance, Meta has added the following to its rules — called community standards — that users are asked to abide by:

“We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’” In other words, it is now permitted to call gay people mentally ill on Facebook, Threads and Instagram. Other slurs and what Meta calls “harmful stereotypes historically linked to intimidation” — such as Blackface and Holocaust denial — are still prohibited.

The Menlo Park, California-based company also removed a sentence from its “policy rationale” explaining why it bans certain hateful conduct. The now-deleted sentence said that hate speech “creates an environment of intimidation and exclusion, and in some cases may promote offline violence.”

“The policy change is a tactic to earn favor with the incoming administration while also reducing business costs related to content moderation,” said Ben Leiner, a lecturer at the University of Virginia’s Darden School of Business who studies political and technology trends. “This decision will lead to real-world harm, not only in the United States where there has been an uptick in hate speech and disinformation on social media platforms, but also abroad where disinformation on Facebook has accelerated ethnic conflict in places like Myanmar.”

Meta, in fact, acknowledged in 2018 that it didn’t do enough to prevent its platform from being used to “incite offline violence” in Myanmar, fueling communal hatred and violence against the country’s Muslim Rohingya minority.

Arturo Béjar, a former engineering director at Meta known for his expertise on curbing online harassment, said that while most of the attention has gone to the company’s fact-checking announcement Tuesday, he is more worried about the changes to Meta’s harmful content policies.

That’s because instead of proactively enforcing rules against things like self-harm, bullying and harassment, Meta will now rely on user reports before it takes any action. The company said it plans to focus its automated systems on “tackling illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams.”

Béjar said that’s even though “Meta knows that by the time a report is submitted and reviewed the content will have done most of its harm.”

“I shudder to think what these changes will mean for our youth, Meta is abdicating their responsibility to safety, and we won’t know the impact of these changes because Meta refuses to be transparent about the harms teenagers experience, and they go to extraordinary lengths to dilute or stop legislation that could help,” he said.
