According to Trump's AI czar, your aversion to AI isn't rooted in genuine dislike; rather, it's the result of a $1 billion scheme by the 'Doomer Industrial Complex' to manipulate your thinking.

By Eva Roytburg, Fellow, News

    Eva is a fellow on Coins2Day's news desk.

    David Sacks, U.S. President Donald Trump's AI and Crypto Czar, speaks to press outside of the White House on March 07, 2025 in Washington, DC.
    Kayla Bartkowski/Getty Images

Artificial intelligence might be the most widely adopted technology in modern history, yet it's also among the least trusted.

    TL;DR

    • David Sacks claims AI aversion stems from a $1 billion "Doomer Industrial Complex" scheme.
    • This complex allegedly manipulates public thinking about AI risks and dangers.
    • The scheme is funded by figures like Sam Bankman-Fried and Dustin Moskovitz.
    • Sacks contrasts US AI skepticism with China's more positive view, citing propaganda.

David Sacks insists that disconnect isn't due to AI threatening your job, your privacy, or the future of the economy itself. Instead, the venture capitalist and Trump adviser believes it's all part of a $1 billion scheme run through what he labels the "Doomer Industrial Complex": a clandestine consortium of Effective Altruist magnates funded by figures such as convicted FTX chief Sam Bankman-Fried and Facebook co-founder Dustin Moskovitz.

    This week, Sacks contended in an X post that public skepticism toward AI is not naturally occurring but rather deliberately created. He referenced studies from tech-culture academic Nirit Weiss-Blatt, who has dedicated years to charting the “AI doom” landscape of policy groups, non-governmental organizations, and futurists.

Weiss-Blatt documents a multitude of organizations advocating for stringent oversight or outright bans on sophisticated AI technologies. She contends that a significant portion of the funding for these groups originates from a select few contributors within the Effective Altruism community, such as Facebook co-founder Dustin Moskovitz, Skype's Jaan Tallinn, Ethereum creator Vitalik Buterin, and convicted FTX founder Sam Bankman-Fried.

Weiss-Blatt states that these philanthropists have collectively invested over $1 billion in initiatives aimed at researching or reducing the impact of "existential risk" stemming from AI. However, she points to Moskovitz's organization, Open Philanthropy, as "by far" the largest donor.

Open Philanthropy vehemently rejected the notion that it is presenting overly dramatic, futuristic predictions of disaster.

“We believe that technology and scientific progress have drastically improved human well-being, which is why so much of our work focuses on these areas,” an Open Philanthropy spokesperson told Coins2Day. “While AI has enormous potential to speed up scientific discovery, boost economic expansion, and broaden our understanding of the universe, it also presents certain novel dangers, a perspective echoed by figures from across the political divide. We advocate for considered, non-partisan efforts aimed at mitigating those risks and harnessing AI's substantial potential benefits.”

But Sacks, a fixture of Silicon Valley's venture capital circles and an early executive at PayPal, claims that funding from Open Philanthropy has done more than just warn of the risks; it's bought a global PR campaign warning of “Godlike” AI. He cited polling showing that 83% of respondents in China view AI's benefits as outweighing its harms, compared with just 39% in the United States, as evidence that what he calls “propaganda money” has reshaped the U.S. discussion.

Sacks has long pushed for an industry-friendly, no-regulation approach to AI, and to technology broadly, framed as part of the race to beat China.

    Sacks’ venture capital firm, Craft Ventures, did not immediately respond to a request for comment.

    What is Effective Altruism?

The “propaganda money” Sacks refers to comes largely from the Effective Altruism (EA) community, a wonky group of idealists, philosophers, and tech billionaires who believe humanity's biggest moral duty is to prevent looming catastrophes, such as out-of-control artificial intelligence.

The EA movement, founded roughly a decade ago by Oxford philosophers William MacAskill and Toby Ord, urges donors to use data and logic to achieve the greatest possible positive impact.

    That framework led some members to focus on “longtermism,” the idea that preventing existential risks such as pandemics, nuclear war, or rogue AI should take priority over short-term causes.

While some EA-aligned organizations advocate heavy AI regulation or even “pauses” in model development, others, like Open Philanthropy, take a more technical approach, funding alignment research at companies like OpenAI and Anthropic. The movement's influence grew rapidly before the 2022 collapse of FTX, whose founder Bankman-Fried had been one of EA's biggest benefactors.

Matthew Adelstein, a 21-year-old college student who writes a prominent Substack on EA, notes that the landscape is far from the monolithic machine that Sacks describes. Weiss-Blatt's own map of the “AI existential risk ecosystem” includes hundreds of separate entities, from university labs to nonprofits and blogs, that share similar language but not necessarily coordination. Yet Weiss-Blatt concludes that the “inflated ecosystem” is not “a grassroots movement. It's a top-down one.”

Adelstein disagrees, noting that the reality is “more fragmented and less sinister” than Weiss-Blatt and Sacks portray.

    “Most of the fears people have about AI are not the ones the billionaires talk about,” Adelstein told Coins2Day. “People are worried about cheating, bias, job loss — immediate harms — rather than existential risk.”

    He argues that pointing to wealthy donors misses the point entirely. 

    “There are very serious risks from artificial intelligence,” he said. “Even AI developers think there’s a few-percent chance it could cause human extinction. The fact that some wealthy people agree that’s a serious risk isn’t an argument against it.”

    To Adelstein, longtermism isn’t a cultish obsession with far-off futures but a pragmatic framework for triaging global risks. 

    “We’re developing very advanced AI, facing serious nuclear and bio-risks, and the world isn’t prepared,” he said. “Longtermism just says we should do more to prevent those.”

    He also brushed off accusations that EA has turned into a quasi-religious movement.

“I'd like to see the cult that's dedicated to doing altruism effectively and saving 50,000 lives a year,” he said with a laugh. “That would be some cult.”