
Bill Gates likens the rise of A.I. to nuclear weapons: ‘It’s not as grim as some people think’

By Eleanor Pringle, Senior Reporter, Economics and Markets
July 13, 2023, 7:00 AM ET
Bill Gates says estimations about how good or bad A.I. will be are overdramatic. Jordan Vonderhaar—Bloomberg/Getty Images

Microsoft cofounder Bill Gates is doubling down on his support for artificial intelligence—dismissing fears the technology could destroy humanity and take over the world.

The billionaire philanthropist has long been a cautionary advocate of the technology, having worked with Sam Altman’s OpenAI since 2016. Microsoft has since poured $13 billion into the ChatGPT maker.

In a blog post on his website Gates Notes, the founder of the Bill & Melinda Gates Foundation likened large language models (LLMs) like ChatGPT and Google’s Bard to other disruptive innovations like cars and nuclear weapons.

Both had filled the public with fear, Gates pointed out, and yet, through a series of guardrails, had been wrestled into a usable form.

“We didn’t ban cars—we adopted speed limits, safety standards, licensing requirements, drunk-driving laws, and other rules of the road,” wrote Gates—who just helped mint another A.I. unicorn.

“Although the world’s nuclear nonproliferation regime has its faults, it has prevented the all-out nuclear war that my generation was so afraid of when we were growing up.”

Using nuclear weapons as an example, Gates suggested regulators should look to history for a blueprint on how to handle the development of chatbots.

He explained: “For example, it will have a big impact on education, but so did handheld calculators a few decades ago, and, more recently, allowing computers in the classroom. We can learn from what’s worked in the past.”

The result, Gates believes, is that “the future of A.I. is not as grim as some people think or as rosy as others think.”

It’s this opinion that has previously incurred the wrath of fellow tech titan Elon Musk, the Tesla CEO. In a response to one of Gates’ earlier essays on the power of LLMs, Musk lashed out: “I remember the early meetings with Gates. His understanding of A.I. was limited. Still is.”

At the time, Musk was one of the early signatories of an open letter calling for a six-month pause on the development of anything more advanced than OpenAI’s GPT-4 chatbot. The Twitter owner has since launched his own A.I. company, xAI.

A.I. should be turned on itself

Gates also believes that the problems created by A.I. can be combated by the technology itself.

Take deepfakes, which are created by a type of A.I. called deep learning that can produce realistic, digitally altered videos and images. Gates believes they have the power to undermine elections and democracy, as well as to inflict a “horrific emotional impact” on individual victims.

Gates, reportedly worth $134 billion, said he is “hopeful,” however, thanks to the fact that A.I. can not only create deepfakes but also identify them. Intel, he highlighted, has developed a deepfake detector, while the U.S. government’s Defense Advanced Research Projects Agency is working on technology to identify whether video or audio has been manipulated.

The 67-year-old is also “guardedly optimistic” that the security industry will be able to combat more advanced hackers by turning the technology to its own advantage.

“A.I. can be used for good purposes as well as bad ones,” he wrote—pointing out that government and private-sector security teams need access to the most up-to-date technology in order to combat such attacks.

Gates made a veiled dig at the Musk-backed development pause for this reason, writing: “This is also why we should not try to temporarily keep people from implementing new developments in A.I., as some have proposed.

“Cybercriminals won’t stop making new tools. Nor will people who want to use A.I. to design nuclear weapons and bioterror attacks. The effort to stop them needs to continue at the same pace.”

Governments need to step up

Gates also addressed two of the major concerns from the public: job losses and changes to education.

On job losses—which Goldman Sachs has predicted could total 300 million because of A.I.—Gates placed the responsibility squarely on governments and business: “They’ll need to manage it well so that workers aren’t left behind—to avoid the kind of disruption in people’s lives that has happened during the decline of manufacturing jobs in the United States.”

Across the board—from deepfakes to children using ChatGPT to do their homework—Gates told policymakers they need to “be equipped to have informed, thoughtful dialogue with their constituents,” as well as to establish how closely they would work with other countries on legislation.

Lastly, Gates had advice for the public: engage.

“It’s the most transformative innovation any of us will see in our lifetimes, and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks,” he concluded.

About the Author

Eleanor Pringle, Senior Reporter, Economics and Markets

Eleanor Pringle is an award-winning senior reporter at Coins2Day covering news, the economy, and personal finance. Eleanor previously worked as a business correspondent and news editor in regional news in the U.K. She completed her journalism training with the Press Association after earning a degree from the University of East Anglia.