
Sam Altman and other technologists warn that A.I. poses a ‘risk of extinction’ on par with pandemics and nuclear warfare

By Tristan Bove
May 30, 2023, 10:32 AM ET
OpenAI CEO Sam Altman is warning about A.I.'s existential risks. Win McNamee—Getty Images

Technologists and computer science experts are warning that artificial intelligence poses threats to humanity's survival on par with nuclear warfare and global pandemics, and even business leaders who are leading the charge for A.I. are cautioning about the technology's existential risks.


Sam Altman, CEO of ChatGPT creator OpenAI, is one of over 300 signatories behind a public "statement of A.I. risk" published Tuesday by the Center for A.I. Safety, a nonprofit research organization. The letter is a single short sentence intended to capture the risks associated with A.I.:

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The letter’s preamble says the statement is intended to “open up discussion” on how to prepare for the technology’s potentially world-ending capabilities. Other signatories include former Google engineer Geoffrey Hinton and University of Montreal computer scientist Yoshua Bengio, who are known as two of the “godfathers of A.I.” due to their contributions to modern computer science. Both Bengio and Hinton have issued several warnings in recent weeks about the dangerous capabilities the technology is likely to develop in the near future. Hinton recently left Google so that he could discuss A.I.’s risks more openly.

It isn’t the first letter calling for more attention to be paid to the possible disastrous outcomes of advanced A.I. research without stricter government oversight. Elon Musk was one of over 1,000 technologists and experts who called for a six-month pause on advanced A.I. research in March, citing the technology’s destructive potential.

And Altman warned Congress this month that sufficient regulation is already lacking as the technology develops at a breakneck pace. 

The more recent note signed by Altman did not outline any specific goals beyond fostering discussion, unlike the earlier letter. Hinton said in an interview with CNN earlier this month that he did not sign the March letter, because a pause on A.I. research would be unrealistic given that the technology has become a competitive sphere between the U.S. and China.

“I don’t think we can stop the progress,” he said. “I didn’t sign the petition saying we should stop working on A.I. because if people in America stop, people in China wouldn’t.”

But while executives from leading A.I. developers including OpenAI and even Google have called on governments to move faster on regulating A.I., some experts warn that it is counterproductive to discuss the technology’s future existential risks when its current problems, including misinformation and potential biases, are already wreaking havoc. Others have even argued that by publicly discussing A.I.’s existential risks, CEOs like Altman are trying to distract from the technology’s current issues, including facilitating the spread of fake news just in time for a pivotal election year.

But A.I.’s doomsayers have also warned that the technology is developing fast enough that existential risks could become a problem sooner than humans can keep up with. Fears are growing in the community that superintelligent A.I., which would be able to think and reason for itself, is closer than many believe, and some experts warn that the technology is not currently aligned with human interests and well-being.

Hinton said in an interview with the Washington Post this month that the horizon for superintelligent A.I. is moving up fast and could now be only 20 years away, and that now is the time to have conversations about advanced A.I.’s risks.

“This is not science fiction,” he said.
