Prince Harry and his wife, Meghan, have teamed up with notable computer scientists, economists, artists, evangelical Christian leaders, and conservative American commentators Steve Bannon and Glenn Beck to advocate for a prohibition on AI “superintelligence” that poses a threat to humankind.
TL;DR
- Prince Harry and Meghan joined scientists and commentators to call for a ban on AI superintelligence development.
- The letter targets tech giants like Google, OpenAI, and Meta Platforms, urging caution and safety measures.
- Signatories include AI pioneers, business leaders, and political figures from diverse backgrounds.
- The group seeks broad scientific consensus and public buy-in before advancing superintelligence.
Released on Wednesday, the letter from a politically and geographically varied group of public figures specifically targets major tech companies such as Google, OpenAI, and Meta Platforms. These companies are racing to develop artificial intelligence capable of exceeding human performance across numerous activities.
The letter calls for a ban until certain conditions are met
The 30-word statement says:
“We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”
The letter begins by stating that AI technologies could usher in health and prosperity, but alongside these advancements, “many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”
Who signed and what they’re saying about it
Prince Harry included a personal message stating that “the future of AI should serve humanity, not replace it. I believe the true test of progress will be not how fast we move, but how wisely we steer. There is no second chance.”
The Duke of Sussex was joined in signing by his wife, Meghan, the Duchess of Sussex.
Another signatory, Stuart Russell, a computer science professor and AI pioneer at the University of California, Berkeley, wrote: “This is not a ban or even a moratorium in the usual sense. It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?”
AI pioneers Yoshua Bengio and Geoffrey Hinton, co-winners of the Turing Award, computer science’s highest honor, also signed. Hinton also received the Nobel Prize in physics last year. Both have actively highlighted the risks associated with a technology they were instrumental in developing.
However, the roster features unexpected names like Bannon and Beck, reflecting an effort by the letter's proponents at the nonprofit Future of Life Institute to connect with President Donald Trump's Make America Great Again agenda, despite the Trump administration's efforts to ease restrictions on AI advancement within the U.S.
Also featured are Apple co-founder Steve Wozniak; British magnate Richard Branson; former U.S. Joint Chiefs of Staff Chairman Mike Mullen, who held the position under both Republican and Democratic presidencies; and Democratic foreign policy expert Susan Rice, who served as national security adviser to President Barack Obama.
Several British and European parliamentarians, along with former Irish President Mary Robinson, signed the document. Actors Stephen Fry and Joseph Gordon-Levitt, and musician will.i.am, who has otherwise embraced AI in music creation, also lent their names.
“Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc.,” wrote Gordon-Levitt, whose wife Tasha McCauley served on OpenAI’s board of directors before the upheaval that led to CEO Sam Altman’s temporary ouster in 2023. “But does AI also need to imitate humans, groom our kids, turn us all into slop junkies and make zillions of dollars serving ads? Most people don’t want that.”
Are worries about AI superintelligence also feeding AI hype?
The letter is expected to spark continued discussion within the AI research field concerning the probability of AI surpassing human intelligence, the technical routes to achieving it, and its potential dangers.
“In the past, it’s mostly been the nerds versus the nerds,” said Max Tegmark, president of the Future of Life Institute and a professor at the Massachusetts Institute of Technology. “I feel what we’re really seeing here is how the criticism has gone very mainstream.”
The wider discussions are made more complex because the very companies pursuing what some term superintelligence and others refer to as artificial general intelligence, or AGI, also occasionally exaggerate their products' abilities. This can boost marketability and has fueled worries about an AI bubble. Mathematicians and AI scientists recently mocked OpenAI after one of its researchers asserted that ChatGPT had solved complex mathematical problems, when in reality, it had merely located and compiled existing online information.
“There’s a ton of stuff that’s overhyped and you need to be careful as an investor, but that doesn’t change the fact that — zooming out — AI has gone much faster in the last four years than most people predicted,” Tegmark said.
Tegmark's team also initiated a letter in March 2023, during the early stages of the commercial AI surge, urging major tech firms to halt the creation of more advanced AI systems for a period. This plea went unheeded by the leading AI corporations. Notably, Elon Musk, the most recognized name on the 2023 letter, was simultaneously establishing his own AI venture, aiming to rival the very companies he had urged to observe a six-month moratorium.
Asked if he reached out to Musk again this time, Tegmark said he wrote to the CEOs of all major AI developers in the U.S. but didn’t expect them to sign.
“I really empathize for them, frankly, because they’re so stuck in this race to the bottom that they just feel an irresistible pressure to keep going and not get overtaken by the other guy,” Tegmark said. “I think that’s why it’s so important to stigmatize the race to superintelligence, to the point where the U.S. Government just steps in.”
