
Tech companies have agreed to an AI ‘kill switch’ to prevent Terminator-style risks

By Dylan Sloan
May 21, 2024, 2:33 PM ET
U.K. Prime Minister Rishi Sunak was one of a number of officials and AI executives who agreed to new commitments regarding responsible AI development at a summit in Seoul on Tuesday. Carl Court—Getty Images

There’s no stuffing AI back inside Pandora’s box—but the world’s largest AI companies are voluntarily working with governments to address the biggest fears around the technology and calm concerns that unchecked AI development could lead to sci-fi scenarios in which the AI turns against its creators. Without strict legal provisions giving those commitments teeth, though, the conversations will only go so far.


This morning, 16 influential AI companies, including Anthropic, Microsoft, and OpenAI, along with 10 countries and the EU, met at a summit in Seoul to set guidelines around responsible AI development. One of the big outcomes of the summit was the attending AI companies agreeing to a so-called kill switch: a policy under which they would halt development of their most advanced AI models if those models were deemed to have passed certain risk thresholds. Yet it’s unclear how effective the policy could actually be, given that the agreement carries no legal weight and defines no specific risk thresholds. AI companies that did not attend, including competitors of the signatories, would not be subject to the pledge.

“In the extreme, organizations commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds,” read the policy paper the AI companies, including Amazon, Google, and Samsung, signed on to. The summit was a follow-up to last October’s Bletchley Park AI Safety Summit, which featured a similar lineup of AI developers and was criticized as “worthy but toothless” for its lack of actionable, near-term commitments to keep humanity safe from the proliferation of AI.

Following that earlier summit, a group of participants wrote an open letter criticizing the forum’s lack of formal rulemaking and AI companies’ outsize role in pushing for regulations in their own industry. “Experience has shown that the best way to tackle these harms is with enforceable regulatory mandates, not self-regulatory or voluntary measures,” reads the letter.

Writers and researchers have warned of the risks of powerful artificial intelligence for decades, first in science fiction and now in real life. One of the most recognized references is the “Terminator scenario”: the theory that, left unchecked, AI could become more powerful than its human creators and turn on them. The theory gets its name from The Terminator, the 1984 Arnold Schwarzenegger film in which a cyborg travels back in time to kill a woman whose unborn son will one day fight an AI system slated to spark a nuclear holocaust.

“AI presents immense opportunities to transform our economy and solve our greatest challenges, but I have always been clear that this full potential can only be unlocked if we are able to grip the risks posed by this rapidly evolving, complex technology,” U.K. Technology Secretary Michelle Donelan said.

AI companies themselves recognize that their most advanced offerings wade into uncharted technological and moral waters. OpenAI CEO Sam Altman has said that artificial general intelligence (AGI), which he defines as AI that exceeds human intelligence, is “coming soon” and comes with risks attached.

“AGI would also come with serious risk of misuse, drastic accidents, and societal disruption,” reads an OpenAI blog post. “Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”

But so far, efforts to assemble global regulatory frameworks around AI have been scattered and have mostly lacked legislative authority. A UN policy framework asking countries to guard against AI risks to human rights, monitor personal data usage, and mitigate broader AI harms was unanimously approved in March, but it is nonbinding. And the Bletchley Declaration, the centerpiece of last October’s global AI summit in the U.K., contained no tangible commitments regarding regulation.

In the meantime, AI companies themselves have begun to form their own organizations pushing for AI policy. Yesterday, Amazon and Meta joined the Frontier Model Forum, an industry nonprofit “dedicated to advancing the safety of frontier AI models,” according to its website. They join founding members Anthropic, Google, Microsoft, and OpenAI. The nonprofit has yet to put forth any firm policy proposals.

Individual governments have been more successful: Executives lauded President Biden’s executive order on AI safety last October as “the first time where the government is ahead of things” for its inclusion of strict legal requirements that go beyond the vague commitments outlined in other, similarly intentioned policies. Biden invoked the Defense Production Act, for example, to require AI companies to share safety test results with the government. The EU and China have also enacted formal policies dealing with topics such as copyright law and the harvesting of users’ personal data.

States have taken action, too: Colorado Gov. Jared Polis yesterday announced new legislation banning algorithmic discrimination in AI and requiring developers to share internal data with state regulators to ensure compliance.

This is far from the last chance for global AI regulation: France will host another summit early next year, following up on the meetings in Seoul and Bletchley Park. By then, participants say, they will have drawn up formal definitions of the risk thresholds that would require regulatory action, a big step forward for what has so far been a relatively timid process.
