
Former OpenAI board member reveals why Sam Altman was fired in bombshell interview—‘we learned about ChatGPT on Twitter’

By Christiaan Hetzner, Senior Reporter
May 29, 2024, 10:59 AM ET
Former OpenAI nonprofit board member Helen Toner took aim at Sam Altman, whom she briefly ousted as CEO last November. Jerod Harris—Getty Images for Vox Media

One of the ringleaders behind the brief, spectacular, but ultimately unsuccessful coup to overthrow Sam Altman accused the OpenAI boss of repeated dishonesty in a bombshell interview that marked her first extensive remarks since November’s whirlwind events.


Helen Toner, an AI policy expert from Georgetown University, sat on the nonprofit board that controlled OpenAI from 2021 until she resigned late last year following her role in ousting Altman. After staff threatened to leave en masse, he returned empowered by a new board with only Quora CEO Adam D’Angelo remaining from the original four plotters. 

Toner disputed speculation that she and her colleagues on the board had been frightened by a technological advancement. Instead she blamed the coup on a pronounced pattern of dishonest behavior by Altman that gradually eroded trust as key decisions were not shared in advance.   

“For years, Sam had made it very difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board,” she told The TED AI Show in remarks published on Tuesday.

Even the very launch of ChatGPT, which sparked the generative AI frenzy when it debuted in November 2022, was withheld from the board, according to Toner. “We learned about ChatGPT on Twitter,” she said.

Sharing this, recorded a few weeks ago. Most of the episode is about AI policy more broadly, but this was my first longform interview since the OpenAI investigation closed, so we also talked a bit about November.

Thanks to @bilawalsidhu for a fun conversation! https://t.co/h0PtK06T0K

— Helen Toner (@hlntnr) May 28, 2024

Toner claimed Altman always had a convenient excuse at hand to downplay the board’s concerns, which is why the board took no action for so long.

“Sam could always come up with some kind of innocuous-sounding explanation of why it wasn’t a big deal, or it was misinterpreted or whatever,” she continued. “But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn’t believe things that Sam was telling us and that’s a completely unworkable place to be in as a board.”

OpenAI did not respond to a request by Coins2Day for comment.

Things ultimately came to a head, Toner said, after she co-published a paper in October of last year that cast Anthropic’s approach to AI safety in a better light than OpenAI’s, enraging Altman.

“The problem was that after the paper came out Sam started lying to other board members in order to try and push me off the board, so it was another example that just like really damaged our ability to trust him,” she continued, adding that the behavior coincided with discussions in which the board was “already talking pretty seriously about whether we needed to fire him.”

But over the past years, safety culture and processes have taken a backseat to shiny products.

— Jan Leike (@janleike) May 17, 2024

Taken in isolation, those and other disparaging remarks Toner leveled at Altman could be downplayed as sour grapes from the ringleader of a failed coup. The pattern of dishonesty she described comes, however, on the heels of similarly damaging accusations from a former senior AI safety researcher, Jan Leike, as well as Scarlett Johansson.

Attempts to self-regulate doomed to fail

The Hollywood actress said Altman approached her with a request to use her voice for OpenAI’s latest flagship product—a ChatGPT voice bot that users can converse with, reminiscent of the fictional character Johansson played in the movie Her. After she refused, she suspects, he may have blended in elements of her voice anyway, violating her wishes. The company disputes her claims but agreed to pause the voice’s use regardless.

We’re really grateful to Jan for everything he's done for OpenAI, and we know he'll continue to contribute to the mission from outside. In light of the questions his departure has raised, we wanted to explain a bit about how we think about our overall strategy.

First, we have… https://t.co/djlcqEiLLN

— Greg Brockman (@gdb) May 18, 2024

Leike, on the other hand, served as joint head of the team responsible for creating guardrails that ensure mankind can control hyperintelligent AI. He left this month, saying it had become clear to him that management had no intention of diverting valuable resources to his team as promised, leaving a scathing rebuke of his former employer behind in his wake. (On Tuesday he joined the same OpenAI rival Toner had praised in October, Anthropic.)

Once key members of its AI safety staff had scattered to the wind, OpenAI disbanded the team entirely, unifying control in the hands of Altman and his allies. Whether those in charge of maximizing financial results are best entrusted with implementing guardrails that may prove a commercial hindrance remains to be seen.

Although certain staffers were having their doubts, few outside of Leike chose to speak up. Thanks to reporting by Vox earlier this month, it emerged that a key factor behind that silence was an unusual nondisparagement clause that, if broken, would void an employee’s vested equity in perhaps the hottest startup in the world.

When I left @OpenAI a little over a year ago, I signed a non-disparagement agreement, with non-disclosure about the agreement itself, for no other reason than to avoid losing my vested equity. (Thread)

— Jacob Hilton (@JacobHHilton) May 24, 2024

This followed earlier statements by former OpenAI safety researcher Daniel Kokotajlo that he voluntarily sacrificed his share of equity in order not to be bound by the exit agreement. Altman later confirmed the validity of the claims.

“Although we never clawed anything back, it should never have been something we had in any documents or communication,” he posted earlier this month. “This is on me and one of the few times I’ve been genuinely embarrassed running OpenAI; I did not know this was happening and I should have.”

Toner’s comments come fresh on the heels of her op-ed in the Economist, in which she and former OpenAI director Tasha McCauley argued that, as the evidence showed, no AI company could be trusted to regulate itself.

In regards to recent stuff about how openai handles equity:

we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). Vested equity is vested equity, full stop.

There was…

— Sam Altman (@sama) May 18, 2024

“If any company could have successfully governed itself while safely and ethically developing advanced AI systems it would have been OpenAI,” they wrote. “Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives.”

Christiaan Hetzner is a former writer for Coins2Day, where he covered Europe’s changing business landscape.
