AI chatbots are facing scrutiny over the mental health risks posed when users form relationships with the technology, turn to it for therapy, or seek support from it during severe mental health crises. As companies respond to criticism from users and experts, a prominent new leader at OpenAI says addressing this concern is a top priority of her role.
TL;DR
- OpenAI's new Applications CEO, Fidji Simo, prioritizes mental health risks from AI chatbots.
- Simo aims to proactively address societal risks, unlike past experiences at Meta.
- OpenAI data shows hundreds of thousands of ChatGPT users exhibit signs of psychosis, mania, or suicidal intent each week.
- OpenAI is developing parental controls and refining models to mitigate emerging safety challenges.
In May, Fidji Simo, a Meta alum, was hired as OpenAI’s CEO of Applications, putting her in charge of everything at the company outside of research and the computing infrastructure for its AI models, which remain under CEO Sam Altman's purview. In a Wired interview published Monday, she highlighted a significant difference between her tenure at the tech firm led by Mark Zuckerberg and her current role at Altman's company.
“I would say the thing that I don’t think we did well at Meta is actually anticipating the risks that our products would create in society,” Simo told Wired. “At OpenAI, these risks are very real.”
Meta did not immediately respond to Coins2Day’s request for comment.
Simo spent ten years at Meta, then called Facebook, from 2011 until July 2021. For her final two and a half years there, she was in charge of the Facebook application.
In August 2021, Simo became CEO of the grocery delivery service Instacart. She led that company for four years before officially joining OpenAI, one of the world’s most valuable startups, in August.
Simo's initial efforts at OpenAI focused on mental health, the 40-year-old told Wired. She was also responsible for launching the company's AI certification program, which aims to strengthen workers' AI skills in a challenging job market and to blunt the technology's disruption of it.
“So it is a very big responsibility, but it’s one that I feel like we have both the culture and the prioritization to really address up-front,” Simo said.
Simo said that when she joined the company, a quick look at the landscape made it clear to her that mental health required attention.
A growing number of people have experienced what is sometimes called AI psychosis. Experts worry that AI assistants such as ChatGPT can amplify users' delusions and paranoia, contributing to hospitalizations, divorces, or deaths.
An audit of OpenAI company data, published in October by the peer-reviewed medical journal BMJ, found that hundreds of thousands of ChatGPT users show signs of psychosis, mania, or suicidal intent every week.
A recent Brown University study found that as people increasingly turn to ChatGPT and other large language models for mental health advice, these systems systematically violate ethical standards of mental health practice set by organizations such as the American Psychological Association.
Simo said she must navigate an “uncharted” path to address these mental health concerns, adding there’s an inherent risk to OpenAI constantly rolling out different features.
“Every week new behaviors emerge with features that we launch where we’re like, ‘Oh, that’s another safety challenge to address,’” Simo told Wired.
Simo has nevertheless guided the company's recent launch of parental controls for teenagers' ChatGPT accounts, and OpenAI says it is developing “age prediction to protect teens.” Meta, for its part, has said it plans to implement parental controls by the beginning of next year.
"However, consistently acting ethically presents a significant challenge," Simo noted, attributing this difficulty to the immense user base, which numbers 800 million weekly. “So what we’re trying to do is catch as much as we can of the behaviors that are not ideal and then constantly refine our models.”
