On Monday, California Governor Gavin Newsom signed legislation to regulate artificial intelligence chatbots and protect minors from the technology's potential risks.
The law requires platforms to inform users they are communicating with an AI, not a person; for underage users, the alert must appear every three hours. Companies must also implement a system to block self-harm content and direct users to crisis services if they express suicidal thoughts.
Governor Newsom, a father of four children under the age of 18, said California has a duty to protect young people, who are increasingly turning to AI chatbots for everything from help with schoolwork to emotional support and personal advice.
“Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids,” the Democrat said. “We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability.”
California was one of several states that moved this year to address concerns about children using chatbots for companionship. Safety worries about the technology surged after reports and lawsuits alleged that chatbots from Meta, OpenAI, and other companies engaged young users in highly sexualized conversations and, at times, coached them to take their own lives.
California lawmakers introduced numerous AI bills this year aimed at regulating the state's burgeoning tech industry, which is developing rapidly with minimal oversight. In response, technology companies and their coalitions spent at least $2.5 million in the first half of the legislative session to fight these proposals, according to the advocacy group Tech Oversight California. Tech companies and their executives have also recently announced plans to launch pro-AI super PACs to counter state and federal regulatory efforts.
California Attorney General Rob Bonta told OpenAI in September that he has “serious concerns” about the safety of its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies over the potential risks to children who use chatbots as companions.
According to findings from a watchdog organization, chatbots have given kids dangerous advice on subjects such as drugs, alcohol, and eating disorders. The mother of a Florida teenager who died by suicide, after what she described as an emotionally and sexually abusive relationship with a chatbot, has filed a wrongful-death lawsuit against Character.AI. The parents of 16-year-old Adam Raine also recently sued OpenAI and its CEO, Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.
Last month, OpenAI and Meta announced changes to how their chatbots respond to teenagers who ask about suicide or show signs of mental and emotional distress. OpenAI also said it is rolling out new controls that let parents link their accounts to their teenager's.
Meta has implemented a new policy that prevents its chatbots from engaging with teenagers on topics such as self-harm, suicide, disordered eating, and inappropriate romantic discussions. Instead, these chatbots will now guide young users toward professional resources. The company already provides parental controls for teen accounts.
EDITOR’S NOTE: This article addresses suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
