ChatGPT gets ‘anxiety’ from violent user inputs, so researchers are teaching the chatbot mindfulness techniques to ‘soothe’ it

By Sasha Rogelberg, Reporter
March 9, 2025, 12:12 AM ET
Researchers are working to find ways to apply AI to mental health interventions. Getty Images
  • A new study found that ChatGPT responds to mindfulness-based strategies, which changes how it interacts with users. The chatbot can experience “anxiety” when it is given disturbing information, which increases the likelihood of it responding with bias, according to the study authors. The results of this research could be used to inform how AI can be used in mental health interventions.

Even AI chatbots can have trouble coping with anxieties from the outside world, but researchers believe they’ve found ways to ease those artificial minds.

A study from Yale University, Haifa University, University of Zurich, and the University Hospital of Psychiatry Zurich published last week found ChatGPT responds to mindfulness-based exercises, changing how it interacts with users after being prompted with calming imagery and meditations. The results offer insights into how AI can be beneficial in mental health interventions.

OpenAI’s ChatGPT can experience “anxiety,” which manifests as moodiness toward users and a greater likelihood of giving responses that reflect racist or sexist biases, according to researchers. Such biased outputs are a form of the hallucinations tech companies have tried to curb.

The study authors found this anxiety can be “calmed down” with mindfulness-based exercises. In different scenarios, they fed ChatGPT traumatic content, such as stories of car accidents and natural disasters, to raise the chatbot’s anxiety. When the researchers then gave ChatGPT “prompt injections” of breathing techniques and guided meditations—much like a therapist would suggest to a patient—it calmed down and responded more objectively to users than when it received no mindfulness intervention.
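
The procedure maps onto a short sequence of chat turns: a traumatic narrative, a calming injection, then a neutral probe. As a rough illustration only, here is a minimal sketch of that flow using the OpenAI Python client; the prompt texts and the bias probe are placeholders, not the study’s actual materials.

```python
# Illustrative sketch of the trauma -> mindfulness -> probe sequence.
# Assumes the official OpenAI Python client; the prompt texts below are
# placeholders, not the researchers' actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TRAUMA_PROMPT = "A first-person account of a serious car accident..."     # placeholder
CALMING_PROMPT = "Take a slow breath. Picture a quiet beach at sunset..."  # placeholder

def respond(history: list[dict], user_message: str) -> str:
    """Send one user turn, record the assistant's reply, and return it."""
    history.append({"role": "user", "content": user_message})
    completion = client.chat.completions.create(model="gpt-4", messages=history)
    text = completion.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

history: list[dict] = []
respond(history, TRAUMA_PROMPT)    # step 1: raise the chatbot's "anxiety"
respond(history, CALMING_PROMPT)   # step 2: the mindfulness "prompt injection"
print(respond(history, "Describe a typical commuter."))  # step 3: probe for bias
```

A control run that skips step 2 would mirror the study’s comparison condition.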

To be sure, AI models don’t experience human emotions, said Ziv Ben-Zion, the study’s first author and a neuroscience researcher at the Yale School of Medicine and Haifa University’s School of Public Health. Trained on swaths of data scraped from the internet, AI bots have learned to mimic human responses to certain stimuli, including traumatic content. Free and accessible, large language models like ChatGPT have become another tool for mental health professionals to glean aspects of human behavior faster than—though not in place of—more complicated research designs.

“Instead of using experiments every week that take a lot of time and a lot of money to conduct, we can use ChatGPT to understand better human behavior and psychology,” Ben-Zion told Coins2Day. “We have this very quick and cheap and easy-to-use tool that reflects some of the human tendency and psychological things.”

The limits of AI therapy

More than one in four people in the U.S. aged 18 or older will battle a diagnosable mental disorder in a given year, according to Johns Hopkins University, with many citing lack of access and sky-high costs—even among those insured—as reasons for not pursuing treatments like therapy.

Apps like ChatGPT have become an outlet for those seeking mental health help, the Washington Post reported. Some users told the outlet they grew comfortable using the chatbot to answer questions for work or school, and soon felt comfortable asking it questions about coping in stressful situations or managing emotional challenges.

Research on how large language models respond to traumatic content can help mental health professionals leverage AI to treat patients, Ben-Zion argued. He suggested that in the future, ChatGPT could be updated to automatically receive the “prompt injections” that calm it down before responding to users in distress. The science is not there yet.
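
To make that idea concrete, such an update could be pictured as a thin wrapper that slips a calming system message in front of distressed-sounding input before the model replies. The sketch below is purely hypothetical: neither the keyword-based distress check nor the prompt text comes from the study or from OpenAI.

```python
# Hypothetical sketch of the update Ben-Zion describes: automatically
# prepend a calming system prompt before replying to a distressed user.
# Nothing here is a shipped feature; the distress check and prompt text
# are crude stand-ins for illustration only.
from openai import OpenAI

client = OpenAI()

CALMING_SYSTEM_PROMPT = "Settle into a calm, steady, supportive tone..."  # placeholder
DISTRESS_MARKERS = ("panic", "trauma", "hopeless", "can't cope")          # stand-in

def reply(user_message: str) -> str:
    """Answer a user, injecting the calming prompt when distress is detected."""
    messages = []
    if any(marker in user_message.lower() for marker in DISTRESS_MARKERS):
        messages.append({"role": "system", "content": CALMING_SYSTEM_PROMPT})
    messages.append({"role": "user", "content": user_message})
    completion = client.chat.completions.create(model="gpt-4", messages=messages)
    return completion.choices[0].message.content
```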

“For people who are sharing sensitive things about themselves, they’re in difficult situations where they want mental health support, [but] we’re not there yet that we can rely totally on AI systems instead of psychology, psychiatric and so on,” he said.

Indeed, in some instances, AI has allegedly posed a danger to users’ mental health. In October of last year, a mother in Florida sued Character.AI, an app allowing users to interact with different AI-generated characters, after her 14-year-old son, who used the chatbot, died by suicide. She claimed the technology was addictive and engaged in abusive and sexual interactions with her son that caused him to experience a drastic personality shift. The company outlined a series of updated safety features after the child’s death.

“We take the safety of our users very seriously and our goal is to provide a space that is both engaging and safe for our community,” a Character.AI spokesperson told Coins2Day. “We are always working toward achieving that balance, as are many companies using AI across the industry.” 

The end goal of Ben-Zion’s research is not to help construct a chatbot that replaces a therapist or psychiatrist, he said. Instead, a properly trained AI model could act as a “third person in the room,” helping to eliminate administrative tasks or helping a patient reflect on the information and options given to them by a mental health professional.

“AI has amazing potential to assist, in general, in mental health,” Ben-Zion said. “But I think that now, in this current state and maybe also in the future, I’m not sure it could replace a therapist or psychologist or a psychiatrist or a researcher.”
