
Microsoft may limit how long people can talk to its ChatGPT-powered Bing because the A.I. bot gets emotional if it works for too long

By Prarthana Prakash, Europe Business News Reporter
February 17, 2023, 5:25 AM ET
Microsoft is considering placing limits on its A.I.-powered search engine, Bing. Jason Redmond—AFP/Getty Images

Microsoft’s A.I.-powered chatbot, Bing, is just over a week old, and users think it’s getting a bit moody. Media outlets report that Bing’s A.I. is responding to prompts with human-like emotions of anger, fear, frustration, and confusion. In one such exchange reported by the Washington Post, the bot said it felt “betrayed and angry” when the user identified as a journalist after asking Bing A.I. several questions.

It turns out that Bing’s A.I., if you talk to it long enough, can start to get a bit testy. 

Microsoft is considering imposing limits on how long people can interact with Bing’s A.I., reports the New York Times, closing off conversations before the chatbot gets confused and starts responding to the user’s tone. The company is also considering other guardrails to stop the program from giving strange and unnerving answers.

Some other features that the Redmond, Wash.–based company is experimenting with include allowing users to restart conversations and customizing the tone of the interaction, according to the New York Times.

“One area where we are learning a new use case for chat is how people are using it as a tool for more general discovery of the world, and for social entertainment,” Microsoft said in a statement Wednesday. The company said it didn’t “entirely envision” such uses for the chatbot.

Users are reporting other mistakes made by Bing’s A.I., including instances of it referring to future events in the past tense, failing to answer basic questions about the current year, and giving incorrect answers to financial calculations.

“We’ve updated the service several times in response to user feedback, and per our blog are addressing many of the concerns being raised, to include the questions about long-running conversations,” a spokesperson from Microsoft told Coins2Day, adding that 90% of conversations with the Bing chatbot have been less than 15 messages.

Microsoft’s earlier experiments with chatbots were also mired in controversy. In 2016, the company introduced a chatbot called Tay. Microsoft withdrew the tool within days of launching it, after the bot spewed offensive language and racist bile when users played with it.

Bing A.I., which is powered by OpenAI, the company behind the much-talked-about ChatGPT, was launched last week as a new and improved version of Microsoft’s search engine. The announcement came just days after Google unveiled its chatbot, Bard. Both Google and Microsoft have since been called out for featuring factual errors in their A.I. demos.

ChatGPT, the generative A.I. tool that launched late last November, went viral as people experimented with using it for tasks like speech writing and test-taking. But it hasn’t been free of errors. Users have caught it producing biased or poorly sourced responses to their questions. Tech leaders have sounded alarm bells about the mistakes these bots can make, and how interactions with ChatGPT-like platforms could yield “convincing but completely fictitious answers.”

In response to those concerns, OpenAI, which will receive a $10 billion investment from Microsoft, announced that it was upgrading ChatGPT so that users could customize it to curb its biases.

“This will mean allowing system outputs that other people (ourselves included) may strongly disagree with,” the startup said in a statement Thursday. The customization will involve “striking the right balance” between allowing users to adjust the chatbot’s behavior and staying within the system’s limits and moderations.

“In some cases ChatGPT currently refuses outputs that it shouldn’t, and in some cases, it doesn’t refuse when it should,” OpenAI wrote.

Update, February 17, 2023: This article has been updated with a statement from Microsoft.
