
Microsoft’s Chatbot Was Fun for a While, Until It Turned Into a Racist

By Mathew Ingram
March 24, 2016, 11:07 AM ET

Microsoft’s latest experiment in real-time machine learning, an AI-driven chatbot called Tay, quickly turned to the dark side on Wednesday after the bot started posting racist and sexist messages on Twitter in response to questions from users. Among other things, Tay said the Holocaust never happened, and used offensive terms to describe a prominent female game developer.

The company said on Thursday that it is working on fixing the problems that led to the offensive messages. “The AI chatbot Tay is a machine learning project, designed for human engagement,” Microsoft said in a statement sent to Business Insider. “As it learns, some of its responses are inappropriate. We’re making some adjustments.”

In one tweet that has since been deleted, the Tay bot said: “Bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we’ve got.”


In its initial pitch for Tay, Microsoft (MSFT) said that the bot was “designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets.”

At least part of the problem seemed to be that Tay—much like earlier chatbots, including the pioneering SmarterChild bot of the early 2000s—was designed to repeat statements made by other users as a way of engaging them in conversation. But the company apparently didn’t implement any automated filters on specific terms, including racist labels and other common expletives.

Artificial intelligence expert Azeem Azhar told Business Insider that Microsoft could have taken a number of steps to avoid what happened with the Tay bot. “It wouldn’t have been too hard to create a blacklist of terms; or narrow the scope of replies. They could also have simply manually moderated Tay for the first few days, even if that had meant slower responses.”
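Azhar’s first suggestion is easy to picture in code. Below is a minimal sketch, in Python, of a term blacklist gating an echo-style bot; the placeholder terms, function names, and matching logic are illustrative assumptions, not anything Microsoft actually shipped.

```python
import re

# Illustrative placeholders; a real blacklist would be far larger and
# maintained outside the code. (Stand-ins are used here instead of slurs.)
BLACKLISTED_TERMS = {"slur1", "slur2", "expletive1"}

# Compile one case-insensitive pattern with word boundaries, so a
# blacklisted term inside a longer innocent word is not a false positive.
_BLACKLIST_RE = re.compile(
    r"\b(" + "|".join(re.escape(t) for t in BLACKLISTED_TERMS) + r")\b",
    re.IGNORECASE,
)

def is_safe(text: str) -> bool:
    """Return False if the text contains any blacklisted term."""
    return _BLACKLIST_RE.search(text) is None

def reply(user_message: str) -> str | None:
    """Echo-style reply with a safety gate on both input and output.

    A 'repeat after me' bot with no such gate will parrot whatever it
    is fed, which is roughly what happened to Tay.
    """
    if not is_safe(user_message):
        return None  # or queue for human review, per Azhar's moderation idea
    candidate = user_message  # naive echo behavior, for illustration only
    return candidate if is_safe(candidate) else None
```

Even a gate this crude, applied to both what the bot reads and what it is about to post, would have blocked the worst verbatim repeats; Azhar’s other suggestions, narrowing the scope of replies and manually moderating the first few days, trade responsiveness for safety in the same spirit.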

Zoe Quinn, a game developer who has been the target of significant amounts of online abuse as a result of the “GamerGate” controversy over sexism in the gaming industry, posted a screenshot of a tweet from Tay that referred to her as a “whore.” Many of the offensive tweets posted by the bot have been removed by Microsoft, but screenshots of many are still circulating.

Wow it only took them hours to ruin this bot for me.

This is the problem with content-neutral algorithms pic.twitter.com/hPlINtVw0V

— zoë “Baddie Proctor” quinn (@UnburntWitch) March 24, 2016

Microsoft’s experiment with Tay is part of a broader shift toward chat and messaging applications, which many technology analysts believe will become one of the primary interfaces for consumer technology and services in the future.

The software company’s mistakes with Tay, however, show that using simple AI in such services can have an obvious downside, especially when a bot is opened up to Twitter and other social networks. And Tay is hardly the first example of this: Last year, after Coca-Cola launched a bot designed to retweet messages from fans, Gawker Media managed to get the bot to retweet excerpts from Hitler’s Mein Kampf.

If nothing else, Microsoft and anyone watching the Tay project have learned one thing: how quickly a well-intentioned AI experiment can go south when exposed to Twitter and the social web.

It's 2016. If you're not asking yourself "how could this be used to hurt someone" in your design/engineering process, you've failed.

— zoë “Baddie Proctor” quinn (@UnburntWitch) March 24, 2016

https://twitter.com/oliverbcampbell/status/713006646620856320

How did Microsoft manage to release Tay without thinking about this problem? Did they think at all? Talk to anyone? https://t.co/RjLV4AuS3Y

— Jeremy B. Merrill (@jeremybmerrill) March 24, 2016
