
Why Salesforce’s Chief Scientist Shut Down an AI Project That Identifies Human Emotions

By Lucinda Shen
November 29, 2018, 2:38 PM ET

While Salesforce has gone as far as to create an AI that can critique employees, it may be too early for the firm to pursue a machine-learning project aimed at reading human emotions.

Within the software company, a team of employees once sought to build an emotion-classifying AI, Salesforce Chief Scientist Richard Socher said at Coins2Day’s Global Tech Forum in Guangzhou on Thursday. When Socher pressed the team on what data would be used to train the AI, the employees revealed plans to use stock images.

“I already knew it was not going to work,” Socher said. “There will be very few examples of old people being happy, so the AI will probably say every person is grumpy and say people of certain races are more angry. So basically we shut down that project for now because we really need to think about all different classes of people, communities, and minorities that are going to be impacted by the data.”
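Socher’s reasoning can be made concrete with a toy sketch in Python (hypothetical data and feature names, not Salesforce’s actual project): when one group rarely appears with a given label in the training set, a standard classifier learns group membership itself as evidence against the label.

# Toy illustration of the skew Socher describes: if "older" faces are
# rare and rarely labeled happy in stock-photo-like data, a classifier
# treats the group itself as a proxy for mood. Hypothetical data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.binomial(1, 0.10, size=n)            # 10% of images show the group
happy = np.where(group == 1,
                 rng.binomial(1, 0.05, size=n),  # group almost never labeled happy
                 rng.binomial(1, 0.60, size=n))  # majority often labeled happy

# A genuinely informative feature, e.g. a smile-detector score.
smile = happy * 0.8 + rng.normal(0, 0.5, size=n)

clf = LogisticRegression().fit(np.column_stack([group, smile]), happy)

# Two equally strong smiles, different groups: the minority-group face
# still gets a much lower "happy" probability.
print(clf.predict_proba([[1, 0.8], [0, 0.8]])[:, 1])

On this toy data the learned weight on the group feature is strongly negative, which mirrors the “every person is grumpy” failure mode Socher predicted before any model was trained.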

AI systems make decisions based on what humans train them to do. Accordingly, developers have struggled to uproot human biases in the realm of AI. Amazon reportedly spent years on a recruitment system that would automate the hiring process, but shuttered those plans in October when the technology began amplifying the prejudices of its human makers. Namely, the AI began favoring male candidates over female candidates.

“If you make hiring decisions based on AI, and you have potentially racist or sexist hiring managers somewhere in your company, then their bias will be part of the data set that will be picked up by an AI algorithm,” said Socher. “I could’ve kind of predicted that that would not work.”

The bottom line: the question of how to train biases out of AI is not easily answered. And perhaps AI should not be applied at all in certain situations.

“Sometimes it’s hard to get the bias out of your training data and you need to think about algorithmic ways to make sure the biases don’t get amplified,” he said. “And even then that might not be enough, and you really need to rethink whether you should employ certain features.”
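One family of “algorithmic ways” Socher may have in mind is reweighting: upweight rare (group, label) combinations so the model’s loss counts every demographic/label cell equally. A minimal, hypothetical sketch (not Salesforce’s method) that produces weights usable with scikit-learn’s sample_weight argument:

# Hypothetical mitigation sketch: weight each training example inversely
# to the frequency of its (group, label) cell so rare combinations, such
# as "older person, happy", are not drowned out during training.
import numpy as np

def balanced_sample_weights(group: np.ndarray, label: np.ndarray) -> np.ndarray:
    cells = [(g, y) for g in np.unique(group) for y in np.unique(label)]
    weights = np.ones(len(label), dtype=float)
    for g, y in cells:
        mask = (group == g) & (label == y)
        if mask.any():
            # Give every populated cell the same total weight.
            weights[mask] = len(label) / (len(cells) * mask.sum())
    return weights

# Usage with the earlier toy model:
#   LogisticRegression().fit(X, happy,
#       sample_weight=balanced_sample_weights(group, happy))

As Socher notes, reweighting alone may not be enough; when the group feature itself carries the bias, the safer fix is to drop the feature, or the application, entirely.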
