Why Robots Could Soon Be Sexist

By Michael Litt and Bethany Cianciolo
October 18, 2017, 4:06 PM ET

Google the word “doctor,” and you’ll see thousands of pictures of men. If you’re a woman looking for a job, you’re less likely than your male counterparts to see targeted ads for high-paying roles. And if you had asked Siri, when it first launched, “Where can I find emergency contraception?” she wouldn’t have known what to tell you.

All of these results are powered, in one form or another, by what we call artificial intelligence: complex algorithms that learn from huge data sets, then draw their own conclusions. An aura of objectivity and neutrality has traditionally surrounded AI. But the reality is that it’s built and programmed by humans, who are far from perfect, and it “learns” from human behavior. When the community of programmers is predominantly male (and, more specifically, predominantly white and male), we can wind up, intentionally or not, with systems that replicate unconscious bias.

There are countless examples of how bias has infected AI, with unfortunate results: a chatbot that turned anti-Semitic within 24 hours of launching; crime-prevention software that turned out to be biased against African Americans; Nikon cameras that mistakenly flagged Asian faces as blinking.

And this is just the beginning. In the years ahead, AI will grow in sophistication and expand across industries, becoming nearly ubiquitous. The tech industry has slowly begun to recognize the impact of a lack of diversity inside its offices. It’s time to acknowledge those same influences in our software.

So how can we begin to correct course? It can start with a name. It may seem innocuous, but how we name AI, or whether we choose to name it at all, matters. Giving virtual assistants female names or voices has become the default (just look at Alexa, Cortana, Bixby, or Siri in North America), but there’s no practical reason to do so; it only perpetuates the stereotype of women as chipper, helpful assistants. Fortunately, the tide is starting to turn: Google declined to give its “OK Google” virtual assistant a name at all.

Equally important is ensuring a diverse data set from day one of programming. An AI system learns from a training set: a batch of photos, a database, or a collection of relevant numbers that lays the groundwork for its functionality. If that training set is skewed in some way, that skew is what the AI learns as normal; what it spits out is a reflection of the data that was put in. One real-world example we’re already struggling with is health care AI that misdiagnoses patients because it was trained on a standard of white male symptoms.
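
To make that concrete, here’s a minimal sketch in Python using entirely hypothetical symptom data. The “model” is nothing more than a most-common-label lookup, but it exhibits the same failure mode the health care example describes: the majority group’s presentation becomes the default diagnosis for everyone.

```python
from collections import Counter

# Hypothetical training records: (reported_symptoms, true_diagnosis).
# Heart attacks in this data come overwhelmingly from male patients,
# who tend to present with chest pain; the far smaller number of
# female patients present with nausea and fatigue instead.
training_data = (
    [("chest_pain", "heart_attack")] * 90       # mostly male patients
    + [("nausea_fatigue", "heart_attack")] * 5  # underrepresented presentation
    + [("nausea_fatigue", "indigestion")] * 20  # general population
)

def train(records):
    """Learn the most common diagnosis for each symptom pattern."""
    by_symptom = {}
    for symptoms, diagnosis in records:
        by_symptom.setdefault(symptoms, Counter())[diagnosis] += 1
    return {s: c.most_common(1)[0][0] for s, c in by_symptom.items()}

model = train(training_data)
print(model["chest_pain"])      # -> "heart_attack"
print(model["nausea_fatigue"])  # -> "indigestion": the skewed data
                                #    buried the minority presentation
```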

Vigilance by consumers is also critical. Watchdog organizations like AI Now are already popping up to start the fight. In the future, a community policing model could make a difference at the grassroots level, giving users creative ways to find problems and report them, as could internal auditing. In fact, special positions like bias detectors and algorithm analysts might one day be standard at every company.
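
As a rough sketch of what such an internal audit might look like, the Python below compares a system’s positive-outcome rates across groups and flags any group falling under 80% of the highest rate, borrowing the “four-fifths rule” used in U.S. employment discrimination analysis. The ad-targeting numbers are made up for illustration.

```python
def audit_selection_rates(decisions, threshold=0.8):
    """decisions: list of (group, got_positive_outcome) pairs.
    Flags any group whose rate falls below `threshold` times the
    highest group's rate."""
    totals, positives = {}, {}
    for group, positive in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(positive)
    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * best}
    return rates, flagged

# Hypothetical ad-targeting log: (group, was_shown_high_paying_ad)
log = ([("men", True)] * 60 + [("men", False)] * 40
       + [("women", True)] * 30 + [("women", False)] * 70)
rates, flagged = audit_selection_rates(log)
print(rates)    # {'men': 0.6, 'women': 0.3}
print(flagged)  # {'women': 0.3}: below 80% of the highest rate
```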

Ultimately, however, reducing bias in AI comes down to something as obvious as it is hard to achieve: having a diverse team building AI. Yes, there’s currently an underrepresentation of women in AI (and in STEM and IT in general), but it’s certainly possible to cultivate diverse teams, provided the right strategies are put in place.

While this may require more upfront energy during recruiting, the payoff is enormous (culturally, financially, and otherwise). With a diverse representation of gender (and, ideally, education, age, race, and other factors), it’s possible to naturally neutralize biases that you might not even know to look for, and bring a critical eye to the rest.

Michael Litt is cofounder and CEO of the video marketing platform Vidyard. Follow him on Twitter at @michaellitt.
