IBM pulls out of facial recognition, fearing racial profiling and mass surveillance

By David Meyer
June 9, 2020, 6:28 AM ET

IBM has pulled out of the facial recognition game—the boldest move yet by a Big Tech firm to repudiate the discriminatory use of the technology.

In a letter to senators and members of Congress, the company announced late Monday that it would no longer offer general-purpose facial recognition or analysis software.

“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency,” wrote CEO Arvind Krishna.

IBM is not the only company in the space to see a need for caution in deploying facial recognition technology, particularly in contexts where its use might infringe on people’s rights.

In March, following controversy over a deployment on Israel’s border with the West Bank, Microsoft said it would no longer take minority stakes in companies selling facial recognition systems. Its venture arm said Microsoft would instead focus on “commercial relationships that afford Microsoft greater oversight and control over the use of sensitive technologies.” Microsoft chief legal officer Brad Smith has repeatedly called for greater regulation of the technology, and the company has, on at least one occasion, refused to sell its facial recognition system to a U.S. law enforcement agency.

Krishna’s letter was addressed to lawmakers including the Democrats who on Monday introduced a bill that would reform policing rules in order to combat misconduct and racial discrimination. The legislation was unveiled two weeks after the death of George Floyd, a Black man killed by a white police officer in Minneapolis.

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” Krishna wrote. “Artificial intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of A.I. systems have a shared responsibility to ensure that A.I. is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”

Krishna also backed elements of the new bill such as the creation of a federal registry of police misconduct and measures that increase police accountability—including, in his words, “modern data analytics techniques.”

Bias is a hot topic in the A.I. community, particularly regarding facial recognition. Researchers often find that such systems are more likely to misidentify people with darker skin, and activists say this fosters discrimination. This issue, along with privacy fears, led the University of California at Los Angeles to drop its campus-wide facial recognition plans a few months ago.

And as the activist and research organization Algorithmic Justice League noted last week, facial recognition’s use by police is a particularly live issue at this juncture—thanks to deployments at protests over police brutality.

“The use of facial recognition technology for surveillance…gives the police a powerful tool that amplifies the targeting of Black lives,” wrote the group’s Joy Buolamwini, Aaina Agarwal, Nicole Hughes, and Sasha Costanza-Chock. “Not only are Black lives more subject to unwarranted, rights-violating surveillance, they are also more subject to false identification, giving the government new tools to target and misidentify individuals in connection with protest-related incidents.”

IBM tried to combat the misidentification problem last year by releasing a data set containing a million diverse faces, in order to better train facial recognition systems. Its own systems have certainly come in for criticism over the years. Buolamwini, an MIT researcher, found in 2018 that IBM Watson’s facial recognition system had an error rate of 0.3% when identifying lighter-skinned male faces, but an error rate as high as 34.7% for darker-skinned female faces.
