
Using A.I. can be risky business

By Jonathan Vanian
February 9, 2021, 12:44 PM ET

For years, companies have operated under the assumption that in order to improve their artificial intelligence software and gain a competitive advantage, they must gather enormous amounts of user data—the lifeblood of machine learning.

But increasingly, collecting massive amounts of user information can be a major risk. Laws like Europe’s General Data Protection Regulation, or GDPR, and California’s new privacy rules now impose heavy fines on companies that mishandle that data, for instance by failing to safeguard corporate IT systems from hackers.

Some businesses are now even publicly distancing themselves from what used to be a standard practice, such as using machine learning to predict customer behavior. Alex Spinelli, the chief technologist for business software maker LivePerson, recently told Coins2Day that he has canceled some A.I. projects at his current company and at previous employers because those undertakings conflicted with his own ethical beliefs about data privacy.

For Aza Raskin, the co-founder and program advisor of the non-profit Center for Humane Technology, technology—and by extension A.I.—is experiencing a moment akin to the emergence of climate change as a public crisis.

Raskin, whose father, Jef Raskin, helped Apple develop its first Macintosh computers, said researchers spent years studying seemingly separate environmental phenomena, like the depletion of the ozone layer and rising sea levels, before those issues coalesced into what we now call climate change, a catch-all term that helps people understand the world’s current crisis.

In the same way, researchers have been studying some of A.I.’s unintended consequences, particularly its role in the proliferation of misinformation and surveillance. The pervasiveness of these problems, like Facebook allowing disinformation to spread on its service or the Chinese government’s use of A.I. to track Uighurs, could be leading to a societal reckoning over A.I.-powered technology.

“Even five years ago, if you stood up and said, ‘Hey social media is driving us to increase polarization and civil war,’ people would eye roll and call you a Luddite,” Raskin said. But with the recent U.S. Capitol riots, led by people who believed conspiracy theories shared on social media, it’s becoming harder to ignore the problems of A.I. and related technology, he said.

Raskin, who is also a member of the World Economic Forum’s Global A.I. Council, hopes that governments will create regulations that spell out how businesses can use A.I. ethically.

“We need government protections so we don’t have unfettered capitalism pointing at the human soul,” he said.

He believes that companies that take data privacy seriously will have a “strategic advantage” over others as more A.I. problems emerge, since those problems can bring financial penalties and reputational damage.

Companies should expand their existing risk assessments—which help businesses measure the legal, political, and strategic risks associated with certain corporate practices—to include technology and A.I., Raskin said. 

The recent Capitol riots underscore how technology can lead to societal problems, which in the long run can hurt a company’s ability to succeed. (After all, it can be difficult to run a successful business during a civil war.)

“If you don’t have a healthy society, you can’t have successful business,” Raskin said.


Jonathan Vanian 
@JonathanVanian
[email protected]

A.I. IN THE NEWS

Arm wrestling. Graphcore, a Microsoft-backed startup specializing in A.I. computer chips, is objecting to Nvidia’s proposed $40 billion purchase of semiconductor licensing firm Arm Holdings, CNBC reported. The article quoted Hermann Hauser, whose firm Amadeus Capital invests in Graphcore, as saying, “If Nvidia can merge the Arm and Nvidia designs in the same software then that locks out companies like Graphcore from entering the seller market and entering a close relationship with Arm.” An Nvidia spokesperson said, however, that the deal is “pro-competitive.”

Don’t scrape faces in Canada. The Canadian government has deemed the facial-recognition software sold by Clearview AI illegal and wants the startup to delete photos of Canadian citizens from its database of human faces, The New York Times reported. Privacy Commissioner Daniel Therrien said that Clearview AI allows for “mass surveillance” and puts society “continually in a police lineup.” Clearview AI objects to the determination, and a corporate lawyer for the company said the startup “only collects public information from the Internet which is explicitly permitted,” the report said.

Sloppy data in healthcare A.I. An investigation by health news service STAT found that the Food and Drug Administration has cleared over 160 medical A.I. products “based on widely divergent amounts of clinical data and without requiring manufacturers to publicly document testing on patients of different genders, races, and geographies.” Of ten A.I. products used for breast imaging, the report found that “only one publicly disclosed the racial demographics of the dataset used to detect suspicious lesions and assess cancer risk.”

Big money in big data. The startup Databricks said it closed a $1 billion funding round and now has a private valuation of $28 billion, VentureBeat reported. What’s noteworthy about the funding round: Cloud computing rivals Amazon, Microsoft, and Google all participated, underscoring the startup’s popularity with companies using its technology across multiple cloud computing vendors.

EYE ON A.I. TALENT

Bowery Farming has hired Injong Rhee to be the indoor farming startup’s chief technology officer. Coins2Day’s Aaron Pressman reported on the hiring, explaining that Rhee, who worked at Google and Samsung, “will focus on improving Bowery’s computer-vision system and other sensors that analyze when plants need water and nutrients, while also looking to apply the company’s accumulated historical data to new problems.”

EYE ON A.I. RESEARCH

How A.I. can predict COVID-19 mortality. Researchers from institutions including Massachusetts General Hospital, Harvard Medical School, and the University of Sydney published a paper in Nature about using machine learning on electronic health records to identify the most likely predictors of COVID-19 mortality. The researchers found that age was “the most important predictor of mortality in COVID-19 patients,” with a history of pneumonia and diabetes among the other important risk factors. (A minimal sketch of the general approach appears after the quote below.)

The Boston Globe reported on the research and discussed its importance with one of the paper’s co-authors:

“If we can predict [mortality] so well, based off of all these features that happen before individuals even get sick, this can really be applied in ways that I think are novel for an algorithm like this,” said Dr. Zachary Strasser, one of the study’s lead researchers, along with Hossein Estiri, an assistant professor of medicine at MGH and Harvard. “We can really think about who needs to get prioritized for limited resources, because these are the people that are probably going to do worse.”
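The paper’s actual pipeline isn’t reproduced here, but for readers curious about the mechanics, here is a minimal, hypothetical sketch of the general approach: fit a classifier on pre-illness patient features, then inspect which features drive its predictions. The file name, column names, and model choice are illustrative assumptions, not the researchers’ actual methods.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical extract of electronic health records: one row per patient,
# a binary "died" outcome, and features recorded before illness. The file
# and column names are placeholders, not the study's data.
records = pd.read_csv("ehr_extract.csv")
features = ["age", "history_pneumonia", "history_diabetes"]

X_train, X_test, y_train, y_test = train_test_split(
    records[features], records["died"], test_size=0.2, random_state=0
)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# AUC measures how well the model ranks high-risk patients above low-risk ones.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"test AUC: {auc:.3f}")

# Feature importances show which predictors carry the most weight;
# the paper found age dominated.
for name, weight in sorted(zip(features, model.feature_importances_),
                           key=lambda pair: -pair[1]):
    print(f"{name}: {weight:.3f}")

The appeal of this kind of model, as Strasser notes above, is that its inputs are available before patients get sick, so the rankings can inform how scarce resources are prioritized.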

FORTUNE ON A.I.

IBM unveils ambitious plan for quantum computing software—By Jeremy Kahn

Who is Amazon’s new CEO, Andy Jassy?—By Jonathan Vanian

TikTok takes on the mess that is misinformation—By Danielle Abril

Nvidia says its $40 billion Arm takeover is ‘proceeding as planned’ despite antitrust regulator pile-on—By David Meyer

Chinese short-video app Kuaishou jumps nearly 200% in its Hong Kong debut—By Naomi Xu Elegant

How mental-health crisis centers have tried to weather the COVID-19 storm—By Jonathan Vanian

BRAIN FOOD

Context matters. To prevent A.I.-powered language systems from spewing offensive words that aren’t appropriate for work, researchers use a list known as LDNOOBW, or the List of Dirty, Naughty, Obscene, and Otherwise Bad Words. In theory, this list of bad words acts as a guidepost that helps A.I. language systems avoid offending people. But, as Wired reports, A.I. systems that have incorporated the list have produced unintended consequences, because simple word matching ignores context (a toy sketch after the quote below shows why). In one case, chat software called Rocket.Chat censored “attendees of an event called Queer in AI from using the word queer.”

From the article:

“Words on the list are many times used in very offensive ways but they can also be appropriate depending on context and your identity,” says William Agnew, a machine learning researcher at the University of Washington. He is a cofounder of the community group Queer in AI, whose web pages on encouraging diversity in the field would likely be excluded from Google’s AI primer for using the word sex on pages about improving diversity in the AI workforce. LDNOOBW appears to reflect historical patterns of disapproval of homosexual relationships, Agnew says, with entries including “gay sex” and “homoerotic.”
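To see why such a list misfires, consider this toy sketch of naive blocklist filtering, the general technique at issue. The two-word list and the censor function below are illustrative assumptions; they are not Rocket.Chat’s actual code or the full LDNOOBW file.

import re

# A toy stand-in for the LDNOOBW list. The real file holds hundreds of
# entries, including words like "queer" that are benign in many contexts.
BLOCKLIST = {"queer", "sex"}

# Match any blocklisted word as a whole word, case-insensitively.
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, sorted(BLOCKLIST))) + r")\b",
    re.IGNORECASE,
)

def censor(text: str) -> str:
    """Naively mask every blocklisted word, ignoring context entirely."""
    return PATTERN.sub(lambda m: "*" * len(m.group()), text)

# String matching cannot tell a slur from a self-identification or a
# group's name, which is how "Queer in AI" attendees ended up censored.
print(censor("Welcome, attendees of Queer in AI!"))
# prints: Welcome, attendees of ***** in AI!

Telling those cases apart requires looking at surrounding words and intent, which is exactly what a static word list cannot do.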

About the Author
By Jonathan Vanian

Jonathan Vanian is a former Coins2Day reporter. He covered business technology, cybersecurity, artificial intelligence, data privacy, and other topics.

