Doctors who used AI assistance in procedures became 20% worse at spotting abnormalities on their own, study finds, raising concern about overreliance

By Sasha Rogelberg, Reporter
August 26, 2025, 7:00 AM ET
New research indicates endoscopists introduced to AI tools performed procedures less effectively when those tools were no longer available. Getty Images
  • Endoscopists introduced to AI-assistance tools during colonoscopies detected abnormalities at a lower rate after those tools were taken away, according to a study published this month in The Lancet Gastroenterology & Hepatology. Dr. Marcin Romańczyk, who conducted the study, said the results were a surprise, and he speculated that the decrease in detection rates was, in part, a result of overreliance on AI. In critical sectors like aviation, where lives are at stake, there is prior evidence of professionals relying too heavily on automation at the expense of safety.

Artificial intelligence may be a promising way to boost workplace productivity, but leaning too heavily on the technology may prevent professionals from keeping their own skills sharp. New research suggests AI may be making some doctors worse at detecting irregularities during routine screenings, raising concerns about specialists becoming overly dependent on the technology.

A study published this month in The Lancet Gastroenterology & Hepatology looked at 1,443 patients who underwent colonoscopies with and without AI-assisted systems. It found that endoscopists who had been introduced to an AI-assistance system detected potential polyps at a rate of 28.4% with the technology, but at only 22.4% once they no longer had access to it, a roughly 20% drop in detection rates.
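To put the headline figure in perspective: the decline is relative, not absolute. The detection rate fell six percentage points, and (28.4 − 22.4) ÷ 28.4 ≈ 0.21, which rounds to the roughly 20% drop cited.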

The doctors’ failure to detect as many polyps in the colon once they were no longer using AI assistance came as a surprise to Dr. Marcin Romańczyk, a gastroenterologist at H-T. Medical Center in Tychy, Poland, and the study’s author. The results raise questions not only about a potential complacency that develops from overreliance on AI, but also about the changing relationship between medical practitioners and a longstanding tradition of analog training.

“We were taught medicine from books and from our mentors. We were observing them. They were telling us what to do,” Romańczyk said. “And now there’s some artificial object suggesting what we should do, where we should look, and actually we don’t know how to behave in that particular case.”

Beyond the increased use of AI in operating rooms and doctors’ offices, the proliferation of automation in the workplace has brought with it lofty hopes of enhanced performance. Goldman Sachs predicted last year that the technology could increase productivity by 25%. However, emerging research has also warned of the pitfalls of adopting AI tools without considering their negative effects. A study from Microsoft and Carnegie Mellon University earlier this year found that among surveyed knowledge workers, AI increased work efficiency but reduced critical engagement with content, atrophying judgment skills.

Romańczyk’s study contributes to this growing body of research questioning humans’ ability to use AI without compromising their own skill set. In his study, the AI system helped identify polyps in the colon by putting a green box around regions where an abnormality appeared to be. To be sure, Romańczyk and his team did not measure why endoscopists behaved this way; because they did not anticipate the outcome, they did not collect data on its cause.

Instead, Romańczyk speculates that endoscopists became so used to looking for the green box that when the technology was no longer there, they lacked that cue to pay attention to certain areas. He called this the “Google Maps effect,” likening his results to the change in drivers’ habits in the transition from paper maps to GPS: many people now rely on automation to show them the most efficient route, when 20 years ago drivers had to work out that route for themselves.

Checks and balances on AI

The real-life consequences of automation atrophying human critical skills are already well-established.

In 2009, Air France Flight 447, en route from Rio de Janeiro to Paris, crashed into the Atlantic Ocean, killing all 228 passengers and crew members on board. An investigation found the plane’s autopilot had been disconnected, ice crystals had disrupted its airspeed sensors, and the aircraft’s automated “flight director” was giving inaccurate information. The flight crew, however, was not effectively trained to fly manually in those conditions and followed the automated flight director’s faulty guidance instead of making the appropriate corrections. The Air France accident is one of several in which humans who were not properly trained relied instead on automated aircraft features.

“We are seeing a situation where we have pilots that can’t understand what the airplane is doing unless a computer interprets it for them,” William Voss, president of the Flight Safety Foundation, said at the time of the Air France investigation. “This isn’t a problem that is unique to Airbus or unique to Air France. It’s a new training challenge that the whole industry has to face.”

These incidents bring periods of reckoning, particularly for critical sectors where human lives are at stake, according to Lynn Wu, associate professor of operations, information, and decisions at the University of Pennsylvania’s Wharton School. While industries should be leaning into the technology, she said, the onus should be on institutions to make sure humans adopt it appropriately.

“What is important is that we learn from this history of aviation and the prior generation of automation, that AI absolutely can boost performance,” Wu told Coins2Day. “But at the same time, we have to maintain those critical skills, such that when AI is not working, we know how to take over.”

Similarly, Romańczyk doesn’t reject the presence of AI in medicine.

“AI will be, or is, part of our life, whether we like it or not,” he said. “We are not trying to say that AI is bad and [to stop using] it. Rather, we are saying we should all try to investigate what’s happening inside our brains. How are we affected by it? How can we actually effectively use it?”

If professionals and specialists want to keep using automation to enhance their work, it behooves them to retain their critical skills, Wu said. AI relies on human data to train itself, meaning if its training data is faulty, so, too, will be its output.

“Once we become really bad at it, AI will also become really bad,” Wu said. “We have to be better in order for AI to be better.”

About the Author

Sasha Rogelberg is a reporter and former editorial fellow on the news desk at Coins2Day, covering retail and the intersection of business and popular culture.