• Home
  • News
  • Coins2Day 500
  • Tech
  • Finance
  • Leadership
  • Lifestyle
  • Rankings
  • Multimedia
AIResearch

Researchers from top AI labs including Google, OpenAI, and Anthropic warn they may be losing the ability to understand advanced AI models

By
Beatrice Nolan
Beatrice Nolan
Tech Reporter
By
Beatrice Nolan
Beatrice Nolan
Tech Reporter
July 22, 2025, 8:05 AM ET
OpenAI logo on a phone screen.
AI researchers from leading labs are warning that they could soon lose the ability to understand advanced AI reasoning models. Beata Zawrzel—NurPhoto via Getty Images
  • A group of 40 AI researchers, including contributors from OpenAI, GoogleDeepMind, Meta, and Anthropic, are sounding the alarm on the growing opacity of advanced AI reasoning models. In a new paper, the authors urge developers to prioritize research into “chain-of-thought” (CoT) processes, which provide a rare window into how AI systems make decisions. They are warning that as models become more advanced, this visibility could vanish.

AI researchers from leading labs are warning that they could soon lose the ability to understand advanced AI reasoning models.

Recommended Video

In a position paper published last week, 40 researchers, including those from OpenAI, Google DeepMind, Anthropic, and Meta, called for more investigation into AI reasoning models’ “chain-of-thought” process. Dan Hendrycks, an xAI safety advisor, is also listed among the authors.

The “chain-of-thought” process, which is visible in reasoning models such as OpenAI’s o1 and DeepSeek’s R1, allows users and researchers to monitor an AI model’s “thinking” or “reasoning” process, illustrating how it decides on an action or answer and providing a certain transparency into the inner workings of advanced models.

The researchers said that allowing these AI systems to “‘think’ in human language offers a unique opportunity for AI safety,” as they can be monitored for the “intent to misbehave.” However, they warn that there is “no guarantee that the current degree of visibility will persist” as models continue to advance.

The paper highlights that experts don’t fully understand why these models use CoT or how long they’ll keep doing so. The authors urged AI developers to keep a closer watch on chain-of-thought reasoning, suggesting its traceability could eventually serve as a built-in safety mechanism.

“Like all other known AI oversight methods, CoT [chain-of-thought] monitoring is imperfect and allows some misbehavior to go unnoticed. Nevertheless, it shows promise, and we recommend further research into CoT monitorability and investment in CoT monitoring alongside existing safety methods,” the researchers wrote.

“CoT monitoring presents a valuable addition to safety measures for frontier AI, offering a rare glimpse into how AI agents make decisions. Yet, there is no guarantee that the current degree of visibility will persist. We encourage the research community and frontier AI developers to make the best use of CoT monitorability and study how it can be preserved,” they added.

The paper has been endorsed by major figures, including OpenAI co-founder Ilya Sutskever and AI godfather Geoffrey Hinton.

Reasoning Models

AI reasoning models are a type of AI model designed to simulate or replicate human-like reasoning—such as the ability to draw conclusions, make decisions, or solve problems based on information, logic, or learned patterns. Advancing AI reasoning has been viewed as a key to AI progress among major tech companies, with most now investing in building and scaling these models.

OpenAI publicly released a preview of the first AI reasoning model, o1, in September 2024, with competitors like xAI and Google following close behind.

However, there are still a lot of questions about how these advanced models are actually working. Some research has suggested that reasoning models may even be misleading users through their chain-of-thought processes.

Despite making big leaps in performance over the past year, AI labs still know surprisingly little about how reasoning actually unfolds inside their models. While outputs have improved, the inner workings of advanced models risk becoming increasingly opaque, raising safety and control concerns.

Coins2Day Brainstorm AI returns to San Francisco Dec. 8–9 to convene the smartest people we know—technologists, entrepreneurs, Coins2Day Global 500 executives, investors, policymakers, and the brilliant minds in between—to explore and interrogate the most pressing questions about AI at another pivotal moment. Register here.
About the Author
By Beatrice NolanTech Reporter
Twitter icon

Beatrice Nolan is a tech reporter on Coins2Day’s AI team, covering artificial intelligence and emerging technologies and their impact on work, industry, and culture. She's based in Coins2Day's London office and holds a bachelor’s degree in English from the University of York. You can reach her securely via Signal at beatricenolan.08

See full bioRight Arrow Button Icon
Rankings
  • 100 Best Companies
  • Coins2Day 500
  • Global 500
  • Coins2Day 500 Europe
  • Most Powerful Women
  • Future 50
  • World’s Most Admired Companies
  • See All Rankings
Sections
  • Finance
  • Leadership
  • Success
  • Tech
  • Asia
  • Europe
  • Environment
  • Coins2Day Crypto
  • Health
  • Retail
  • Lifestyle
  • Politics
  • Newsletters
  • Magazine
  • Features
  • Commentary
  • Mpw
  • CEO Initiative
  • Conferences
  • Personal Finance
  • Education
Customer Support
  • Frequently Asked Questions
  • Customer Service Portal
  • Privacy Policy
  • Terms Of Use
  • Single Issues For Purchase
  • International Print
Commercial Services
  • Advertising
  • Coins2Day Brand Studio
  • Coins2Day Analytics
  • Coins2Day Conferences
  • Business Development
About Us
  • About Us
  • Editorial Calendar
  • Press Center
  • Work At Coins2Day
  • Diversity And Inclusion
  • Terms And Conditions
  • Site Map

© 2025 Coins2Day Media IP Limited. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | CA Notice at Collection and Privacy Notice | Do Not Sell/Share My Personal Information
FORTUNE is a trademark of Coins2Day Media IP Limited, registered in the U.S. and other countries. FORTUNE may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.