Over the weekend, Andrej Karpathy, a founding member of OpenAI and a highly influential figure in contemporary artificial intelligence, delivered a stark evaluation of the sector's progress toward artificial general intelligence (AGI) that caused considerable stir.
TL;DR
- OpenAI cofounder Andrej Karpathy believes AGI is at least ten years away, contrary to industry hype.
- Karpathy argues many firms overstate AI's autonomous capabilities, leading to potential industry harm.
- He states current AI models are amazing but still require significant work and lack reliability.
During a widely circulated interview with podcaster Dwarkesh Patel, a YouTuber boasting more than 1 million subscribers, Karpathy expressed his conviction that the pursuit of AGI is progressing at a considerably slower pace than the prevailing excitement indicates.
He contended that despite significant progress in large language models (LLMs) over the last three years, AGI is still at least ten years off, and cautioned that numerous firms are overstating AI's autonomous capabilities to a degree that could harm the industry.
“Overall, the models are not there,” Karpathy said on the podcast. “I believe the industry is advancing too rapidly and attempting to present this as exceptional when it's not. It's garbage.”
The tech community reacted instantly to the interview, as anticipation for AGI has surged, mirroring increases in capital investment and competition.
“If this Karpathy interview doesn’t pop the AI bubble, nothing will,” Prithvir Jhaveri, CEO of prediction markets aggregator TradeFox, wrote on X.
John Coogan, host of the tech podcast TBPN, noted that Karpathy’s interview came just weeks after AI pioneer Richard Sutton called LLMs a “dead end.”
“The general tech community is experiencing whiplash right now,” Coogan wrote on X.
Karpathy, formerly Tesla's senior director of AI and an early leader at OpenAI, describes his AI timeline as “five to ten times pessimistic” compared with many public forecasts. Even so, he dismissed the notion that a ten-year horizon for AGI is a gloomy one. “Ten years,” he wrote on X after the interview, “should otherwise be a very bullish timeline for AGI.”
For the tech industry, it's a slow projection. Sam Altman, who co-founded OpenAI with Karpathy and now serves as its CEO, predicts that artificial intelligence will exceed human intelligence across all fields by 2030. Elon Musk anticipates the arrival of AGI this year or next.
Karpathy contended that a significant portion of the misunderstanding arises from metrics that overstate a system's abilities. He stated that public demonstrations, benchmark contests, chatbot interactions, and code-generation evaluations often showcase narrow improvements instead of tackling AI's most challenging unresolved issues: long-term strategic planning, methodical reasoning, and, ultimately, secure system architecture.
Karpathy's most pointed critique was directed at AI “agents,” a concept that has rapidly gained traction throughout the industry lately.
These systems, which are built upon LLMs, are presented as independent digital employees capable of writing and executing code, browsing the web, controlling software, and completing business operations with very little supervision. Karpathy acknowledged the concept's potential but stated that its current implementation is quite unreliable.
“We’re at this intermediate stage,” Karpathy said. “The models are amazing. They still need a lot of work.”
Numerous other AI executives express a more optimistic outlook. For example, Nvidia CEO Jensen Huang has called 2025 “the year of AI agents.” Anthropic CEO Dario Amodei recently said that by 2026 or 2027, AI systems will be “better than almost all humans at almost all tasks.”
Karpathy cautioned that most current AI agent systems yield fragile, unpredictable outcomes and are deficient in fundamental dependability. He contended that they lack sufficient reasoning skills, have restricted understanding of software environments, and find it difficult to utilize tools appropriately.
“If this isn’t done well,” Karpathy said, “we might end up with mountains of slop accumulating across software, and an increase in vulnerabilities [and] security breaches.”
He nevertheless maintained that AI is progressing along a long but solvable trajectory. He stated that the technical hurdles are significant but can be overcome through dedicated time, thorough research, and improved safety protocols.
“I feel like the problems are surmountable,” he said. “But they’re still difficult.”
