China’s DeepSeek quietly releases an open-source rival to GPT-5—optimized for Chinese chips and priced to undercut OpenAI

By Sharon Goldman, AI Reporter
August 21, 2025, 2:55 PM ET
DeepSeek’s new AI model is optimized for Chinese chips and priced to undercut OpenAI. Photo illustration by Cheng Xin—Getty Images

Chinese AI startup DeepSeek shocked the world in January with an AI model, called R1, that rivaled OpenAI’s and Anthropic’s top LLMs. It was built at a fraction of the cost of those other models, using far fewer Nvidia chips, and was released for free. Now, just two weeks after OpenAI debuted its latest model, GPT-5, DeepSeek is back with an update to its flagship V3 model that experts say matches GPT-5 on some benchmarks—and is strategically priced to undercut it.

DeepSeek’s new V3.1 model was quietly released in a message to one of its groups on WeChat, China’s all-in-one messaging and social app, as well as on the Hugging Face platform. Its debut touches several of today’s biggest AI narratives at once. DeepSeek is a core part of China’s broader push to develop, deploy, and control advanced AI systems without relying on foreign technology. (And in fact, DeepSeek’s new V3 model is specifically tuned to perform well on Chinese-made chips.)

While U.S. companies have been hesitant to embrace DeepSeek’s models, they’ve been widely adopted in China and increasingly in other parts of the world. Even some American firms have built applications on DeepSeek’s R1 reasoning model. At the same time, researchers warn that the models’ outputs often hew closely to Chinese Communist Party–approved narratives—raising questions about their neutrality and trustworthiness.

China’s AI push goes beyond DeepSeek: its industry also includes Alibaba’s Qwen, Moonshot AI’s Kimi, and Baidu’s Ernie. DeepSeek’s new release, however, coming just after OpenAI’s GPT-5—a rollout that fell short of industry watchers’ high expectations—underscores Beijing’s determination to keep pace with, or even leapfrog, top U.S. labs.

OpenAI is concerned about China and DeepSeek

DeepSeek’s efforts are certainly keeping U.S. labs on their toes. At a recent dinner with reporters, OpenAI CEO Sam Altman said that rising competition from Chinese open-source models, including DeepSeek’s, influenced his company’s decision to release its own open-weight models two weeks ago.

“It was clear that if we didn’t do it, the world was gonna be mostly built on Chinese open-source models,” Altman said. “That was a factor in our decision, for sure. Wasn’t the only one, but that loomed large.”

In addition, last week the U.S. granted Nvidia and AMD licenses to export China-specific AI chips—including Nvidia’s H20—but only if they agree to hand over 15% of revenue from those sales to Washington. Beijing quickly pushed back, moving to restrict purchases of Nvidia chips after Commerce Secretary Howard Lutnick told CNBC on July 15: “We don’t sell them our best stuff, not our second-best stuff, not even our third-best.”

By optimizing its new model for Chinese-made chips, DeepSeek is signaling resilience against U.S. export controls and a drive to reduce reliance on Nvidia. In its WeChat post, the company noted that the new model format is optimized for “soon-to-be-released next-generation domestic chips.”

Altman, at that same dinner, warned that the U.S. may be underestimating the complexity and seriousness of China’s progress in AI—and said export controls alone likely aren’t a reliable solution.

“I’m worried about China,” he said.

Less of a leap, but still striking incremental advances

Technically, what makes the new DeepSeek model notable is how it was built, with a few advances that would be invisible to consumers. But for developers, these innovations make V3.1 cheaper to run and more versatile than many closed and more expensive rival models. 

For instance, V3.1 is huge—685 billion parameters, which is on the level of many top “frontier” models. But its “mixture-of-experts” design means only a fraction of the model activates when answering any query, keeping computing costs lower for developers. And unlike earlier DeepSeek models, which separated queries that could be answered instantly from the model’s pretraining from those requiring step-by-step reasoning, V3.1 combines fast answers and reasoning in one system.
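DeepSeek hasn’t published V3.1’s routing internals in this announcement, but the mixture-of-experts idea described above can be sketched in a few lines of NumPy. This is a toy illustration with made-up dimensions and random weights, not DeepSeek’s actual architecture: a router scores all experts for each token, but only the top few are ever multiplied, so compute scales with the number of active experts rather than the total parameter count.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes for illustration only; the real model is vastly larger.
d_model = 16   # hidden dimension
n_experts = 8  # total experts in the layer
top_k = 2      # experts actually activated per token

# Each "expert" is a small weight matrix; the router scores them per token.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_forward(x):
    logits = x @ router                 # routing score for each expert
    top = np.argsort(logits)[-top_k:]   # pick the top_k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()            # softmax over the selected experts only
    # Only top_k of the n_experts matrices are used for this token,
    # so per-token compute is independent of the total expert count.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape)  # (16,)
```

With these toy numbers, each token touches 2 of 8 experts, i.e. a quarter of the layer’s parameters, which is the cost-saving property the article describes.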

GPT-5, as well as the most recent models from Anthropic and Google, have a similar ability. But few open-weight models have been able to do this so far. V3.1’s hybrid architecture is “the biggest feature by far,” Ben Dickson, a tech analyst and founder of the TechTalks blog, told Coins2Day. 

Others point out that while this DeepSeek model is less of a leap than the company’s R1 (a reasoning model distilled from the original V3 that shocked the world in January), the new V3.1 is still striking. “It is pretty impressive that they continue making non-marginal improvements,” said William Falcon, founder and CEO of AI developer platform Lightning AI. But he added that he would expect OpenAI to respond if its own open-source model “starts to meaningfully lag,” and pointed out that the DeepSeek model is harder for developers to get into production, while OpenAI’s version is fairly easy to deploy.

For all the technical details, though, DeepSeek’s latest release highlights the fact that AI is increasingly seen as part of a simmering technological cold war between the U.S. and China. With that in mind, if Chinese companies can build better AI models for what they claim is a fraction of the cost, U.S. competitors have reason to worry about staying ahead.

About the Author

Sharon Goldman is an AI reporter at Coins2Day and co-authors Eye on AI, Coins2Day’s flagship AI newsletter. She has written about digital and enterprise tech for over a decade.