There’s a common saying in tech circles: The United States is good at innovation, going from zero to one, while China is good at commercial applications, that is, going from one to 100. For a while it seemed like the same would hold true for artificial intelligence (AI), where the most cutting-edge frontier models and research were created by U.S. startups like OpenAI, which were thought to be two to three years ahead of their Chinese counterparts. Yet the rapid release of two new models by Chinese company DeepSeek – the V3 in December and R1 this month – is upending this deep-rooted assumption, sparking a historic rout in U.S. tech stocks.
DeepSeek’s R1 reasoning model matches (and sometimes beats) OpenAI’s o1 across a range of math, code, and reasoning tasks – and at 2 percent of the latter’s price. A Chinese AI model is now as good as the leading U.S. AI models, while using only a tiny fraction of the GPU resources available to its American rivals.
This is remarkable, and a game changer for the global AI arms race. One, it means the game is no longer reserved for deep-pocketed players with chip stockpiles (like the United States and China). Abundant compute was a key American advantage, once thought to be a critical moat maintaining the capability gap between U.S. and Chinese models. DeepSeek showed that algorithmic innovations can overcome scaling laws. Faced with limited chips due to U.S. export controls, the Chinese company employed innovative software optimization techniques, from sparse Mixture-of-Experts architectures to quantization, which allowed it to reach unprecedented cost efficiency while outperforming competing models.
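To make the cost argument concrete, here is a minimal sketch of top-k sparse Mixture-of-Experts routing, the general class of technique referred to above. It is written in PyTorch; the layer sizes, the choice of eight experts, and the top-2 routing are illustrative assumptions, not DeepSeek’s actual design. The point it shows is that each token activates only a small subset of the experts, so the parameter count can grow far faster than the per-token compute.

```python
# Illustrative sketch of top-k sparse Mixture-of-Experts routing.
# All sizes (d_model, num_experts, top_k) are assumptions for demonstration,
# not DeepSeek's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=1024, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # gating network scores each expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        gate_logits = self.router(x)
        weights, expert_ids = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(x)
        # Only each token's top_k experts run; the rest are skipped entirely,
        # which is where the compute savings come from.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_ids[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(1) * expert(x[mask])
        return out

tokens = torch.randn(16, 512)   # a batch of 16 token embeddings
layer = SparseMoELayer()
print(layer(tokens).shape)      # torch.Size([16, 512]); only 2 of 8 experts ran per token
```

Quantization works on the other axis of the same trade-off: storing and multiplying weights at lower numerical precision (for example, 8-bit rather than 16-bit values) so that the same hardware can hold and move more of the model.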
As DeepSeek founder Liang Wenfeng, who is an AI researcher by training, said in an interview last year, “In the face of disruptive technologies, moats created by closed source are temporary. Even OpenAI’s closed source approach can’t prevent others from catching up.”
DeepSeek’s ability to catch up to frontier models in a matter of months shows that no lab, closed or open source, can maintain a real, enduring technological advantage. We’ve entered an era of AI competition where the pace of innovation is likely to become far more frenetic than most expect, and where more small players and middle powers will enter the fray, building on the training strategies DeepSeek has shared.
Two, China is becoming the global leader in open source AI. DeepSeek is but one of many Chinese AI companies fully open-sourcing their models – allowing developers worldwide to use, reproduce, and modify their model weights and methods. China’s Big Tech giant Alibaba has made Qwen, its flagship AI foundation model, open source. So have newer AI startups like Minimax, which in January also launched a series of open source models (both foundational and multimodal, that is, able to handle multiple types of media).
Competitive benchmark tests have shown that the performance of these Chinese open source models is on par with the best closed source Western models. On Hugging Face, an American platform that hosts a repository of open source tools and data, Chinese LLMs are regularly among the most downloaded. Not only does this bring more global developers into the Chinese ecosystem, it also spurs more innovation.
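In practice, “open weights” means anyone can download and run these models locally. A minimal sketch using the Hugging Face transformers library is below; the specific repository id is an assumed example, and any open-weight checkpoint hosted on the platform is loaded the same way.

```python
# Minimal sketch: downloading and running an open-weight model from Hugging Face.
# The model id below is an assumed example; substitute any open-weight checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Explain mixture-of-experts in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights themselves are redistributable, developers can also fine-tune or quantize them for their own hardware – which is precisely what pulls them into the releasing company’s ecosystem.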
Think of an LLM as an operating system – akin to Apple’s iOS and Google’s Android – on top of which users can build new applications. Keeping the United States’ best models closed source means China is better poised to expand its technological influence among countries vying for state-of-the-art AI at low cost. These Chinese AI companies are also, ironically, democratizing access to AI and keeping OpenAI’s original mission alive: advancing AI for the benefit of humanity. Countries outside of the AI superpowers or well-established tech hubs now have a shot at unlocking a wave of innovation using affordable training methods.
Three, U.S. export controls no longer have a stranglehold on AI progress. Chinese companies like DeepSeek have demonstrated the ability to achieve significant AI advancements by training their models on export-compliant Nvidia H800s – a downgraded version of the more advanced AI chips used by most U.S. companies – and by leveraging sophisticated software techniques. The United States’ “chokepoint” tactics have thus far focused largely on hardware, but the fast-evolving landscape of algorithmic innovation means Washington may need to explore alternate routes of technology control. As many have pointed out, necessity is truly the mother of invention. Unable to rely on the latest chips, DeepSeek and others have been forced to do more with less, substituting ingenuity for brute force.
There is no overstating this milestone. While many had earlier counted China out of the AI race because of the barrage of crippling U.S. export controls, DeepSeek shows that China is back, and might be in the lead. If Western efforts to hamper or handicap China’s AI progress are likely to be futile, then the real race has only just begun: lean, creative engineering, not sheer financial heft or export controls, will be what wins the game.