A Quiet Giant Makes a Big Splash
This week, the AI world lit up when DeepSeek, a Chinese AI developer, released two new open-source models, DeepSeek-V3 and DeepSeek-R1, along with R1-Zero, a variant of R1 trained purely through reinforcement learning. These models deliver performance on par with top-tier offerings from OpenAI and Anthropic, but at a shockingly lower price.
Naturally, everyone is buzzing about two things:
The performance: DeepSeek’s models seem to match or exceed the capabilities of more famous systems.
The cost: They’re offering API access at a fraction of the usual rate.
The release prompted excitement online, along with a healthy dose of skepticism. Some wonder if DeepSeek quietly amassed restricted GPUs or if the Chinese government is behind some grand “psyop.” Either way, DeepSeek is proving that cutting-edge AI can be trained and run more cheaply than ever before.
I’m not surprised. For nearly a year, I’ve argued that a second-order effect of China’s “hardware handicap” would be to drive innovation in new architectures.
Sure enough, the cost-efficiency of DeepSeek’s models has rattled many investors, with some proclaiming the “death of the AI trade.” Others remain bullish on big hardware suppliers like NVIDIA (NVDA).
In my view, both sides are missing the point: efficiency is a good thing, and NVIDIA isn’t AI; it’s a critical hardware enabler. Meanwhile, cheaper and more efficient models could unlock an even larger universe of AI applications.
Phase 1 vs. Phase 2: The Ongoing AI Thesis