The Gloves Are Coming Off
The New Rules of AI Acceleration
Welcome back to The New Rules
A quick message before we jump in:
If you’re reading this online, subscribe HERE and join over 9,000 people who want to understand the world better by receiving these emails every week.
And don’t forget about The New Rules podcast where I go deeper on my posts or bring in a guest whose ideas I find intriguing.
Let’s dive in…
Last week the world turned its attention away from the latest stream of consciousness issuing from the White House to panic about the release of DeepSeek, a Chinese AI app. The panic started because DeepSeek matched many of the capabilities of OpenAI and Anthropic’s most up-to-date public models.
What happened? Didn’t we sanction China and keep them from accessing NVIDIA’s top chips? The venture capitalist Marc Andreessen called it a Sputnik moment. Silicon Valley was abuzz. Over the weekend, DeepSeek’s app passed ChatGPT as the most downloaded in the Apple App Store. Then last Monday NVIDIA’s stock plummeted in the largest one-day drawdown (in total dollar terms) in stock market history.
Is DeepSeek the app that was heard around the world? Is it Sputnik?
The answer seems unclear at this point. DeepSeek’s real innovation is that it reached parity with OpenAI and Anthropic’s models at a much lower cost, using less powerful chips, and with a model that runs more efficiently.
Many dispute the reported facts. First, the widely cited $5.6m training cost (compared to hundreds of millions for OpenAI) only included the final training run. The total cost was much higher.
Second, DeepSeek could actually have access to many advanced NVIDIA chips. Singapore is apparently buying the most advanced NVIDIA chips in unusually large quantities, raising speculation that many of them are going to China.
Third, it’s pretty clear that DeepSeek used a process called “distillation” to train its model: it queried OpenAI and Anthropic’s APIs to extract data from their leading models and used it to train its own. That’s a lot cheaper than gathering and training on your own data, and it fits with China’s model of intellectual property theft.
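To make “distillation” concrete, here’s a minimal sketch of the idea: ask a stronger “teacher” model for answers and save those answers as supervised training data for your own “student” model. This is an illustration of the general technique, not DeepSeek’s actual pipeline; the OpenAI Python client, the model name, the prompts, and the file name below are assumptions made for the example.

```python
# Minimal distillation sketch: collect teacher-model outputs as training data.
# Assumes the official OpenAI Python client and an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder prompts; a real effort would use millions of diverse prompts.
prompts = [
    "Explain why the sky is blue.",
    "Summarize the causes of the 2008 financial crisis.",
]

with open("distillation_data.jsonl", "w") as f:
    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-4o",  # the "teacher" model (illustrative choice)
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content
        # Each line becomes one prompt/completion pair for fine-tuning the student model.
        f.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```

The point of the sketch is the economics: instead of paying to collect and label your own data, you pay only API fees to harvest a frontier model’s answers, which is why providers’ terms of service generally prohibit using outputs to train competing models.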
However, according to reports, DeepSeek did make real advances in making AI training and inference (querying the model after training) much more efficient and cheaper to run. That’s a hugely helpful innovation for chip-hungry and power-hungry AI technology. But it isn’t a leapfrogging of capability.
DeepSeek’s win will have widespread effects on how AI develops going forward that I haven’t yet seen reflected in the media.
The Sad History of AI Paranoia
To date, AI companies like OpenAI, Anthropic (founded by former OpenAI employees), and Google have been extremely concerned about the problem of AI alignment. Alignment is a vague term, but it means something like ensuring that AI helps humans. Or maybe it’s more precise to say that alignment means AI stays under the control of humans, since we certainly already use it to kill our enemies more effectively.
OpenAI has repeatedly claimed its decisions were focused on trying to develop a potentially dangerous technology in responsible ways. The company has courted policy makers and encouraged regulation from governments.
Pioneers like Geoffrey Hinton (often called the Godfather of AI) left Google to speak freely about the risks of the technology and now famously fear what they helped create.
AI has been regularly demonized by the press and convicted of pre-crimes, as in this hyperbolic Wired piece by Timnit Gebru, which cringily ends with this paragraph:
We can create a technological future that serves us instead. Take, for example, Te Hiku Media, which created language technology to revitalize te reo Māori, creating a data license “based on the Māori principle of kaitiakitanga, or guardianship” so that any data taken from the Māori benefits them first. Contrast this approach with that of organizations like StabilityAI, which scrapes artists’ works without their consent or attribution while purporting to build “AI for the people.” We need to liberate our imagination from the one we have been sold thus far: saving us from a hypothetical AGI apocalypse imagined by the privileged few, or the ever elusive techno-utopia promised to us by Silicon Valley elites.
The only thing that can save us from certain destruction is the Māori principle of guardianship. Namaste….
This was from someone working on AI at Google! Too bad they can’t fire her twice.
All this angst led to what I consider incredibly premature regulation. The EU committed technological suicide with its ridiculous AI Act at the beginning of 2024. President Biden issued an Executive Order that, while largely non-binding, set precedents that discourage innovation, and California almost joined the EU in jumping off the regulatory cliff with its own AI bill (which the Governor fortunately vetoed).
I agree with the cynics in thinking that OpenAI and Anthropic were seeking regulation to ward off new competitors and lock in their advantages. It is clear that the zeitgeist in the AI community, government, and the press favored safety and fear of change over innovation in most cases. These companies were playing a delicate game, balancing progress on the technology against the concerns of the anti-progress community.
The Gloves Are Coming Off
The launch of DeepSeek will deal a fatal blow to the AI safety movement at least for the time being. Having a new competitor that plays by different rules spurs competition generally. OpenAI, Anthropic, Google, Microsoft, etc. will accelerate the release of models to show that they are still in the lead.
Nothing focuses the mind in business like the prospect of being an also-ran. My guess is these companies have a few aces up their sleeves that they’ve been holding back for various reasons, and DeepSeek will force those aces onto the table. OpenAI has a new, more advanced reasoning model coming soon. AI agents will make their debut in 2025. The pace of innovation will quicken.
While these companies have been working at a furious pace, they saw each other as their main competition. All OpenAI had to do was raise more money, use it to buy more GPUs from NVIDIA, and hire all the talent from MIT. The others, like Anthropic and xAI, could chase and keep close behind, but it seemed like the playing field and the rules were set.
DeepSeek showed that players with far less capital and talent could catch up quickly; in that sense, it is a Sputnik moment. These companies, now in partnership with a far more supportive Administration in the US, will break down any barrier to further AI progress to stay ahead.
The moment the implications of DeepSeek set in, it was clear a new game had started, and my take is these guys are ready for it. No more hemming and hawing about “guardianship”. The gloves are coming off and it’s win at all costs.
This is Sam Altman (OpenAI’s CEO) after seeing NVIDIA down nearly 17% in one day:

Resources Will Get Mobilized
Now that DeepSeek has shown that smaller, less-resourced companies can make a dent, venture capital is more likely to start pouring money into smaller players. Until now, who wanted to fund a company knowing it would just get crushed by the OpenAI/Microsoft behemoth?
Now you might have a shot at funding the next DeepSeek for $5, $10, or $20 million versus begging to get into OpenAI’s $6 billion round at an already inflated valuation. Where else can smaller players employ new architectures and approaches to take chunks out of the large players’ hides?
DeepSeek will also spur the government. Sputnik launched decades of US investment into not only the space program but also STEM education. There was a sense that we had fallen behind the Soviets in key technologies like space travel and in technical talent across the board. That was probably correct at the time. The response was swift and massive.
Congress formed NASA and created huge programs to fund STEM education. Kennedy set reaching the moon as the goal. Within a few years, the US blew past the Soviets.
In fact, we were so successful that the competitive pressure lessened and the US allowed the space program to drift by the late 1970s. The aftereffects of that funding led to the personal computer, the Internet, and now to AI. This new impetus will inspire similar levels of innovation.
The Administration will have to move from a stance of favoring just tariffs and export controls to actively spending more money to support AI research and deployment. Again, nothing focuses the mind like competition, and the Chinese showed they can compete.
The US enjoys huge advantages in the AI race. We have more AI talent and better hardware and software. The Chinese showed what scrappy, resource-constrained competitors can accomplish when motivated. I say, “Well played, but this game isn’t over yet.”
Now, let’s roll.
Keep learning,
Alan
