Sense and Sensibility in the age of Reasoning AI
Or, what would Jane Austen say about DeepSeek-R1?
The interwebs have been abuzz with the release of DeepSeek-R1, a competitor to OpenAI’s o1 reasoning model. As is usual in this market, plenty of hype and hysteria dominate the airwaves. I want to take a step back, put on a product manager (and amateur historian) hat, and draw some parallels to the past and sensible projections for the future.
The short version is that, just like in Austen’s novel, we should exercise some balance between sense (reason) and sensibility (emotion). Leaning into either the booster or the doomer perspective is counterproductive. Like Stockdale, we should confront reality; that is the key to success. In this case, things will play out in a rather predictable, mediocre fashion.

The predictability of DeepSeek
I’m not going to delve into the technical details of what DeepSeek has done. There are plenty of good articles on the subject — e.g. this one from VentureBeat, others from various newsletters and podcasts, or this excellent 5-hour-long deep dive from Lex Fridman, which is amazingly nuanced and comprehensive. But the math and hardware magic aren’t important for the discussion below.

Let’s start with a bit o’ history. For a long time, it was thought to be physically impossible for a human to run a mile in under 4 minutes (a pace that means sustaining a speed of over 24 km/h for the duration). In 1954, Roger Bannister broke that barrier in Oxford. His record lasted 46 days, until it was broken by an Australian runner in Finland.
We can take many lessons from this (not least that I’ve chosen a particular narrative which disregards earlier efforts and claims), but the salient one here is that once shown the way, others will follow swiftly. This applies both to the advancements offered by the big model providers like OpenAI and to the efficiency innovations DeepSeek has demonstrated. Which means we’ll see more cheaper-and-better models in the coming months, and any advantage will be short-lived.
This is a good thing. The big labs have had one main strategy — throw money at the problem — for over a decade. While interesting things certainly happen as the models scale, there are diminishing returns. So far the emergent properties of models fall far short of AGI, and just throwing even more money at the problem (in the form of larger data sets and compute) doesn’t seem to get us closer. While Eric Schmidt, ex-CEO of Google, thinks we should screw the climate and pour in ever more energy in the hope that something emerges and maybe saves us before it’s too late, I disagree. From here, it looks like a who’s-got-the-bigger-spaceship AI-model pissing contest amongst billionaires.
On the other side, with DeepSeek we’re now seeing how constraint-driven creativity leads to ingenuity and solutions. Not everything they have done will make sense for other model providers, but the idea of getting the most out of resources and doing more with less expenditure will prevail. That’s because waste has never been a good business strategy, and efficiency wins in the medium and long run.
Which is also, coincidentally, why the trade restrictions on computer chips will aid China in the long run. Compare them to the similar restrictions imposed on the USSR and then Russia in the 1980s and ’90s. It’s true that Russia’s computer chip industry collapsed, but then Russia was also busy imploding during that period. On the other hand, Eastern European programmers got really good at writing efficient code, so much so that to this day they are some of the best programmers worldwide. China, unlike the USSR, has all the resources to build a chip industry and empower its math and programming efforts. Another case of short-term thinking.
For what it’s worth, I don’t think DeepSeek itself will dislodge the incumbents. In the current climate, not many would find PRC-controlled models and services palatable — even without the security holes caused by a small team rushing to market. Outside of consumer toys, big organisations (both governments and private enterprises) in the Western world will shy away from it.
But they won’t shy away from a small, efficient, and cheap Western-built model. Ed Zitron has a fun scenario: Zuckerberg has Meta build an efficient model comparable to OpenAI’s o1 and release it as open weights (they already release LLaMa in a partially open fashion), just to topple the incumbents.
Even if not, it has been my belief for a while that scaling will plateau along an S-curve and that we have passed the midpoint; it will take a different breakthrough to advance. While we wait for the next jump, if efficient coding delivers models at a fraction of the cost, then what happens to all the billions already sunk into training the current generation of AI models? What happens to the companies who spent those billions?
First Mover Advantage has never been the advantage it’s made out to be.
So where does that leave product managers?
Much depends on the markets you serve and the nature of your product. DeepSeek may or may not be the right choice for you, depending on the regulations you and your customers are subject to. With the general commoditisation of the model market and the risk that the incumbents’ funding may suddenly dry up (let alone the havoc of the international trade war we’re heading towards), staying model-agnostic is warranted.
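For the technically inclined, here is a minimal sketch of what model agnosticism can look like in practice. The class and function names below are hypothetical illustrations rather than real vendor SDK calls; the point is that product logic depends on one thin interface you control, so swapping providers becomes a configuration change rather than a rewrite.

```python
# A minimal, hypothetical sketch of a model-agnostic seam.
# The adapter classes below return canned strings so the example runs on
# its own; in a real product each would wrap an actual vendor API or a
# self-hosted inference server.
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """The only interface the rest of the product depends on."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class HostedFrontierModel(ChatModel):
    """Hypothetical adapter for a big-lab hosted API."""

    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor's SDK here.
        return f"[frontier-model answer to: {prompt!r}]"


class SelfHostedOpenWeightsModel(ChatModel):
    """Hypothetical adapter for an open-weights model you run yourself."""

    def complete(self, prompt: str) -> str:
        # A real implementation would call a local inference endpoint here.
        return f"[open-weights answer to: {prompt!r}]"


def answer_customer(question: str, model: ChatModel) -> str:
    # Product logic never names a vendor; it only sees the ChatModel seam.
    return model.complete(question)


if __name__ == "__main__":
    print(answer_customer("Summarise my open tickets", HostedFrontierModel()))
    print(answer_customer("Summarise my open tickets", SelfHostedOpenWeightsModel()))
```

The design choice being sketched is simply to keep the vendor-specific code behind one boundary you own, so that regulatory constraints, pricing changes, or a provider disappearing don’t ripple through the rest of the product.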
That is also likely a good idea for another PM101 reason — you should be offering something that is more than a thin wrapper, something that solves a real problem in a fit-for-purpose way. Adding AI for its own sake, or over-relying on a particular vendor without providing something unique, isn’t sound product strategy. When trying to innovate, remember the Capability Gap: a technology that can do something doesn’t make for a great user experience unless it can do so reliably. That will directly impact the Adoption Gap you’ll need to cross next.
And just like in Jane Austen’s time, you need to balance Sense and Sensibility. You cannot rely on “pure reason” alone (letting data dictate every decision, pursuing efficiency and automation to a local maximum), nor on “emotional response” alone (obsessing over marketing stories, chasing the latest shiny object in the market). Strike a balance: use AI to augment humans rather than replace them, treat it as an input for both analytics and brainstorming without letting it replace critical thinking, and weigh the potential benefits against the worst-case scenarios.
In short, take a step back and a deep breath, and apply some fundamental product management. This hype-cycle will pass just in time for the next one to peak, and it’s your job to navigate your product to long-term success.