DeepSeek and the real impact of open source
- Victor Hugo Germano
- Jan 27
- 4 min read
The fish dies by its mouth, as my mother-in-law from Florianópolis used to say.
You don't need another explanation about what DeepSeek's R1 is, what DeepSeek is, and how it matters to the current state of AI development in the world. Let me offer another perspective: You've been deceived.

Open source, open weights, open research, open innovation. This is the greatest advancement that a research approach can bring to the world in the current moment of AI madness. With only a few million dollars invested, and by building on global research in Artificial Intelligence, a team managed to innovate around the budget and infrastructure constraints it faced.
When an executive says at a conference that the best way forward for the world is to spend more money on infrastructure, be suspicious. Be extremely skeptical. The entire organization's interest lies in the quarterly report that shows an apparent increase in the company's share price, and the CEO will say anything to keep the shares growing. Not necessarily lying, but CERTAINLY boosting market expectations to sustain the same accelerated multiples as always. This is how companies sustain themselves when they are worth more than 1 trillion dollars.

For these companies to continue their perpetual growth trajectory, they need an unattainable goal and a common enemy to be defeated. The real goal of the moment: adding value to the shareholder. No team with 5 billion per year to spend is going to create solutions that are not based on using money to buy more computing power. To justify this cost, only FUD: fear, uncertainty and doubt. National existential risk, risk of losing the race to AGI, positioning for global leadership, when deep down all the discussion in recent years about the race to AGI has always been about keeping stocks overvalued.
R1 is not a surprise: it was inevitable. In my last post in December, about what to expect by 2025, I said that the open source alternative was the most interesting response to the madness of unbridled datacenter construction meant to meet an impossible demand for computing power. Even so, this wave not only accelerated: it trampled over any shred of sanity left in the market, with the idea of Project Stargate, also announced last week.
The coverage this past week was more like a soccer World Cup final than a serious discussion about what happened: DeepSeek demonstrated the big problem in the dominant paradigm of AI development.
This has already been signaled for at least a year!
So far, OpenAI and its peers have been trying to convince the public and politicians that scaling AI models indefinitely is the best way to achieve “AGI” (their way, of course). Regardless of the catastrophic consequences of using all the water and energy available on Earth to do so.
The case for scaling has always been based more on business than on science. Scientifically, there is no law of physics saying that AI advances must come from greater scale rather than from alternative approaches using the same or fewer resources. But for business, this approach is perfectly suited to quarterly reports and revenue planning, and it leads to a clear path to eliminating the competition: More Chips.
What R1 demonstrates is that the general arguments for seeking Infinite Scale to the detriment of the planet do not hold up, and that we should indeed question this expectation.
For me, this new reasoning model, open and public, is perhaps the best opportunity for serious evolution and real adoption of AI tools without the corporate ties whose sole objective is collecting our information to train their own models.
Being able to run a high-performance model on your own local computer, without paying for a service that gets access to all your information, is the best way forward.
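To make that concrete, here is a minimal sketch of what local inference can look like when a model is served on your own machine with Ollama. The `deepseek-r1:7b` model tag and the default local host URL are assumptions for illustration; any locally served model works the same way:

```python
import json
import urllib.request

def build_request(prompt, model="deepseek-r1:7b",
                  host="http://localhost:11434"):
    """Build an HTTP request for a locally running Ollama server.

    Uses Ollama's /api/generate endpoint. Nothing leaves your machine:
    the request targets localhost only.
    """
    payload = json.dumps({
        "model": model,      # example tag; swap for any local model
        "prompt": prompt,
        "stream": False,     # return one complete response
    }).encode("utf-8")
    return urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# To actually query the model (requires `ollama serve` running locally):
# with urllib.request.urlopen(build_request("Why is the sky blue?")) as resp:
#     print(json.loads(resp.read())["response"])
```

The point of the sketch is the design, not the specific tool: the prompt and the response never touch a third-party server.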
Furthermore, it is the best time for Brazilian research and development teams to study the paper describing the Reinforcement Learning and numerical stabilization process developed by the DeepSeek team and apply it to their own projects. I have no doubt that Meta and OpenAI themselves are currently evaluating how to use the same strategies.
DeepSeek R1 doesn’t even need to be good enough to match the other Frontier Models. It has already accomplished more than any other approach could: opening the market’s eyes. I hope they stay open.
Science is made by sharing information in an open and collaborative way. It is through this exchange of results that we have any chance of advancing our own technological capabilities. Just as OpenAI built on Google Brain's work in the now famous paper Attention Is All You Need, DeepSeek used all the research to date to invent innovative solutions, despite its limited budget and computing power. Necessity is, after all, the mother of invention.
Additionally, smaller models become much more affordable and viable with this new development approach, especially for Edge AI.
Nvidia has gotten rich selling extremely well-designed shovels for a gold rush that perhaps we shouldn't even be participating in. That's the biggest benefit of an open source model with this capability and output.
The problem: this doesn't positively affect the bottom line.
You can also run the models locally and connect them to your Obsidian vault.